Search Results

Search found 4045 results on 162 pages for 'automatic maintenance'.


  • MYSQL how to sum rows with same key, then delete the duplicate rows

    - by Bhante-S
    What I have:

        key   data
        1       22
        1        5
        2        6
        3        1
        3       -3

    What I want:

        key   data
        1       27
        2        6
        3       -2

    I don't mind doing this with two or more queries, especially if they are simple; that makes for easier maintenance. Also the tables are fairly small (<2,000 records). The 'key' field is indexed and allows duplicates. Many thanks!
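    A minimal sketch of one two-query approach, assuming a table named `t` (a hypothetical name) with exactly the `key` and `data` columns shown; note `key` is a reserved word in MySQL, hence the backticks:

        -- 1) Build one summed row per key in a temporary table.
        CREATE TEMPORARY TABLE t_summed AS
            SELECT `key`, SUM(data) AS data
            FROM t
            GROUP BY `key`;

        -- 2) Swap the summed rows in for the originals.
        DELETE FROM t;
        INSERT INTO t (`key`, data)
            SELECT `key`, data FROM t_summed;

        DROP TEMPORARY TABLE t_summed;

    On tables under 2,000 rows the full DELETE/INSERT is cheap, and keeping it as two plain statements matches the "simple queries for easier maintenance" goal.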

    Read the article

  • PHP suddenly failed after IIS update

    - by James Hay
    All my application pools were stopped this morning when I got to work. I can restart them, but when I try to load the website the app pool crashes again.

    Update: I've looked in the GAC as the error below suggests, and the file is not there. How do I get it back?

    Update 2: I found a further error in the event log saying "The Module name FastCgiModule path C:\WINDOWS\System32\inetsrv\iisfcgi.dll returned an error from registration. The data is the error." So, following the information from http://forums.iis.net/t/1153937.aspx, I removed CGI and my sites are working again. This has fixed the initial problem, but now I don't have FastCGI, so I'm fairly sure that PHP will no longer be working (I don't have any PHP at the moment to test with).

    Original post: I'm getting this error in the event viewer:

        IISMANAGER_ERROR_LOADING_PROVIDER_TYPE
        IIS Manager could not load type 'Web.Management.PHP.PHPProvider, Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' for module provider 'PHP' that is declared in %windir%\system32\inetsrv\config\administration.config. Verify that the type is correct, and that the assembly that contains the module provider is in the Global Assembly Cache (GAC).
        Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' or one of its dependencies. The system cannot find the file specified.
        File name: 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d'
           at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
           at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.Type.GetType(String typeName, Boolean throwOnError)
           at Microsoft.Web.Management.Server.AdministrationModuleProvider.GetModuleProvider(String userName, String connectionName)
        WRN: Assembly binding logging is turned OFF.
        To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
        Note: There is some performance penalty associated with assembly bind failure logging.
        To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].
        Process: InetMgr
        Connection: CT211511\Administrator

    Everything was working fine last night when I left work; since the maintenance was done, it's all broken.
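    For the "now I don't have FastCGI" part: a hedged sketch of re-registering PHP with IIS's FastCGI module using appcmd (the C:\PHP path is an assumption; substitute your php-cgi.exe location, and verify the section names against your IIS version):

        REM Register php-cgi.exe as a FastCGI application (path is an assumption)
        %windir%\system32\inetsrv\appcmd set config /section:system.webServer/fastCgi ^
            /+[fullPath='C:\PHP\php-cgi.exe']

        REM Map *.php requests to that FastCGI application
        %windir%\system32\inetsrv\appcmd set config /section:system.webServer/handlers ^
            /+[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='C:\PHP\php-cgi.exe',resourceType='Either']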

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and are planning to turn our online server into a virtualized one, mainly for maintenance, backup and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. If possible, we would like the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Because of this, and given our hosting supplier's offerings, we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and as I understand it, the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian, so we can personalise it to our requirements.

    Proxmox's default installation creates an LVM partition for the VMs. Our doubts are about what the best partition structure would be, considering that:

    - we would like a mirror of the root partition that we could boot from if required (our provider supports booting the system from another partition via the control panel)
    - we would ideally like a partition that could be shared among the VM systems. We still don't know if this is possible directly with OpenVZ containers; otherwise we are considering sharing it via NFS on the host machine.
    - we want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine.

    With this, based on a Linux RAID of approx. 750GB, we are considering something like:

        ext3_1  /             (20GB)
        ext3_2  /bak_root     (20GB)  mostly unmounted, root partition sync
        LVM_1   /var/lib/vz   (390GB) partition for virtual images
        LVM_2   /shared_data  (30GB)
        LVM_3   /backups      (300GB) where all backups would be allocated

    Our initial tests with Proxmox seem to have issues with snapshot backups on a layout like this, perhaps because they cannot be done to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we're unsure whether we can do this within the four-primary-partition limit). Does this make more or less sense? Would it be mad, for example, to write the VMs' /var/log to an NFS-mounted partition (on the host system)? Are there any other, easier ways to mount host system partitions (or folders) in the VMs?
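    For what it's worth, the LVM pieces of that layout could be carved out roughly like this; a sketch only, which assumes the volume group is named "pve" (Proxmox's default) and that the lvcreate snapshot failure above means the VG had no free extents left:

        # Assumes a VG named "pve" on the RAID device; leave some space
        # unallocated, since snapshot-mode backups need free extents in the VG.
        lvcreate -L 390G -n data   pve    # /var/lib/vz, virtual images
        lvcreate -L 30G  -n shared pve    # /shared_data, exported to VMs e.g. via NFS
        lvcreate -L 300G -n backup pve    # /backups
        mkfs.ext3 /dev/pve/shared
        mkfs.ext3 /dev/pve/backup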

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am searching for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd with a moderate tolerance for setup work. I would prefer maintenance to be somewhat low once the system is set up, but I am willing to accept some tradeoffs.

    Existing equipment:

    - Router: Netgear WNDR3700 (gigabit)
    - Router: D-Link GamerLounge DGL-4300 (gigabit)
    - Switch: 16-port Trendnet green switch (gigabit)
    - Switch: 5-port Trendnet green (gigabit)
    - Computer: i7-950 office computer (gigabit ethernet)
    - Computer: Q6600 quad-core media center, hooked up to TV, records shows (gigabit ethernet)
    - Computer: Acer 1810T ultraportable laptop (gigabit and N ethernet)
    - NAS: Intel SS4200-E (gigabit)
    - External hard drive: 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files to friends and family easily

    Nice to have:

    - Backup of the terabytes of movie backups

    Intriguing possibilities:

    - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure whether the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to use RAID most effectively. Currently I simply back up all computers to a RAID 1 on the NAS box, which I figured could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am considering now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, and important files would also be backed up offsite by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • Ubuntu web server 11.10 ftp/server issue

    - by Nate
    I was wondering if I could get some help with FTP. At least I'm pretty sure it has to do with FTP, though it could be something else; I'm not 100% sure. Fair warning: I'm no Ubuntu expert, I'm pretty new. Anyway, I've built a web server to test PHP and whatnot for a site I'm building, and everything works: the PHP, the SQL, etc. By the way, I built this in VMware, so it's virtual and on the network, and I can access it from anywhere. I'm at a college right now, so yeah.

    The one problem I have is this. I go into the terminal and run ifconfig to find my IP, then go to a browser on a different machine and type that IP in. I get the "Index of /" page, where I can browse the website I'm making. I can click through folders, and files open when I click on them. But say I'm working on my desktop, open up an FTP client, and drag and drop something into the site, then go to the IP in the browser again and try to open it. I either get:

        Server error
        The website encountered an error while retrieving http://my_server_ip/phpinfo.php.
        It may be down for maintenance or configured incorrectly.
        Here are some suggestions: Reload this webpage later.

    or:

        Forbidden
        You don't have permission to access /html.html on this server.

    But if I create the same file on the server itself and try it: bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I was hoping I could get some help. By the way, I followed a tutorial on changing the Apache web root from /var/www to /home/"user"/www. I can't recall how I did that, but it's in place, and my FTP goes to the /home/"user"/www folder. Any and all help is appreciated. Like I said, I'm really new to this, but I enjoy building these servers and learning how they work, so this isn't a class project; it's just helping me test stuff for another class, and possibly other websites later on down the road. Thanks so much to anyone who decides to help. Nate.

    P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server.
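    The "Forbidden" error is consistent with FTP uploads landing with owner-only permissions, so Apache (running as www-data) can't read them. A hedged sketch of the usual fix, assuming the web root really is /home/user/www and "user" is the FTP login:

        # Give everyone read access to the files, and make directories traversable.
        sudo chmod -R a+r /home/user/www
        sudo find /home/user/www -type d -exec chmod a+rx {} \;
        # The home directory itself must be traversable too, or Apache
        # can't reach anything below it.
        sudo chmod a+x /home/user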

    Read the article

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it, but when rebooting the server I get an error message (below), after which I have to enter the root password and disable the /etc/fstab entry. So the mount of the LUKS partition does not persist across reboots. This is on RHEL 6, and I'm wondering what I could be missing; I want the LV to be mounted on reboot. Later I would want to replace the device name with its UUID.

    Error message on reboot:

        Give root password for maintenance
        (or type Control-D to continue):

    Here are the steps from the beginning:

        [root@rhel6 ~]# pvcreate /dev/sdb
          Physical volume "/dev/sdb" successfully created
        [root@rhel6 ~]# vgcreate vg01 /dev/sdb
          Volume group "vg01" successfully created
        [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
          Logical volume "lvol1" created
        [root@rhel6 ~]# lvdisplay
          --- Logical volume ---
          LV Name                /dev/vg01/lvol1
          VG Name                vg01
          LV UUID                nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                500.00 MiB
          Current LE             125
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0
        [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1

        WARNING!
        ========
        This will overwrite data on /dev/vg01/lvol1 irrevocably.

        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:
        [root@rhel6 ~]# mkdir /house
        [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
        Enter passphrase for /dev/vg01/lvol1:
        [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
        mke2fs 1.41.12 (17-May-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        127512 inodes, 509952 blocks
        25497 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67633152
        63 block groups
        8192 blocks per group, 8192 fragments per group
        2024 inodes per group
        Superblock backups stored on blocks:
                8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

        Writing inode tables: done
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 21 mounts or
        180 days, whichever comes first. Use tune2fs -c or -i to override.
        [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    PS: here I have successfully mounted:

        [root@rhel6 ~]# ls /house/
        lost+found

    My /etc/fstab entry is as follows:

        /dev/mapper/house /house ext4 defaults 1 2

    And the /etc/crypttab entry is as follows:

        house /dev/vg01/lvol1 password

    Remounting works fine too:

        [root@rhel6 ~]# mount -o remount /house
        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# umount /house/
        [root@rhel6 ~]# mount -a      -> SUCCESSFUL AGAIN
        [root@rhel6 ~]# ls /house/
        lost+found

    Please let me know if I am missing anything here. Thanks in advance.
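    The crypttab line looks like the culprit: the third field is a key-file path, so a literal "password" there makes the boot scripts look for a key file named "password", fail to open the volume, and drop to the maintenance shell when fstab can't mount /dev/mapper/house. A hedged sketch of the corrected files (using "none" makes boot prompt for the passphrase):

        # /etc/crypttab -- "none" means "prompt for the passphrase at boot"
        house  /dev/vg01/lvol1  none

        # /etc/fstab -- unchanged, mounts the opened mapping
        /dev/mapper/house  /house  ext4  defaults  1 2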

    Read the article

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time on a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this?

    top:

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem:  2042776 total, 1347916 used,  694860 free, 249396 buffers
        KiB Swap: 3976080 total,   30552 used, 3945528 free, 574164 cached
          PID USER  PR NI VIRT RES  SHR  S %CPU %MEM   TIME+  COMMAND
        17445 bind  20  0 244m 42m 3124 S 99.4  2.2 2345:03  named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
        65869 QUERY
        ++ Incoming Queries ++
        31809 A  241 NS  3 CNAME  27455 SOA  276 PTR  123 MX  462 TXT  5400 AAAA  7 A6  1 DS  14 DNSKEY  15 SPF  55 AXFR  8 ANY
        ++ Outgoing Queries ++
        [View: internal]
        22206 A  509 NS  10 SOA  25 PTR  12 MX  524 TXT  4851 AAAA  62 DNSKEY  19 SPF  3157 DLV
        [View: external]
        87 A  2 NS  80 AAAA  120 DNSKEY  7 DLV
        [View: _bind]
        ++ Name Server Statistics ++
        65869 IPv4 requests received
        27670 requests with EDNS(0) received
        112 TCP requests received
        65652 responses sent
        20 truncated responses sent
        27670 responses with EDNS(0) sent
        62920 queries resulted in successful answer
        37117 queries resulted in authoritative answer
        28482 queries resulted in non authoritative answer
        7 queries resulted in referral answer
        591 queries resulted in nxrrset
        53 queries resulted in SERVFAIL
        2081 queries resulted in NXDOMAIN
        14530 queries caused recursion
        162 duplicate queries received
        55 requested transfers completed
        ++ Zone Maintenance Statistics ++
        109536 IPv4 notifies sent
        ++ Resolver Statistics ++
        [Common]
        [View: internal]
        29362 IPv4 queries sent
        2013 IPv6 queries sent
        28531 IPv4 responses received
        4209 NXDOMAIN received
        6 SERVFAIL received
        31 FORMERR received
        32 EDNS(0) query failures
        3359 query retries
        836 query timeouts
        5348 IPv4 NS address fetches
        3271 IPv6 NS address fetches
        83 IPv4 NS address fetch failed
        2779 IPv6 NS address fetch failed
        17421 DNSSEC validation attempted
        12731 DNSSEC validation succeeded
        4690 DNSSEC NX validation succeeded
        21104 queries with RTT 10-100ms
        7418 queries with RTT 100-500ms
        3 queries with RTT 500-800ms
        1 queries with RTT 800-1600ms
        [View: external]
        192 IPv4 queries sent
        104 IPv6 queries sent
        192 IPv4 responses received
        2 NXDOMAIN received
        104 query retries
        44 IPv4 NS address fetches
        44 IPv6 NS address fetches
        1 IPv4 NS address fetch failed
        1 IPv6 NS address fetch failed
        4 DNSSEC validation attempted
        3 DNSSEC validation succeeded
        1 DNSSEC NX validation succeeded
        152 queries with RTT 10-100ms
        40 queries with RTT 100-500ms
        [View: _bind]
        ++ Cache DB RRsets ++
        [View: internal (Cache: internal)]
        2007 A  652 NS  131 CNAME  1 MX  32 TXT  421 AAAA  28 DS  244 RRSIG  110 NSEC  3 DNSKEY  2 !A  2 !TXT  89 !AAAA  2 !SPF  14 !DLV  148 NXDOMAIN
        [View: external (Cache: external)]
        55 A  12 NS  34 AAAA  2 DS  10 RRSIG  1 DNSKEY
        [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
        82958 UDP/IPv4 sockets opened
        2118 UDP/IPv6 sockets opened
        4 TCP/IPv4 sockets opened
        1 TCP/IPv6 sockets opened
        82956 UDP/IPv4 sockets closed
        2117 UDP/IPv6 sockets closed
        58 TCP/IPv4 sockets closed
        15 UDP/IPv4 socket bind failures
        2117 UDP/IPv6 socket connect failures
        29554 UDP/IPv4 connections established
        59 TCP/IPv4 connections accepted
        2117 UDP/IPv6 send errors
        5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)
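    A few hedged first steps for seeing what named is actually busy with; all standard BIND 9 tooling, nothing specific to this configuration:

        rndc querylog              # toggle query logging to syslog, to watch the query stream
        rndc recursing             # dump in-progress recursive queries to named.recursing
        rndc status                # recursive client counts, worker threads, etc.
        top -H -p $(pgrep named)   # per-thread CPU, to spot a spinning worker thread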

    Read the article

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a Rails app (let's call it myapp) running at www.myapp.com, and I want to add a WordPress blog at www.myapp.com/blog. The web server for the Rails app is thin (see the upstream block); WordPress runs with php-fastcgi. The Rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

        2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

        upstream myapp {
            server unix:/tmp/thin_myapp.0.sock;
            server unix:/tmp/thin_myapp.1.sock;
            server unix:/tmp/thin_myapp2.sock;
        }

        server {
            listen 80;
            server_name www.myapp.com;
            client_max_body_size 20M;
            access_log /home/myapp/myapp/log/access.log;
            error_log /home/myapp/myapp/log/error.log error;
            root /home/myapp/myapp/public;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;

                # Index HTML Files
                if (-f $document_root/cache/$uri/index.html) {
                    rewrite (.*) /cache/$1/index.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://myapp;
                    break;
                }
                # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
            }

            location /blog/ {
                root /var/www/wordpress;
                fastcgi_index index.php;
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /blog/index.php?q=$1 last;
                }
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
                fastcgi_pass localhost:9000; # port to FastCGI
            }
        }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I can't even reach FastCGI with telnet:

        $> telnet 127.0.0.1 9000
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    So it's not running.
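    The telnet test shows nothing is listening on 127.0.0.1:9000, so nginx has nothing to hand the request to. One way to fill that gap (a sketch assuming php-fpm; package and path names vary by distro) is a pool bound to that port:

        ; /etc/php5/fpm/pool.d/www.conf  (path is an assumption)
        [www]
        listen = 127.0.0.1:9000
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    Then restart the service (e.g. "service php5-fpm restart") and re-run the telnet test; it should connect before nginx is asked to.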

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and to deployments in general. As background for my current level of knowledge ops-wise, my main role is front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our goals:

    - Super easy deploys (we want to push a button to update production)
    - Automated deploys to test environments (using Jenkins)
    - Ease of maintenance (we have an app to write; we don't want to spend our time fiddling with production issues)
    - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we started forming a plan using Asgard, rolling our package into a JAR and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt the latter was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt that Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe we could automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple.

    What I'm wondering is: what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around. Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike? If you considered Elastic Beanstalk but don't use it, what do you use instead, and why not Elastic Beanstalk? What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other?

    Read the article

  • Centos 5.5 [Read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server running CentOS 5.5, hosted by a Japanese company called Sakura. Since yesterday, an SSH connection could not be established. I contacted the support center, who told me to restart the VS from the control panel. After restarting, I got the messages below:

        Connected to domain wwwxxxxxx.sakura.ne.jp
        Escape character is ^]
        [ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        @@cat: /proc/self/attr/current: Invalid argument
        Welcome to CentOS
        Starting udev: @[ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        [FAILED]

        *** An error occurred during the file system check.
        *** Dropping you to a shell; the system will reboot
        *** when you leave the shell.
        *** Warning -- SELinux is active
        *** Disabling security enforcement for system recovery.
        *** Run 'setenforce 1' to reenable.
        /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system
        Give root password for maintenance
        (or type Control-D to continue):
        bash: cannot set terminal process group (-1): Inappropriate ioctl for device
        bash: no job control in this shell
        bash: cannot create temp file for here-document: Read-only file system
        (the here-document message repeats ten times)
        (Repair filesystem) 1 # setenforce 1
        setenforce: SELinux is disabled
        (Repair filesystem) 2 # echo 1
        (Repair filesystem) 4 # /etc/init.d/sshd status
        openssh-daemon is stopped
        (Repair filesystem) 5 # /etc/init.d/sshd start
        Starting sshd: NET: Registered protocol family 10
        lo: Disabled Privacy Extensions
        touch: cannot touch `/var/lock/subsys/sshd': Read-only file system
        (Repair filesystem) 6 # sudo /etc/init.d/sshd start
        sudo: sorry, you must have a tty to run sudo
        (Repair filesystem) 7 #

    I have four sites in production and I need to restart the server quickly (SSH + httpd, ...). Thank you for your time.
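    The boot messages already name the fix: at the maintenance prompt, enter the root password and run the forced check by hand, while / is still mounted read-only (a sketch; double-check the device against your /etc/fstab):

        # Run WITHOUT -a/-p so it can ask about the orphaned-inode repairs:
        fsck.ext4 /dev/vda3
        # Answer "y" to the proposed fixes, then:
        reboot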

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the OST file (which was nearly 2GB), turning Cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until... after fiddling with it for a while, I got this absurd error message dialog at startup, and Outlook exits after I click OK:

        Cannot start Microsoft Office Outlook. Cannot open the Outlook window.
        The set of folders cannot be opened. Microsoft Exchange is not available.
        Either there are network problems or the Exchange server is down for maintenance.

    (I'm not sure I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed five or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. When I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server; they more than filled Process Explorer's dialog box, so I'm guessing at least 20, probably more.

    The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscovery on it. It doesn't make any difference whether I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be.

    This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution rather than assume it's a random glitch.
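    On the autodiscover.example.com certificate complaint specifically: an alternative to the hosts-file hack is the Outlook registry switch that reportedly skips the HTTPS root-domain lookup. A hedged sketch (the 12.0 key is Outlook 2007; verify the value name against current Microsoft documentation before relying on it):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\AutoDiscover]
        "ExcludeHttpsRootDomain"=dword:00000001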

    Read the article

  • Legacy VB6 application under Win7 SQL error

    - by Shial
    We have a rather unfortunate legacy application at work. Written originally in VB6, it predates anybody in our IT department by at least five years. We have a contracted developer for ongoing maintenance, and where he can, he rewrites sections in .NET code (I'm not sure about his techniques here; this is a side job to his regular work as an IBM engineer). The application works fine (such as it is) under Windows XP. We have only a couple of Windows 7 machines, mainly for testing, and on them this application seems to run into a wall: things like the background not loading, and SQL errors. This happens even when running as administrator.

    Running an SQL trace from the ODBC control panel shows several interesting things. The application makes a connection to the database successfully at first, where it runs a query to determine whether it is running the correct version. This query works fine:

        558-1af0 ENTER SQLExecDirectW
            HSTMT   0x020D7548
            WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD  115
        BMS 558-1af0 EXIT SQLExecDirectW with return code 1 (SQL_SUCCESS_WITH_INFO)
            HSTMT   0x020D7548
            WCHAR * 0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD  115

    It then seems to drop its connection and can't find the ODBC connection, despite the fact that it's connecting to the same DB. From the trace it looks like it configures the connection, then starts firing off SQLFreeStmt calls to unbind and close out, and once you're in the application there is no connection:

        558-1af0 ENTER SQLFreeStmt
            HSTMT   0x020D7548
            UWORD   2 <SQL_UNBIND>
        BMS 558-1af0 EXIT SQLFreeStmt with return code 0 (SQL_SUCCESS)
            HSTMT   0x020D7548
            UWORD   2 <SQL_UNBIND>

    Then this happens when I try to do something that pulls data:

        558-1af0 ENTER SQLDriverConnectW
            HDBC    0x020DDA00
            HWND    0x00000000
            WCHAR * 0x73EF8634 [  -3] "******\ 0"
            SWORD   -3
            WCHAR * 0x73EF8634
            SWORD   -3
            SWORD * 0x00000000
            UWORD   0 <SQL_DRIVER_NOPROMPT>
        BMS 558-1af0 EXIT SQLDriverConnectW with return code -1 (SQL_ERROR)
            HDBC    0x020DDA00
            HWND    0x00000000
            WCHAR * 0x73EF8634 [  -3] "******\ 0"
            SWORD   -3
            WCHAR * 0x73EF8634
            SWORD   -3
            SWORD * 0x00000000
            UWORD   0 <SQL_DRIVER_NOPROMPT>
            DIAG [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0)

    Nearly all of my searching on this issue turns up programming problems where the connection string is wrong. The only thing that is different in this scenario, though, is Windows 7; I know the connection string is fine, since it works on the XP machines, and the VB components are supposed to still be functional under Win7. My computer runs 32-bit Win7 and my VP's runs 64-bit Win7, and both have the same problem, so that can be ruled out. I have already tried reinstalling the SQL Native Client, the VB runtime, and the application itself. Hopefully I can find a solution and not have to resort to using the XP VM.
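    One standard gotcha worth ruling out on the 64-bit Windows 7 machine (though it wouldn't explain the 32-bit one): a 32-bit VB6 app only sees DSNs created in the 32-bit ODBC Administrator, not the 64-bit one Control Panel opens by default:

        REM On 64-bit Windows, create/inspect the DSN with the 32-bit tool:
        C:\Windows\SysWOW64\odbcad32.exe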

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post I outline what I want from such a system, the 'why' of it to some limited extent, and finally some rudiments of how I'm looking to go about it. I'm mostly a developer, with just some sysadmin familiarity, so please excuse, correct, and advise on any ignorance that comes across in the following ;-)

    It will serve the following goals to start with:

    1. NAS (looking at using ZFS)
    2. Source control repo, e.g. a Git server
    3. Database, e.g. a MySQL server
    4. Continuous integration, e.g. a Hudson server
    5. Other stuff as and when it comes up, e.g. RabbitMQ etc.
    6. A development sandbox to play around with new stuff

    I want to achieve as high an uptime for 2-5 as possible. They should run as independent services and with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs; then maybe the NAS can be the Dom0 and 6 can be another DomU. The user of this would be mostly me. I can see 2-4 sometimes being used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup: ideally I'd like to automate it through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN or on the go.

    My queries are:

    1. Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
    2. Not at present, but in the future I might look at adding audio/media servers to the mix. Would that affect the answer to 1?
    3. I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use?
    4. I had a look at the SheevaPlug and wondered about splitting the NAS off onto it on its own, but ruled that out preliminarily due to its reported heating issues. Is it something I should still consider?

    Thanks in advance.

    Read the article

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:

    - Expandable storage. I want to be able to add extra disc as we go along, with minimal maintenance required. Currently we've got about 3Tb of files we need to host, and that's likely to grow by another Tb every 6-12 months based on recent history. I need to be able to add additional disc with minimal pain.
    - Needs to store all the media (i.e. photos, video, music) we have, and run services to serve it to the various devices in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z)
    - BitTorrent client
    - ssh, NFS, Samba access
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS' batteries)
    - FOSS software
    - A modern distributed version control system running on the box, such as Mercurial

    Stuff I'd like to have on the server, but can live without:

    - PVR capability, so I could record TV to the box
    - Web server. We currently run a small web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity
    - Nagios + mrtg

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found:

    - I've got most experience with various Linux distros, but am happy to use another Unix
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's
    - btrfs, while looking very good, doesn't seem ready for prime time yet
    - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data

    At the moment I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to add new disc fairly regularly to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance.
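    On the "add new disc to RAID-Z" concern: you can't grow an existing raidz vdev by one disk, but you can grow the pool by adding another whole vdev, which is the usual expansion pattern. A sketch with made-up pool and device names:

        # Initial pool: one 4-disk raidz vdev (names are hypothetical)
        zpool create tank raidz da0 da1 da2 da3
        # Later: add a second raidz vdev; the pool then stripes across both
        zpool add tank raidz da4 da5 da6 da7
        zpool status tank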

    Read the article

  • Corosync :: Restarting some resources after Lan connectivity issue

    - by moebius_eye
    I am currently looking into corosync to build a two-node cluster. I've got it working fine, and it does what I want it to do, which is:

    - Lost connectivity between the two nodes gives the first node, '10node', both failover WAN IPs (i.e. resources WanCluster100 and WanCluster101).
    - '11node' does nothing. It "thinks" it still has its failover WAN IP (WanCluster101).

    But it doesn't do this:

    - '11node' should restart the WanCluster101 resource when connectivity with the other node is back.

    This is to prevent a condition where 10node simply dies (and thus does not get 11node's failover WAN IP), resulting in a situation where neither node has 10node's failover IP, because 10node is down and 11node has "given back" its failover WAN IP. Here's the current configuration I'm working on:

        node 10sch \
                attributes standby="off"
        node 11sch \
                attributes standby="off"
        primitive LanCluster100 ocf:heartbeat:IPaddr2 \
                params ip="172.25.0.100" cidr_netmask="32" nic="eth3" \
                op monitor interval="10s" \
                meta is-managed="true" target-role="Started"
        primitive LanCluster101 ocf:heartbeat:IPaddr2 \
                params ip="172.25.0.101" cidr_netmask="32" nic="eth3" \
                op monitor interval="10s" \
                meta is-managed="true" target-role="Started"
        primitive Ping100 ocf:pacemaker:ping \
                params host_list="192.0.2.1" multiplier="500" dampen="15s" \
                op monitor interval="5s" \
                meta target-role="Started"
        primitive Ping101 ocf:pacemaker:ping \
                params host_list="192.0.2.1" multiplier="500" dampen="15s" \
                op monitor interval="5s" \
                meta target-role="Started"
        primitive WanCluster100 ocf:heartbeat:IPaddr2 \
                params ip="192.0.2.100" cidr_netmask="32" nic="eth2" \
                op monitor interval="10s" \
                meta target-role="Started"
        primitive WanCluster101 ocf:heartbeat:IPaddr2 \
                params ip="192.0.2.101" cidr_netmask="32" nic="eth2" \
                op monitor interval="10s" \
                meta target-role="Started"
        primitive Website0 ocf:heartbeat:apache \
                params configfile="/etc/apache2/apache2.conf" options="-DSSL" \
                operations $id="Website-one" \
                op start interval="0" timeout="40" \
                op stop interval="0" timeout="60" \
                op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
                meta target-role="Started"
        primitive Website1 ocf:heartbeat:apache \
                params configfile="/etc/apache2/apache2.conf.1" options="-DSSL" \
                operations $id="Website-two" \
                op start interval="0" timeout="40" \
                op stop interval="0" timeout="60" \
                op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
                meta target-role="Started"
        group All100 WanCluster100 LanCluster100
        group All101 WanCluster101 LanCluster101
        location AlwaysPing100WithNode10 Ping100 \
                rule $id="AlWaysPing100WithNode10-rule" inf: #uname eq 10sch
        location AlwaysPing101WithNode11 Ping101 \
                rule $id="AlWaysPing101WithNode11-rule" inf: #uname eq 11sch
        location NeverLan100WithNode11 LanCluster100 \
                rule $id="RAND1083308" -inf: #uname eq 11sch
        location NeverPing100WithNode11 Ping100 \
                rule $id="NeverPing100WithNode11-rule" -inf: #uname eq 11sch
        location NeverPing101WithNode10 Ping101 \
                rule $id="NeverPing101WithNode10-rule" -inf: #uname eq 10sch
        location Website0NeedsConnectivity Website0 \
                rule $id="Website0NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        location Website1NeedsConnectivity Website1 \
                rule $id="Website1NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        colocation Never -inf: LanCluster101 LanCluster100
        colocation Never2 -inf: WanCluster100 LanCluster101
        colocation NeverBothWebsitesTogether -inf: Website0 Website1
        property $id="cib-bootstrap-options" \
                dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
                cluster-infrastructure="openais" \
                expected-quorum-votes="2" \
                no-quorum-policy="ignore" \
                stonith-enabled="false" \
                last-lrm-refresh="1408954702" \
                maintenance-mode="false"
        rsc_defaults $id="rsc-options" \
                resource-stickiness="100" \
                migration-threshold="3"

    I also have a less important question concerning this line:

        colocation NeverBothLans -inf: LanCluster101 LanCluster100

    How do I tell it that this colocation only applies to '11node'?

    Read the article

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their Solaris 10 x86 server, pulled the power inputs, moved it, and now it won't start properly. It boots and then presents a prompt that lets you log in; this appears to be the single-user milestone (or equivalent). Digging into it, I think SMF isn't permitting the system to go multi-user.

    SMF was generating a ton of errors on autofs; after some fooling with it, I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed/deleted/recreated or something, but I don't know what the actual issue is. By "generate errors", I mean that every second I get a message on the console saying "Method or service exit timed out. Killing contract <#." This makes interacting with the computer difficult.

    Running svcs -xv shows the service as "enabled", in state "disabled", reason "Start method is running". Fooling with svcadm on the service does nothing, except confirm that the service is not in a maintenance state. Logs in /lib/svc/log/$SERVICE just show this loop happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted and immediately stopped, with no further entries. Note that system-log isn't starting because it depends on autofs, so I have no syslog or dmesg.

    Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (the dependency SMF was using as an excuse to prevent nfs/client from "starting"; it claimed rpc/gss was "undefined", which is incorrect, since this all used to work. I re-enabled it with inetadm, but inetd still won't start properly). But I think the problem is SMF in general, not the individual services. Doing a restore_repository from the "manifest_import" snapshot does nothing to improve, or even detectably change, the situation. I didn't use a boot backup because the last boot(s) were not useful.

    I have told the customer that since the valuable data directories are on a separate file system (which fscks as clean, so it is intact), we could just reinstall Solaris 10 on the / partition. But that seems like an awfully Windows-like solution to inflict on this problem. So: any ideas what piece is broken and how I might fix it?
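    A few hedged prods that sometimes shake SMF loose before resorting to reinstalling (standard Solaris 10 commands; nothing here is specific to whatever is actually damaged in this repository):

        # Boot minimal, then walk the milestones up by hand
        # (add -m milestone=none to the kernel line in GRUB):
        svcadm milestone all                         # ask SMF to bring everything up
        svcs -d svc:/network/nfs/client:default      # what it is actually waiting on
        svcadm clear svc:/network/rpc/gss:default    # clear maintenance state, if set
        svcadm disable -t autofs                     # take autofs out of the loop for this boot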

    Read the article

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via Boot Camp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to them. We would like to preserve the integrity of the build we initially put onto the machines without handcuffing users, and especially without using Mac's Parental Controls software (we've had nothing but bad experiences with it). We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear whether people who have used DeepFreeze or similar software have any advice or tips. To get things started, I will post my own pros and cons.

    Pros:

    - The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.)
    - Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided.
    - There are good options that allow you to create storage spaces, either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option in itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.)

    Cons:

    - Any time we actually need to make a change (upgrade basic software, add a printer or an AirPort permanently, add new software), the process is a bit more complex: reboot into a special mode (thawed state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot.
    - Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted; the file is gone.

    These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

    Read the article

  • Looking for personal scheduling software / todo list with rather particular requirements

    - by Cthulhu
    I've been scouring the web for a couple of (my boss') hours, looking for a piece of software that can organize my tasks in two ways. First, I have a list of bullet points / todo items I can do at any given time: solve issue X, ask X about Y, write documentation about Z, etcetera. Second, I have a number of running projects I'd like to organize better, as in scheduling them for a certain part of a day of the week. Ideally (I think), my day would be organized as 50% spent on projects and 50% on the other small things.

    Now, I don't like most calendar applications (such as Outlook and friends); their UI is too 'official', and in my experience it's not really easy to move stuff around in them. I don't like most todo lists either: too static. I like new, fast and hip software. I've looked at GTD versions of TiddlyWiki, and I like mGSD for one particular feature: you can make lists of tasks and give each one of three statuses: Now (nothing required, you can do it right away), Waiting (you need someone or something before you can work on it), or the most gratifying of all, Done. I like that feature because it's a simple todo list that indicates more accurately which things you can do right now and which depend on someone else. Anyway, that's just a small aspect of that program; I can't find a particularly good use for most of the other things in it.

    If there's something like that (maybe something that works even snappier, with a cleaner UI), combined with an easy-to-use bit of scheduling software (optionally separated into two applications, but preferably not), I think I'd like that. Besides something like that, I also use several instances of Trac to monitor tasks and bugs for the various clients and projects I have to serve, and TaskCoach to monitor the amount of time I spend on each task / each client. An easy, low-maintenance time-tracking software would be neat too.

    Of course, the software has to be free to use. I don't like shareware, trials, or limited software. I could develop my own too, but I'm lazy like that, and there's a dozen other projects I'd like to do in my free time (none of which I actually do). Edit: I like David Seah's Printable CEO stuff; if something like that (with some video-game-style instant achievement / gratification) exists in software, it'd be awesome.

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway. Context: we have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO or scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or execute task A on workstation group B, either on a schedule or on demand. Kicking the users off their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway.

    Question: what's the best way to do this that is robust enough that, after setup, I could hand it to beginner support people (read: people who are phobic of the command line and get confused by GUI interfaces more complicated than Firefox)? I'm a competent programmer, and if there is a robust set of tools or a framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote workstation administration doable by beginners, I have yet to find it.

    For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked. "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told. "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI."

    What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scripts, but those tend to break down and don't easily support running on customizable groups. I've played a little with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (its ability to group computers into a tree/node structure is really nice, though). I've used an older version of Altiris, and while it does a lot of what I want, its interface is awful, it's slow, it crashes a lot, and it's probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped and closed-source (not a deal breaker, but not ideal), and I get the impression that support and reliability are lacking.
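    For what it's worth, the built-in Windows answer to the "run X on workstation Y / group B" core is PowerShell remoting, which a simple GUI could be wrapped around. A minimal sketch; it assumes WinRM is enabled on the targets (doable via GPO), the ActiveDirectory module is installed on the dispatcher, and the computer names and tool path are made up:

        # One-off: run a silent program on a single workstation
        Invoke-Command -ComputerName WKS-0042 -ScriptBlock { & 'C:\Tools\TaskX.exe' }

        # Same task against an AD-derived group, with per-machine results back
        $targets = Get-ADComputer -Filter 'Name -like "LAB-*"' |
                   Select-Object -ExpandProperty Name
        Invoke-Command -ComputerName $targets -ScriptBlock {
            & 'C:\Tools\TaskX.exe'
            "$env:COMPUTERNAME exited with $LASTEXITCODE"
        }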

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com

        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;foo.example.com.              IN      A

        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE  rcvd: 31

    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point - DNS is supposed to be resilient.

    Now the odd part: "1.2.3.4" had been serving requests for example.com just fine for the last 4-5 days. This morning I get a phone call that it's non-responsive. After investigating, I see the message above: SERVFAIL. I looked into the zone file and saw the following:

        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (

    I wondered whether at this point the nameserver thought it was not authoritative for example.com, so I adjusted it to the following:

        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (

    After that, it started responding again to all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3?

    Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any further detail to make things clearer.

    One more thing to note: I have other domains that I'm authoritative for whose SOA still says ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they stop as well and I have to change their SOA to ns3.example.com? Is this only required because ns1 and ns2 are currently offline?
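
    For what it's worth, one classic cause that fits this timeline is the SOA expire timer: once a slave has been unable to refresh from its masters for longer than the zone's expire value, BIND discards the zone and answers SERVFAIL until a transfer succeeds. A few diagnostic commands that would confirm or rule that out (BIND 9 assumed; the hostnames are the fictitious ones from the question):

        # Compare serials between master and slave (they should match):
        dig @ns1.example.com example.com SOA +short
        dig @1.2.3.4 example.com SOA +short +norec

        # On the slave, grep the logs for "expired" or transfer failures,
        # then force a fresh transfer once a master is reachable again:
        rndc retransfer example.com

    If that's what happened, the SOA MNAME edit was probably incidental - reloading the zone is what brought it back - and raising the expire value (or not leaving every master down for days) would be the real prevention.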

    Read the article

  • What hardware-to-VM ratio for build-server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob wrt. server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things to me.

    Now to my problem: we have two (physical) build servers, one master, one slave, running Jenkins to do daily tasks and build (Visual C++) release packages for our software. As such these machines are critical to our company: we do lots of releases, and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they don't hold any data as such - it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.

    And here begin my questions: Should I go for one VM per hardware box, or for a setup where a single piece of hardware runs multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or does it? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so running more than one build node per physical host doesn't seem to make much sense either - or does it?

    Wrt. hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions with no multi-k$ maintenance costs per year.

    Read the article

  • How to set up proxy caching with Nginx and Passenger

    - by tiny
    I use Nginx and Passenger for my Rails application, and I want to use the proxy cache to cache my pages. However, every request goes straight to the Rails application, and I don't know what is wrong with my configuration. Below is my configuration:

        user www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_max_pool_size 6;
            passenger_max_instances_per_app 1;
            passenger_pool_idle_time 0;
            rails_spawn_method conservative;

            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 512;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_http_version 1.0;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

            proxy_cache_path /var/www/cache/webapp levels=1:2 keys_zone=webapp:8m max_size=1000m inactive=600m;

            include vhosts/*.conf;
            include /opt/nginx/conf/sites-enabled/*;
            root /var/www;
        }

        server {
            listen 127.0.0.1:3008;
            server_name localhost;
            root /var/www/yoolk_web_app/public;   # <--- be sure to point to 'public'!
            passenger_enabled on;
            rails_env development;
            passenger_use_global_queue on;
        }

        server {
            listen 80;
            server_name webpage.dev;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            error_page 503 http://$host/maintenance.html;

            location ~* (css|js|png|jpe?g|gif|ico)$ {
                root /var/www/web_app/public;
                expires max;
            }

            location / {
                proxy_pass http://127.0.0.1:3008/;
                proxy_cache webapp;
                proxy_cache_valid 200 10m;
            }

            #More Location
        }
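
    One thing worth checking first - a common gotcha rather than a definitive diagnosis: Rails applications typically send Set-Cookie and Cache-Control headers on every response, and nginx will not store a response carrying those headers, so proxy_cache stays empty and every request falls through to Passenger. A sketch of the location / block with those headers ignored; note this assumes nginx 0.8.44+ (where proxy_ignore_headers learned Set-Cookie), and forcing caching like this is only safe for pages without per-user content:

        location / {
            proxy_pass http://127.0.0.1:3008/;
            proxy_cache webapp;
            proxy_cache_valid 200 10m;
            # Rails sends Set-Cookie / Cache-Control on nearly every response,
            # which makes nginx treat the reply as uncacheable.
            proxy_ignore_headers Set-Cookie Cache-Control Expires;
            proxy_hide_header Set-Cookie;
            # Exposes HIT/MISS so you can verify the cache is being used.
            add_header X-Cache-Status $upstream_cache_status;
        }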

    Read the article
