Search Results

Search found 23664 results on 947 pages for 'quirky update'.


  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

        Date: Thu, 06 May 2010 23:04:33 UTC
        Label: PPL
        Origin: PPL
        Suite: ppl
        MD5Sum:
         ebec3527ebc8351468b2ef8796c19855 37325 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Release
        SHA1:
         a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
         da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
        SHA256:
         dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
         e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. Its configuration file looks like:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu hardy-security";
            "Ubuntu hardy-updates";
            "PPL ppl";
        };
        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //      "vim";
        //      "libc6";
        //      "libc6-dev";
        //      "libc6-i686";
        };
        // Send email to this address for problems or package upgrades.
        // If empty or unset then no email is sent; make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed, or anything that provides /usr/bin/mail.
        //Unattended-Upgrade::Mail "root@localhost";

    Yet when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
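
    A hedged debugging sketch (assuming the archive is already listed in sources.list): unattended-upgrades matches each Allowed-Origins pair against the Origin and Suite/Archive labels apt reports for the repository, so first check what apt actually calls your archive, then run a verbose dry run:

        apt-cache policy                          # look for a line like "release o=PPL,a=ppl"
        sudo unattended-upgrade --dry-run --debug

    If the o=/a= labels differ from "PPL ppl", adjust the Allowed-Origins entry to match; an unsigned Release file can also make apt refuse the archive for automatic upgrades.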

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as:

        mount point /      on ext4 (46 GB)
        mount point /home  on jfs  (63 GB)
        swap               3 GB

    I left the machine overnight to do some task, without AC power. The next morning I found it on standby, the task completed, but the filesystem was not reachable; it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs to ext4. Can I do this without losing data and without having to place the data in a temporary location until the conversion is done? Sorry to mention it, but I recall that back in the Windows days we could change FAT16 to FAT32, or FAT32 to NTFS, without losing data. I hope something similar is available on Linux.

    Update: the /home filesystem was xfs, not jfs, and it seems there is a bug with this filesystem; for some reason I had to re-install the OS twice until I ended up with ext4 for the entire /. As a conclusion, however, it seems there is no way to do an in-place conversion.
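
    For the record, the stock tools have no in-place converter between xfs/jfs and ext4, so the usual route is copy out, re-create, copy back. A minimal sketch, assuming /home lives on /dev/sda2 and a spare disk is mounted at /mnt/backup (device names are hypothetical):

        sudo rsync -aAXH /home/ /mnt/backup/home/   # copy out, preserving ACLs/xattrs/hardlinks
        sudo umount /home
        sudo mkfs.ext4 /dev/sda2                    # destroys the old filesystem
        sudo mount /dev/sda2 /home
        sudo rsync -aAXH /mnt/backup/home/ /home/   # copy back

    Remember to change the /home entry in /etc/fstab to ext4 afterwards.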

    Read the article

  • How to reset the postgres super user password on mac os x

    - by Andrew Barinov
    I installed Postgres on my Mac running 10.6.8 and I would like to reset the password for the postgres user (I believe this is the superuser password) and then restart it. None of the directions I found work, I think because my user name is not recognized by PG as having authority to change the password. (I am on the admin account of my Mac.) Here is what I tried:

        Larson-2:~ larson$ psql -U postgres
        Password for user postgres:
        psql (9.0.4, server 9.1.2)
        WARNING: psql version 9.0, server version 9.1. Some psql features might not work.
        Type "help" for help.
        postgres=# ALTER USER postgres with password 'mypassword'
        postgres-# \q

    and for the restart I did:

        Larson-2:~ larson$ su postgres -c 'pg_ctl -D /opt/local/var/db/postgresql84/defaultdb/ restart >

    which didn't work: the password remained the same as it was before. Can someone provide directions for doing this, and for making sure it's recognized by PG?

    Update: I went ahead and edited the pg_hba.conf file located in /Library/PostgreSQL/9.1/data and set the settings as follows:

        # TYPE  DATABASE  USER  ADDRESS       METHOD
        # "local" is for Unix domain socket connections only
        local   all       all                 trust
        # IPv4 local connections:
        host    all       all   127.0.0.1/32  trust
        # IPv6 local connections:
        host    all       all   ::1/128       trust

    However, like before, the password stayed the same after I changed it. I am not sure what further steps to take from here.
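
    One observation worth checking: the prompt changing from postgres=# to postgres-# means psql was still waiting for a statement terminator, so the ALTER USER above was never executed; \q then discards it. With the trust entries in place, this sequence should take (note the semicolon):

        psql -U postgres
        postgres=# ALTER USER postgres WITH PASSWORD 'mypassword';
        postgres=# \q

    Also, a hedged guess from the paths: two installations may be present (a 9.1 server under /Library/PostgreSQL/9.1 and a MacPorts-style 8.4 tree under /opt/local), so make sure the pg_hba.conf you edit and the pg_ctl you run belong to the server that is actually listening, and reload it after editing pg_hba.conf.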

    Read the article

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup and I plan to offer free non-commercial hosting (in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad since, in theory, I could allow a user to run just one app on his webspace, but even in that case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the ability to automatically map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, i.e. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. Wished-for behavior with the new setup: a user can add a new folder with a new app and it runs on lighttpd + FastCGI without any extra configuration of the web server.
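
    One workaround, sketched here rather than a Passenger feature: generate the passenger_base_uri list from the users' app directories and reload nginx, so "configuration" becomes a cron job or post-upload hook. The directory layout below is hypothetical, and the generated file must be include'd inside the relevant server block:

        for app in /home/*/apps/*; do
            printf 'passenger_base_uri /%s;\n' "$(basename "$app")"
        done > /etc/nginx/passenger_uris.inc
        # server { ... include /etc/nginx/passenger_uris.inc; ... }
        sudo nginx -s reload    # picks up the regenerated list without downtime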

    Read the article

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers; that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp, the certificate does not show up. What do I need to do to get my certificate to be available for use?

    UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting pfx was added to the machine's local store via mmc. Oddly, going into PowerShell, if I add the -CodeSigningCert flag the wildcard certificate is excluded from the search results for Get-ChildItem in my Cert:\LocalMachine\My path, but if I don't include the flag it is there.
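
    A plausible explanation, hedged: Get-ChildItem -CodeSigningCert filters to certificates carrying the Code Signing EKU (1.3.6.1.5.5.7.3.3), so its excluding the wildcard cert suggests that EKU is missing, and the RemoteApp signing picker is likely applying a similar filter. Re-issuing the cert with the EKU added would test this (same makecert invocation as above, plus -eku):

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -eku 1.3.6.1.5.5.7.3.3 -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer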

    Read the article

  • Set up a git repository on a Gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took (server: Gentoo; client: Mac OS X):

    1) Install git:

        emerge dev-util/git

    2) Install gitosis:

        cd ~/src
        git clone git://eagain.net/gitosis.git
        cd gitosis
        python setup.py install

    3) Add the git user:

        adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

    In /etc/shadow now: git:!:14665::::::

    4) On the local computer (Mac OS X; local login is ipx, server login is expert):

        ssh-keygen -t dsa

    This produced two files: ~/.ssh/id_dsa.pub and ~/.ssh/id_dsa.

    5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub and appended its content to ~/.ssh/authorized_keys:

        cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
        sudo -H -u git gitosis-init < /tmp/id_rsa.pub
        sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

    6) Added 2 params to /etc/ssh/sshd_config:

        RSAAuthentication yes
        PubkeyAuthentication yes

    Full sshd_config:

        Protocol 2
        RSAAuthentication yes
        PubkeyAuthentication yes
        PasswordAuthentication no
        UsePAM yes
        PrintMotd no
        PrintLastLog no
        Subsystem sftp /usr/lib64/misc/sftp-server

    7) Local settings in file ~/.ssh/config:

        Host myserver.com.ua
        User expert
        Port 22
        IdentityFile ~/.ssh/id_dsa

    8) Tested: ssh expert@myserver.com.ua. Done!

    9) Next step is where I have the problem:

        git clone git@myserver.com.ua:gitosis-admin.git
        cd gitosis-admin

    SSH asked for a password for user git. Why should ssh ask me to log in as user git? The git user doesn't have a password, and the ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?
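
    A hedged reading: ssh only falls back to asking for git's password when public-key auth fails, so the key never took effect for the git account. Note that step 5 pipes /tmp/id_rsa.pub into gitosis-init although the file copied up was id_dsa.pub; if that is not just a transcription slip, gitosis was initialized with no valid key. Things worth checking:

        ssh -v git@myserver.com.ua                    # watch which keys are offered and refused
        sudo ls -ld /home/git/.ssh /home/git/.ssh/authorized_keys
        sudo chown -R git /home/git/.ssh              # gitosis writes authorized_keys here
        sudo chmod 700 /home/git/.ssh
        sudo chmod 600 /home/git/.ssh/authorized_keys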

    Read the article

  • Network driver for Hyper-V restore from Windows Home Server

    - by Philipp Schmid
    I have backed up a Windows Server 2008 running virtualized on Hyper-V to a Windows Home Server 2008 SP1 (I know I should have backed up the VHD instead). Now I need to restore the contents of the VM from WHS. I created a restore CD ISO and used it to boot a new VM. It all works as advertised up to the point where the restore process wants to load the network drivers: it finds 4 disk drivers on the restore CD, but no network drivers. So I created a virtual floppy and copied the contents of 'Home Server Drivers for Restore' onto it. But no luck! I tried moving the 4 subdirectories into the root of the floppy, but that didn't work either. Finally, I started another instance of WS 2008 to identify the network driver that the virtualized instance is using (%WINDOWS%\system32\drivers\netvsc60.sys) and copied that file onto the virtual floppy, without success. Does anyone have any suggestions on how to get networking working on a Hyper-V instance running off the Windows Home Server restore CD?

    UPDATE: As suggested by delenda, I added a legacy network adapter to my VM, and indeed I now get a network driver listed! However, the WHS is still not found, even after entering the home server name manually. PHS
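
    A hedged note: the legacy adapter emulates a DEC 21140 NIC, so netvsc60.sys (the synthetic VMBus network driver) will never bind to it, which is why the legacy adapter plus a stock driver is the right direction. If discovery still fails, and the restore environment includes netsh, giving it a static address on the WHS subnet sometimes helps (addresses below are hypothetical):

        netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1
        ping homeserver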

    Read the article

  • Subversion Edge LDAP (require CAC Certificate not Username and Password)

    - by Frank Hale
    What I've done: I've successfully installed and configured Subversion Edge 3.1.2 with LDAP support on a Windows 2008 server. I have configured LDAP users and am able to use LDAP credentials to work on repositories just fine. No issues whatsoever; works great!

    What I want to do: I've been searching for several hours now, hoping to find some information on how to configure the Subversion Edge server to require client certificates for user authentication against an LDAP environment. I have not yet found anything that gives me an indication of how to do it. I know there are SVN clients capable of prompting for CAC certificates, but I cannot figure out how to set my server up to require it. NOTE: CAC authentication is already set up and working in the Windows environment.

    Desired outcome: when running svn commands that require authentication against my Subversion Edge server, I want to be prompted for my CAC certificate instead of my Active Directory username and password. If anyone has any information on this I'd greatly appreciate it.

    EDIT: I'm still digging, so if I find out anything I'll update this question with what I found.
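
    Since Subversion Edge fronts Apache httpd, client-certificate authentication is normally a mod_ssl matter rather than a Subversion one. A minimal sketch of the relevant httpd directives (standard mod_ssl names; the CA bundle path is hypothetical, and mapping the certificate CN onto your LDAP authorization rules is the part needing site-specific work):

        SSLVerifyClient require
        SSLVerifyDepth 2
        SSLCACertificateFile /path/to/dod-ca-bundle.pem
        SSLUserName SSL_CLIENT_S_DN_CN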

    Read the article

  • Mac Mini's internet very slow, every other device fine (PC, iPhone, Xbox 360)

    - by alex
    I hadn't used my Mac Mini for about 5 days (though it was left on). I can connect and get great download/upload speeds on my PC, Xbox 360, iPhone and my parents' laptop. However, the Mac Mini is very slow. OS X's Mail.app is downloading mail at 0.4 kB/s and then dropping to 0. Skype file transfers are doing the same. Browsing the net is a terrible experience: it takes 30 seconds or more to load basic pages. All of my devices connect wirelessly to a Netgear router/modem. I have tried giving the Mac Mini a manual IP, renewing the DHCP lease, and flushing DNS in Terminal. I have also rebooted the router/modem twice, and the Mac Mini twice. Do you know what could be causing this? Thanks.

    Update: this is very weird. It is also very slow accessing localhost (set up through MAMP), and also slow to access the Netgear router config pages.
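
    Since even localhost is slow, the wireless link is probably not the bottleneck. A quick hedged check that separates name lookup from transfer time (MAMP's default port 8888 is an assumption):

        curl -o /dev/null -s -w 'lookup %{time_namelookup}s  total %{time_total}s\n' http://localhost:8888/
        scutil --dns | head    # verify which resolvers the Mac is actually using

    A large lookup time with a small remainder would point at DNS/resolver trouble on the Mac rather than the network.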

    Read the article

  • How do I make ESXi 5.0 shut down virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build-623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members. This requires that it can be powered on and off (including all the VMs inside) by anybody with physical access to the server, simply by pressing the power button on the chassis. The "startup" part is easy: all I had to do was configure the startup/shutdown policy so that once the server powers up, all VMs start as well, which is exactly what I need. Well, it did work up until 5.0.0U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html

    Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, the guests are powered off properly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need, but when tried it does exactly what the power button does. /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down gracefully when the physical power button is pressed to turn off the computer?
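
    A hedged sketch of the scripting side: from the ESXi shell, graceful shutdowns (via VMware Tools) can be issued per VM with vim-cmd, so a loop like the one below could be hooked into whatever the power-button event runs; whether /sbin/shutdown.sh can be made to call it on a button press is exactly the open question here. Note that VMware Tools must be installed in each guest for power.shutdown to work:

        for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
            vim-cmd vmsvc/power.shutdown "$id"    # graceful guest shutdown, via VMware Tools
        done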

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:

    1) Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier).
    2) Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier). Reporting Services is installed on the app tier.
    3) After this installation worked fine, we installed SharePoint 2010 on the app tier and followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration.

    We are not able to perform the last step described in the link, as the following error occurred:

        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.

    We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks upfront.

    Read the article

  • Video output vanishes and it isn't the video card... suggestions?

    - by Ira Baxter
    I have a high-end dual-processor Dell workstation that operated for several years without problems. Recently the video output has gone flaky, in the sense that after 2-3 days it loses sync with the monitor (sometimes you can still see a hashed mess of what you expect to see on the screen). Poking at the keyboard, and accessing the file system on the machine over the network, indicates that the machine is running fine and it's just the video. A reboot fixes the problem... for another day or two. This in effect makes the machine unusable. So I replaced the video card with an exact duplicate bought from eBay (I checked after the dupe arrived to be sure: yep, same model number). I still get the same behavior. So unless I believe that both video cards are broken the same way, I have to believe this is a problem with the motherboard/power supply, neither of which I am inclined to replace. The only other possibility I can think of is a Windows update to the graphics driver. How would I check for this? Has anybody had a similar problem? Otherwise we're junking what used to be a perfectly good machine. (Another couple of hours of wasted effort and that's it in terms of economics anyway.)
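
    To rule the driver in or out: compare the display driver's version and date against when the symptoms began (this WMI query works from an XP-or-later command prompt), and check the update history in Windows Update:

        wmic path Win32_VideoController get Name,DriverVersion,DriverDate

    If the driver date coincides with the onset, rolling the driver back from Device Manager is a cheap test.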

    Read the article

  • passing on command line options in bash

    - by bryan
    I have a question. I have a fairly long script written in bash, approx. 370 lines. It has several functions, and in those functions the user has to enter information which is then stored in files. (This is supposed to represent a MySQL database, with the functions INSERT, UPDATE, SELECT, and SELECT where x=y.) I created this myself in bash; now the only thing that remains is that I need to be able to pass arguments on the command line to the script that will do the same things the script does interactively. I know that bash has positional parameters such as $1 $2 $3 $* $@ $0 ($0 refers to the name of the script), and I know how to use these parameters in a simple if statement. That is not enough for my script, though: I basically need to do the same thing that the script does, but from the command line. I have been struggling with this for a couple of days now and I cannot think of a way to get it to work. Maybe someone here can help me? If you want to have the script, that is possible, but I don't think I can paste it in here...

    EDIT: Link to script: http://pastebin.com/Hd5VsDv2 (note, I am a beginner in bash scripting).

    EDIT: This is in reply to answer 1. As I said, I hope I can just replace

        if [ "$1" = "one" ] ; then echo "found one"

    with

        if [ "$1" = "one" ] ; then echo SELECT

    where SELECT is the function I previously had in my script (above). Link to testing script: http://pastebin.com/VFMkBL6g
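
    A common pattern for this, sketched against the function names mentioned above (INSERT, UPDATE, SELECT are assumed to match the ones in your script): dispatch on $1 with case and hand the remaining arguments to the function, so the same code serves both the menu and the command line:

        #!/bin/bash
        # ... function definitions (INSERT, UPDATE, SELECT, ...) go here ...
        case "$1" in
            insert) shift; INSERT "$@" ;;    # e.g. ./script insert value1 value2
            update) shift; UPDATE "$@" ;;
            select) shift; SELECT "$@" ;;
            *) echo "usage: $0 {insert|update|select} [args...]" >&2; exit 1 ;;
        esac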

    Read the article

  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message:

        All TAP-Win32 adapters on this system are currently in use.

    Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. I've run addtap.bat, which is provided with OpenVPN, but I still don't see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:

        C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
        Device node created. Install is complete when drivers are updated...
        Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
        Drivers updated successfully.

    I used Run As Administrator for both the OpenVPN setup and addtap.bat. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7-4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I first got it. Everything else seems to work fine.

    UPDATE: The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with 64-bit Windows 7. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because the GUI installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.
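
    A hedged check: a 32-bit TAP driver will not load on 64-bit Windows 7, which matches the exclamation mark. Listing what tapinstall sees, and removing the mismatched driver before installing a native 64-bit OpenVPN build, might clear the conflict (the path and the tap0801 ID are taken from the output above; 64-bit packages of that era typically ship tap0901 instead):

        "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" hwids tap0801
        "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" remove tap0801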

    Read the article

  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32-bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04 but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an svn checkout, as if there is an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:

        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam

        # Set up the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        # replace file with the following:
        <Location /svn>
          DAV svn
          SVNParentPath /svn/
          AuthType Basic
          AuthName "Subversion Server"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
          AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>

        sudo touch /etc/apache2/svn_acl
        # replace file with the following:
        [groups]
        dba_group = tom, jerry
        report_group = tom
        [DataTeam:/]
        @dba_group = rw
        [ReportingTeam:/]
        @report_group = rw

        # Start/stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo touch subversion
        sudo cat 'svnserve -d -r /svn' > svnserve
        sudo cat '/etc/init.d/apache2 restart' >> svnserve
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults

        # Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

        # Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
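
    Two hedged checks that usually narrow this down: confirm Apache actually loaded the DAV/auth modules, and watch the error log during a checkout attempt:

        sudo apache2ctl -M | grep -Ei 'dav|authn|auth_basic'
        sudo tail -f /var/log/apache2/error.log

    As an aside, the two `sudo cat '...' > svnserve` lines in the init-script section write nothing useful: cat treats the quoted string as a filename (and the redirection runs as your user, not root); echo was probably intended there.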

    Read the article

  • Unix: Files starting with a dash, -

    - by Svish
    Ok, I have a bunch of files starting with a dash, -. Which is not so good... and I want to rename them. In my particular case I would just like to put a character in front of them. I found the following line that should work, but because of the dash it doesn't:

        for file in -N*.ext; do mv $file x$file; done

    If I put an echo in front of the mv I get a bunch of:

        mv -N1.ext x-N1.ext
        mv -N2.ext x-N2.ext

    Which is correct, except of course mv will think the first filename is a set of options. So when I remove the echo and run it, I just get a bunch of:

        mv: illegal option -- N

    I have tried changing it to

        for file in -N*.ext; do mv "$file" "x$file"; done

    but the quotes are just ignored, it seems. I tried single quotes, but then the variable wasn't expanded... What do I do here?

    Update: I have now also tried to quote the quotes, like this:

        for file in -N*.ext; do mv '"'$file'"' '"'x$file'"'; done

    And when I echo that, it looks correct, but when I actually run it I just get

        mv: rename "-N1.ext" to "x-N1.ext":: No such file or directory

    I have just no clue how to do this now... sigh.
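
    Two standard fixes for this: end option parsing with --, or give mv a name that doesn't start with a dash by prefixing ./ :

        for file in -N*.ext; do mv -- "$file" "x$file"; done
        # or
        for file in ./-N*.ext; do mv "$file" "./x${file#./}"; done

    The quotes in the second attempt above were fine; they protect whitespace but cannot stop mv from parsing a leading dash as an option, which is what -- is for.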

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. The server will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with that client. Clearing the ARP cache resolves the issue and the server is fine for about a week, and then it slowly starts turning ARP entries into static entries again. I haven't narrowed down when it starts or how many it starts with, but gradually you see 1 static ARP entry, then 5, then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing there that would have anything to do with static ARP entries. The only difference between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this and it seems it may happen if any PXE-type services are running; from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my solution is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *), but I would like not to rely on that. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
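
    While hunting the cause, the workaround can at least be formalized as a scheduled task (the time is arbitrary; on Server 2003, /st takes HH:MM:SS):

        schtasks /create /tn "Flush ARP cache" /tr "arp -d *" /sc daily /st 03:00:00 /ru SYSTEM

    Logging `arp -a` output periodically alongside the DHCP lease events may also help correlate when entries flip to static.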

    Read the article

  • Silent install FirePro v4900 Driver on Windows Embedded 7 Standard

    - by Birgit_B
    I'm trying to install the drivers for a FirePro V4900 on a Windows Embedded 7 Standard 64-bit OS. I want the system to be as small as possible, so I would rather not install the whole Catalyst Control Center, only the necessary drivers. Because the installation should be accomplished absolutely unattended, the installation of the FirePro driver should also be done without any user interaction. I see two possible solutions to the problem:

    1) Install only the drivers: is it possible to install solely the necessary drivers? How would I achieve that? This solution would be preferred because of the smaller footprint.
    2) Silently custom-install the provided FirePro_8.911.3.3_VistaWin7_X32X64_135673.exe (found at ATI FirePro™ Driver). Is there a way to do that?

    Thank you in advance for your support!

    Update: I managed to accomplish a silent installation. I extracted the contents of the above-mentioned installer file and ran $_OUTDIR\Bin64\Setup.exe -Install. (There are some other parameters; just run Setup.exe /?.) But I couldn't manage to install just the drivers without the Catalyst Control Center, and it seems the Control Center has some unfulfilled dependencies, so it crashes...
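
    For a driver-only footprint, a hedged alternative is to point Windows' own driver installer at the display INF inside the extracted package instead of running Setup.exe; the path below is illustrative, as the INF subdirectory layout varies between Catalyst packages:

        pnputil -i -a "C:\ATI\FirePro_8.911.3.3\Packages\Drivers\Display\W76A_INF\*.inf"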

    Read the article

  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr were immediately recognizable as such and distinguishable from the data going to stdout, with all the output kept together so the sequence of events is obvious. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? (If it matters, I'm using Bash 3.2 on Mac OS X.)

    Update: sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys
        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
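
    For what it's worth, a small wrapper along these lines works on Bash 3.2 (ordering can still shift because of pipe buffering, which matches what was observed with the other tools):

        color_stderr() {
            # run the command; re-emit its stderr in red, still on stderr
            "$@" 2> >(while IFS= read -r line; do
                printf '\033[31m%s\033[0m\n' "$line" >&2
            done)
            local status=$?
            printf 'exit code: %d\n' "$status"
            return "$status"
        }
        color_stderr ./test.py    # using the test script above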

    Read the article

  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, file server, and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say, System Properties) and you try to change tabs: this is usually instant, but on this machine it can take 3-5 seconds. What additional services/packages can I uninstall from this machine, knowing that it is only performing the above roles? Will removing the "Small Business Server" package in Add/Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace the solution with at the moment. Thanks, Tom

    UPDATE: I looked over the different performance metrics, but nothing stood out as a problem. One of my friends mentioned that Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff." Removed that, and now the machine is MUCH better. I am still unsure why the data just sitting there would cause such a slowdown. The drive is not even near full. The only thing I can imagine is that Symantec must run through this stuff now and then.

    Read the article

  • Windows XP error message: "Windows cannot find 'explorer.exe'"

    - by Meysam
    In Windows XP I can open "My Computer" and see all the hard drives. I can also see the explorer.exe process running among the other processes in Task Manager. But after opening "My Computer", when I double-click on one of the drives to open it, I get the following error message:

        Windows cannot find 'explorer.exe'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start button, and then click Search.

    Although I could detect and remove several suspicious files using Malwarebytes and Microsoft Security Essentials, the problem still remains. The interesting point is that if I right-click on a folder and select Open or Explore from the menu, I can open the folder; but if I double-click on the folder, it does not open and I get the above error message. How can I fix this problem? Any advice would be appreciated!

    Update: I formatted the C: drive (NTFS), a deep format, and installed a fresh Windows XP on it. I no longer get this error when I double-click on the C: drive icon, but the same error appears when I double-click on the other drive names. Maybe I should format them too!
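
    A hedged guess based on the symptom pattern (double-click broken, right-click Open fine): malware commonly hijacks the drives' double-click action via an autorun.inf in each drive's root plus cached per-drive shell defaults under MountPoints2, and removing the infection does not undo the hijack. Cleanup, repeated per drive letter, then log off and back on:

        attrib -r -s -h D:\autorun.inf
        del D:\autorun.inf
        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2" /f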

    Read the article

  • Passing two arguments to a command using pipes

    - by firebat
    Usually, we only need to pass one argument:

        echo abc | cat
        echo abc | cat some_file -
        echo abc | cat - some_file

    Is there a way to pass two arguments? Something like

        {echo abc , echo xyz} | cat
        cat `echo abc` `echo xyz`

    I could just store both results in a file first:

        echo abc > file1
        echo xyz > file2
        cat file1 file2

    But then I might accidentally overwrite a file, which is not OK. This is going into a non-interactive script. Basically, I need a way to pass the results of two arbitrary commands to cat without writing to a file.

    UPDATE: Sorry, the example masks the problem. While { echo abc ; echo xyz ; } | cat does seem to work, the output is due to the echos, not the cat. A better example would be

        { cut -f2 -d, file1; cut -f1 -d, file2; } | paste -d,

    which does not work as expected. With file1:

        a,b
        c,d

    and file2:

        1,2
        3,4

    the expected output is:

        b,1
        d,3

    RESOLVED: Use process substitution:

        cat <(command1) <(command2)

    Alternatively, make named pipes using mkfifo:

        mkfifo temp1
        mkfifo temp2
        command1 > temp1 &
        command2 > temp2 &
        cat temp1 temp2

    Less elegant and more verbose, but it works fine, as long as you make sure temp1 and temp2 don't exist beforehand.
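
    Applying the resolution to the cut/paste example, for completeness:

        paste -d, <(cut -f2 -d, file1) <(cut -f1 -d, file2)
        # b,1
        # d,3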

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using spring-security Kerberos to authenticate users to the web page against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID, with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets (http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html):

        The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device). During testing, you should purge tickets before each authentication request.

    There is also a description of the "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx

    So if each of the clients (users running Windows) who connect to my web server has cached Kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure the changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
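
    One hedged point in your favor: adding an SPN does not invalidate tickets already issued against the existing SPNs, and tickets age out on their own (10 hours by default), so registering the new names ahead of the cutover while leaving the old ones in place avoids a flag day. A sketch (account and host names are placeholders; -S refuses duplicate SPNs, while older setspn versions only have -A):

        setspn -S HTTP/newalias.example.com DOMAIN\svc-tomcat
        setspn -L DOMAIN\svc-tomcat      # verify the full SPN list
        klist purge                      # per client, only if a stale ticket must go immediately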

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands, e.g. knife node list, knife client list, knife user list, etc. I'm able to update roles, data bags, environments, etc., but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of my knife cookbook upload -VV command; it doesn't matter which cookbook I upload, or whether I upload them all, I keep getting the same response:

        DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
        /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
        INFO: HTTP Request Returned 204 No Content:
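
    One hedged observation from the trace: the workstation is running chef 12.0.0.alpha.1, a prerelease knife, against an open source Chef 11 server, and that 204-with-empty-body crash looks like a client/server mismatch. Pinning the workstation to a stable 11.x release is a cheap test (11.16.4 is one such release; adjust as needed):

        gem uninstall chef -a
        gem install chef -v 11.16.4
        knife --version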

    Read the article

  • Command prompt hangs/freezes/crashes sporadically

    - by Leonard Challis
    I'm finding it very difficult to Google this; I don't seem to be able to find anyone with the same issue, and I don't know enough about the Windows operating system to troubleshoot. The machines we are seeing the problem on run Windows 7 Professional, both 64-bit and 32-bit. The problem is the command prompt freezing up, seemingly at random. When it does freeze, nothing will bring it back to life (e.g. a keypress), and it's nothing to do with QuickEdit mode either. It doesn't seem to happen when I run standard commands such as cd or dir, but when I run different programs from the command line. The annoying thing is that sometimes the prompt will freeze and at other times it won't, using the same program/command in the prompt. To add to the frustration, one of my colleagues who had the same problem seems not to have experienced it for a few days now (we're pretty heavy on the command line). It's not a VPN/RDP thing, as suggested in other questions and forum posts, as I've seen this both locally and remotely. I thought it was to do with the return code signifying an error or some error state in the program, e.g.:

        C:\Users\leonardc>mysql -u lalala
        ERROR 1045 (28000): Access denied for user 'lalala'@'localhost' (using password: NO)

    but this isn't always the case either. In fact the above command hasn't crashed the shell before. Elevating the prompt to run as Administrator doesn't seem to have any bearing on the problem, and disabling my anti-virus doesn't have an effect either.

    Update: I tried the same commands in PowerShell, but I still get the same problem: it will freeze at random times (more often than not, as with the command prompt, but not always). It differs from the command prompt in that one might work while the other doesn't, but then the next time I run the same command in both it will suddenly be different again.

    Read the article
