Search Results



  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, which is still running Intrepid (8.10). I'd like to upgrade it to a newer version of the distribution so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core. I tried the following:

      $ sudo apt-get install update-manager-core

    but this fails since some of the necessary packages can't be found:

      ...
      Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4  404 Not Found [IP: 91.189.88.40 80]
      Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34  404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
      ...

    I know that Intrepid is no longer supported, so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix this would be to upgrade it. Is there a way round this? Is something else wrong?
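    A likely way forward, as a sketch rather than something verified on this exact box: packages for end-of-life Ubuntu releases are moved to old-releases.ubuntu.com, so repointing APT there usually makes the missing files resolvable again. Note that from Intrepid you would still need to step through each intermediate release rather than jumping straight to a current one.

      # back up and repoint sources.list at the end-of-life archive
      sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
      sudo sed -i -r 's/(archive|security)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list

      sudo apt-get update
      sudo apt-get install update-manager-core
      sudo do-release-upgrade    # then repeat, release by release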

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now and for the most part it has been working great.

    The issue arises when we do fresh code pushes from Git and any/all of the JS/CSS assets change. The problem appears to be twofold. First, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is that upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached client-side for many people, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client-side because of the cached minified assets.

    The simple solution is to disable mod_pagespeed, but I would rather not do that as it has made a fairly large impact on performance. I feel there must be a better way to deal with the inconsistencies between the client and server caches, so that people don't have to go to great lengths or perform a large number of page refreshes to see a working page.

    I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses:

      http://wellplayed.org
      http://wellplayed.org/tv

    Thanks in advance!
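    For what it's worth, a deploy sequence I would sketch instead of the full restart (paths and Varnish syntax are assumptions; check your ModPagespeedFileCachePath and Varnish version): mod_pagespeed can be told to invalidate its own cache by touching a cache.flush file, which avoids restarting Apache, and Varnish objects can be banned selectively instead of being flushed wholesale.

      # after pushing new code, on each web node
      sudo touch /var/cache/mod_pagespeed/cache.flush    # path = ModPagespeedFileCachePath (assumed default)

      # drop only the pagespeed-rewritten objects from Varnish (Varnish 3 "ban"; older releases use "purge")
      varnishadm "ban req.url ~ pagespeed"

    As I understand it the rewritten asset URLs are content-hashed, so the stale thing clients hold onto is usually the HTML that references the old names; keeping the TTL on the HTML documents themselves short in Varnish (while leaving the assets long-lived) is one way to shrink that window.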

    Read the article

  • Timeperiod Setting

    - by Alvin G. Matunog
    I am running Nagios 3.2.3 on CentOS 5.7 32-bit and I have a bit of a problem scheduling timeperiods. Please help me.

    Objective: to stop monitoring during a server restart (the server will be restarted automatically because of updates and system backup). The restart is scheduled every 1st Sunday of every month. My monitoring runs 24x7, but during restarts I want to stop the monitoring and resume after 30 minutes. So every 1st Sunday I want my schedule to be 00:00-11:30,12:00-24:00, meaning it stops at 11:30 and resumes at 12:00 noon.

    If I set this on every Sunday there is no problem. But if I set this time for 1st Sundays only, it stops at 11:30 but resumes the next day (Monday) and not at 12:00 noon on Sunday. I don't know what I am missing. On regular weekdays there is no problem, but on offset weekdays (1st Sunday) it doesn't work the way it should. Here is my definition:

      define timeperiod{
              name            1st_sunday
              timeperiod_name 1st_sunday
              alias           No monitoring every 1st Sunday
              thursday 1      00:00-11:30,12:00-24:00
              }

      define timeperiod{
              timeperiod_name irregular
              alias           regular checking
              use             1st_sunday
              sunday          08:30-22:00
              monday          08:30-22:00
              tuesday         08:30-22:00
              wednesday       08:30-22:00
              thursday        08:30-19:10,19:20-22:00
              friday          08:30-22:00
              saturday        08:30-22:00
              }

    Can anyone help me? Please? Thank you.
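    One thing that stands out in the posted definition: the 1st_sunday template declares an exception for "thursday 1" rather than "sunday 1". Assuming the first Sunday is what's intended, a sketch of a single timeperiod that handles it in-line might look like the following; my understanding is that Nagios 3 gives date exceptions precedence over the plain weekday entries within the same definition, so the separate template should not be needed (please verify against your version):

      define timeperiod{
              timeperiod_name irregular
              alias           Regular checking, reboot gap on 1st Sunday
              sunday 1        00:00-11:30,12:00-24:00   ; exception: first Sunday of the month
              sunday          08:30-22:00
              monday          08:30-22:00
              tuesday         08:30-22:00
              wednesday       08:30-22:00
              thursday        08:30-19:10,19:20-22:00
              friday          08:30-22:00
              saturday        08:30-22:00
              }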

    Read the article

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have Puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for: I want the command to run only if something is about to change, not on every Puppet run.

    Here's the scenario. Let's say I have a bunch of web servers behind a load balancer, and I want Puppet to update the web site files. In order to prevent problems where some files have been updated, others haven't, and the mixed versions cause issues, I want to take the server out of the load balancer pool first. I could write a script which, when run, tells the load balancer to remove the box from the pool. Then Puppet can make the change and use postrun_command to put the box back in the pool once complete. But I need a way to run that removal script only when a change is about to happen.

    The only solution I can think of is to keep two copies of the files on the box: a staging copy that Puppet updates, with a notify action triggering the removal script and then a copy from staging into the live location. But I was hoping for something a little more generic that would work for any change being performed (upgrading a package, restarting a service, creating a user, anything).
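    One generic approach I would sketch (untested, and the exit-code behaviour of noop runs should be verified on your Puppet version) is to wrap the agent run: do a dry run first, and only drain the node if changes are pending.

      #!/bin/sh
      # Hypothetical wrapper around the agent run.
      # --detailed-exitcodes: 0 = no changes, 2 = changes, 4/6 = failures.
      puppet agent --test --noop --detailed-exitcodes
      if [ $? -eq 2 ]; then
          /usr/local/bin/remove_from_pool   # your drain script (assumed name)
          puppet agent --test               # the real run
          /usr/local/bin/add_to_pool        # or let postrun_command handle this
      fi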

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi. We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database (one-way replication). The solution has worked well in test.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, then it's no big deal: we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx. 5 hours), but I'd be more relaxed about that than trying to restore backups of the various data and log files in the correct sequence.

    My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.
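    If it helps to make the "OLTP data and logs only" policy concrete, a minimal sketch (database name and paths are placeholders, not taken from the post):

      -- full backup of the publisher, e.g. nightly
      BACKUP DATABASE [OLTP] TO DISK = N'D:\Backups\OLTP_full.bak' WITH CHECKSUM, INIT;

      -- transaction log backups on a short schedule, e.g. every 15 minutes
      BACKUP LOG [OLTP] TO DISK = N'D:\Backups\OLTP_log.trn' WITH CHECKSUM;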

    Read the article

  • Wi-Fi won't automatically connect (box in Windows configuration gets unchecked)

    - by greg88
    The checkbox that says "Use Windows to configure my wireless network settings" keeps getting unchecked (Wireless Network Connection - Change advanced settings - Wireless Networks tab). How do I stop that from happening, so the Wi-Fi will reconnect? When I manually re-check the box, it automatically connects.

    I have a D-Link AirPremier DWL-G550 PCI adapter. I installed the newest driver, 5.3.0.46, and that didn't solve the problem. I took the "Atheros Client Utility" (the program window says "D-Link AirPremier Client Utility" when you run it) out of the start menu, rebooted, and that didn't solve the problem. (That utility puts up a signal bar similar to the one Windows puts there, and it's gone now.) The D-Link client utility has an option to automatically connect to preferred networks, but it is greyed out. It is also greyed out if I install the driver and utility straight off the D-Link installation CD, so the problem isn't that the utility and driver are incompatible versions. I want to use Windows to handle the connection anyway, as the D-Link utility is garbage.

    Windows XP SP3 with all current updates.
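    That checkbox is backed by the Wireless Zero Configuration service (WZCSVC) on XP, and vendor utilities often stop or disable that service when they take control of the adapter. A quick check and re-enable from a command prompt, on the assumption that the D-Link software is what keeps flipping it:

      rem is the Wireless Zero Configuration service running?
      sc query wzcsvc
      rem set it to start automatically and start it now
      sc config wzcsvc start= auto
      net start wzcsvc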

    Read the article

  • Linux: How do I use Munin in cPanel to monitor MySQL?

    - by Continuation
    I have a cPanel server running CentOS 5.5. I want to use Munin to monitor MySQL. I went to Main >> cPanel >> Manage Plugins, selected "Install and keep updated" for Munin and clicked "Save". I got the usual bunch of status updates about the install. At the end I got:

      Going to read '/home/.cpan/sources/modules/02packages.details.txt.gz'
      Database was generated on Wed, 02 Mar 2011 18:28:33 GMT
      ..........................................................................DONE
      Going to read '/home/.cpan/sources/modules/03modlist.data.gz'
      Out of memory!
      Callback called exit.
      Done
      Done
      Done
      Process Complete

    As you can see, I got an "Out of memory!" message, but after that it said "Process Complete". Was Munin installed? When I went back to "Manage Plugins", Munin has a check mark against "Install and keep updated". So is everything alright? And how do I use Munin now? How do I configure it to monitor MySQL? Where can I see the results? Thanks.
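    Assuming the install did complete, the stock (non-cPanel-specific) way to see whether the MySQL plugins are wired up is munin-node-configure. cPanel's copy of Munin may live under /usr/local/cpanel/3rdparty, so treat the commands below as a sketch and adjust paths accordingly; the graphs themselves normally appear in WHM under Plugins >> Munin.

      # which plugins munin-node would enable, and why / why not
      munin-node-configure --suggest | grep -i mysql

      # print the ln -s commands for the suggested plugins; review, then pipe to sh
      munin-node-configure --shell | grep -i mysql

      service munin-node restart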

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution for the sake of my group's shared passwords, projects, work, etc. Our network has Active Directory with public/groups/users shares and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small: 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit.

    We usually find them laughing at private content from users stored in the group shares and sabotaging documents that don't match their personal tastes, and finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, presented it to the CEO as theirs without us knowing.

    I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository over SSL, but there are many non-technical contributors (~38 at the moment) involved in the projects who have trouble using TortoiseSVN, and very often we had to fix messed-up commits.

    Well, I think these two examples give the idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized?

    P.S.: Seriously, at the place where I live/work, political corruption has gone wild, so reporting them is likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behaviour and our failed attempts to denounce them.

    Read the article

  • Windows XP Freezes

    - by Jim Fell
    Hello. I'm running a machine with Windows XP Professional 64-bit. Every so often it will freeze for no apparent reason: everything stops responding except the mouse. I can move the mouse around, but I can't click on anything. Keyboard input is also not accepted when this problem occurs. The three-finger salute fails to bring up the Task Manager. Even pressing the power button on my computer fails to shut it down. The only way out that I have found is to hard-reboot the machine (i.e. pull power or hold the power button in for 10 seconds).

    This problem was occurring when the system had all its updates, and also after a fresh install when not everything was yet updated. I've run the Scandisk utility and the latest version of Memtest86 that supports 64-bit architecture; neither found any errors. The last time this happened was on a fresh install of Windows. Only Nero Essentials, Avast antivirus (disabled), Firefox, and Spybot were installed. I was not running Nero, Firefox, or Spybot at the time, and Avast was disabled, so I'm pretty certain this is a Windows issue. Is anybody familiar with this problem or have any pointers? Thanks.

    Read the article

  • Windows 7, enabling IE for removal

    - by Thew
    For a while now, Windows Update has been prompting me to update IE9 to IE10, and every time I turn off my PC it starts to install that one update, which always fails. Why does it fail to update IE9? I don't know. So the solution I had in mind was to remove IE9 and then install IE10 manually.

    The problem is that I can't find the uninstall entry for IE9. I have looked in Control Panel / Programs and Features but it didn't show up, not even in the Installed Updates section. After I clicked on "Turn Windows features on or off", I saw that the IE9 box wasn't checked. So I checked it and clicked OK, but when I clicked the "Turn Windows features on or off" link again, it wasn't checked any more. It's almost like Windows doesn't want me to uninstall IE9, but I have to in order to shut down my PC normally, because every time I shut the PC off it tries to install that update again! What can I do to solve this?
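    When IE9 doesn't appear under Installed Updates, a commonly cited fallback (from Microsoft's support notes, as far as I recall; please double-check it before running, from an elevated command prompt) is to remove the servicing package directly and then reboot:

      FORFILES /P %WINDIR%\servicing\Packages /M Microsoft-Windows-InternetExplorer-*9.*.mum /c "cmd /c echo Uninstalling package @fname && start /w pkgmgr /up:@fname /norestart"

    After the reboot the pending IE10 update (or the standalone IE10 installer) should have a clean slate to install against.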

    Read the article

  • How to tell Linux to explicitly swap out main memory of suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging, swapping and thrashing all the time and the load average is about 2 (compcache is already in use alongside the usual swap partition), but it is slowly moving forward (although I'm afraid it will eventually try to allocate 2 GB and crash, throwing away 2 days of thrashing). When I want to use the laptop for something else, I stop the process and start the X server, Firefox and other programs.

    The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost completely unresponsive (a long time to toggle Caps Lock, slow mouse cursor updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed by some other program, which results in a great slowdown.

    How do I tell Linux to swap this process out entirely (e.g. I don't intend to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device the given process gets swapped out to.
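    There is no per-process "swap this out now" call that I know of, but a sketch using the cgroup memory controller gets close: put the stopped PID into a memory cgroup and then lower its limit, which forces the kernel to reclaim (and swap out) its pages. The mount point and the 16 MB figure are assumptions; the memory controller has to be enabled in your kernel, and you may need to lower the limit in steps.

      # mount the memory controller if it isn't mounted already
      sudo mkdir -p /sys/fs/cgroup/memory
      sudo mount -t cgroup -o memory none /sys/fs/cgroup/memory

      sudo mkdir /sys/fs/cgroup/memory/parked
      echo <mkcromfs_pid> | sudo tee /sys/fs/cgroup/memory/parked/tasks
      # squeeze the group down to ~16 MB resident; the rest of its pages go to swap
      echo $((16*1024*1024)) | sudo tee /sys/fs/cgroup/memory/parked/memory.limit_in_bytes

    Choosing which swap device receives those pages isn't something the kernel exposes per process, as far as I know; swap priorities in /etc/fstab are the closest knob.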

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

      /var/www/sites/vhost1, vhost2, vhost3
                  .../wordpress/blogX
                  .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

      /var/www/data/vhost1, vhost2, vhost3
                 .../wordpress/blogX/uploads
                 .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up".

    So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to get at the more technical details.
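    One low-maintenance option for the visual part is to keep a small Graphviz description next to the Apache configs and regenerate the diagram whenever the layout changes. A sketch of what that might look like for the structure above (directory names taken from the post, everything else illustrative):

      digraph webserver {
          rankdir=LR;
          "Apache\n/etc/httpd/conf.d/vhosts.d" -> {"vhost1" "vhost2" "vhost3"};
          "vhost1" -> "wordpress/blogX";
          "wordpress/blogX" -> "/var/www/data/wordpress/blogX/uploads" [label="symlink"];
          "vhost2" -> "mediawiki/wikiX";
          "mediawiki/wikiX" -> "/var/www/data/mediawiki/wikiX/images" [label="symlink"];
          "testing server" -> "live server" [label="rsync after testing"];
      }

    Render it with "dot -Tpng architecture.dot -o architecture.png" and embed the image in the wiki page, so the picture and the text documentation live together.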

    Read the article

  • How to use non-free drivers during debian install

    - by blokeley
    I'm trying to install Debian stable using UNetbootin. The install process fails with "network autoconfiguration failed", probably due to the ethernet driver not working. My Lenovo U350 has a Broadcom BCM57780 which does not seem to be supported out of the box: there are various bug reports here, here and here, but I don't know if the fix has made it into Debian (6) stable. One discussion says that you have to use an ethernet driver from the firmware-linux-nonfree package. I'm not sure that this is correct, because the BCM57780 is not in the list of drivers in firmware-linux-nonfree.

    The specific question tree is:

      1. Is BCM57780 supported in Debian stable? If so, what could be wrong?
      2. Should I install Debian unstable instead?
      3. If not, do I need to use firmware-linux-nonfree during installation and, if so, how do I do this?

    Please note: I've used Ubuntu and Debian loads in the past, but please post line-by-line guidance rather than some cryptic abbreviation of any instructions. Thanks in advance for any help.

    Updates:

      - Debian stable with non-free drivers did not work.
      - Debian unstable (free drivers only) did not work.
      - Tried loading firmware-iwlwifi_0.28_all.deb from another USB stick to get wireless working rather than BCM57780. The .deb file was found but the network configuration still failed!

    That's it, I'm giving up. Unfortunately I'll use Ubuntu even though the Unity user interface will be very unstable for the next couple of years :(
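    On question 3: the Debian installer can load missing firmware from a second USB stick. It looks for a top-level directory named firmware on removable media during the "detect network hardware" step, and it can unpack .deb firmware packages placed there. A sketch of preparing such a stick (the device name is an assumption; note the BCM57780 is driven by tg3, so whether extra firmware helps at all depends on the installer's kernel):

      # on another machine, with a FAT-formatted USB stick at /dev/sdb1
      sudo mount /dev/sdb1 /mnt
      sudo mkdir -p /mnt/firmware
      sudo cp firmware-linux-free_*.deb firmware-linux-nonfree_*.deb /mnt/firmware/
      sudo umount /mnt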

    Read the article

  • Centos 5.5 install PearDB

    - by John Gardeniers
    Disclaimer: I use Linux for some jobs but I am not a Linux admin.

    I have a CentOS 5.4 machine which performs some server duties and doubles as a web site development machine. PHP 5.3.3 was installed from RPM with the --without-pear option. I now wish to use Pear DB but can't figure out how to install it. If I run yum install php-pear-db, it comes back with:

      Error: Missing Dependency: php = 5.1.6-27.el5_5.3 is needed by package php-devel-5.1.6-27.el5_5.3.i386 (updates).

    The only RPM I've found that looks like it might be close currently has a dead link, so I can't even try that. What would be the best way to go about this? Is there a way to reinstall from the RPM and include PEAR? Can I install the dependency without breaking the current installation? Should I try to uninstall the original PHP and reinstall it from source, complete with PEAR? I thought this might have been an SU question, but the FAQ over there suggests otherwise.
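    Since the distribution's php-pear-db packages are built against the stock PHP 5.1.6, one way to sidestep the RPM dependency clash is to bootstrap PEAR directly for your PHP 5.3 build and install DB through it. A sketch, assuming your build merely lacks the PEAR installer rather than forbidding it:

      cd /tmp
      wget http://pear.php.net/go-pear.phar
      php go-pear.phar       # installs the pear command against your PHP 5.3.3
      pear install DB        # note: DB is old; MDB2 is its successor if the application allows it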

    Read the article

  • Does OS X support linux-like features?

    - by Xeoncross
    I have been using XP for almost a decade. Contrary to popular belief, it has served me well. In the last 4 years I don't remember it ever crashing on me: it has the most stable GUI I have ever used. However, an OS is only as good as its GUI and command line combined, and the Windows command line is awful and totally useless. So I have been using Ubuntu for a couple of years and Debian on my servers.

    The only problem is that GNOME applications (Ubuntu 6-10) constantly crash on me (Ubuntu Studio was the most unstable OS I ever used). I have high-quality Gigabyte, MSI, and Asus motherboards and CPUs from old Semprons/Athlons to Celerons/Core 2 Quads. What are the odds that every PC I have ever owned can't remain stable with a Linux GUI? Not to mention that the Adobe CSx Suite doesn't work on Linux.

    Anyway, I am now looking at moving to a Mac in the hope of finding a stable GUI and a feature-packed command line. Does Mac OS have an integrated command line where I can do Linux-like awesomeness such as rsync, ssh, wget, cron jobs, package updates, and git without having an unstable GUI?
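    For the command-line part specifically, a quick sketch of what Terminal.app and the BSD userland underneath OS X give you: rsync, ssh and cron ship with the system, git comes with the Xcode command-line tools, and wget plus most familiar Linux packages are one package manager away (Homebrew or MacPorts). Commands below are illustrative, not a setup guide:

      # already on a stock install
      rsync -av ~/Documents/ backuphost:docs/
      ssh user@server
      crontab -e                 # cron works; Apple prefers launchd, but both are available

      # one-time extras
      xcode-select --install     # compilers and git (on recent releases)
      brew install wget          # or: sudo port install wget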

    Read the article

  • Windows Update broke wireless driver - Can't get it working again

    - by private_meta
    I naively installed a driver update offered through Windows Update yesterday. It broke the Wi-Fi on my notebook (HP EliteBook 2740p). The network itself is fine, as I have tried it with my other mobile devices. When installing said driver, Windows told me the installation failed. I tried a system restore, which did not bring the Wi-Fi back: the device said it was working, but it did not get any connection and did not discover any networks.

    What I tried next was to uninstall the device in Device Manager and install either the current drivers found through Windows or the current drivers from the HP driver page. Both of those attempts failed; it always tells me that installation failed when I uninstall and reinstall drivers. If I try to update the broken driver over the internet, it tells me it is up to date, yet in Device Manager it still shows as broken.

    Next I played around a bit and managed to install drivers from 2012. When updating drivers manually and picking drivers that match the device, it lets me choose between drivers from 2010 and 2014. 2014 would be the current one; picking the 2010 driver leads to the driver being accepted, but no wireless network can be found.

    I'm pretty much out of ideas by now, short of reinstalling. The only other possibility I can think of is that the device broke at the exact moment I installed the new Windows driver, but I find that a bit unlikely. Any help fixing this would be appreciated.
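    One thing worth trying before a reinstall (a sketch; the oemNN.inf number is machine-specific, so enumerate first) is to purge the bad package from the driver store so Windows stops "updating" back to it, then install HP's package cleanly:

      rem list third-party driver packages; note the oemNN.inf assigned to the wireless adapter
      pnputil -e
      rem force-remove the broken package (oem42.inf is a placeholder - use the number from the listing)
      pnputil -f -d oem42.inf

    Then reinstall the driver from HP's SoftPaq and rescan for hardware changes in Device Manager.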

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

      -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx. half a million hits a day.

    When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is fine for running Apache and Tomcat, and I found a few posts showing how to set up Apache and Tomcat to limit memory usage; I have already performed those steps. The memory is not being used up as quickly now, but it is still decreasing over time.

    We have other Amazon EC2 small instances running Grails apps and the memory is fairly normal on those nodes, but they would not be receiving as much traffic. Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                   total       used       free     shared    buffers     cached
      Mem:          1657       1380        277          0        158        773
      -/+ buffers/cache:        447       1209
      Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
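    A short worked reading of that free output (standard Linux accounting, nothing EC2-specific): the kernel counts buffers and page cache as "used", but it gives them back on demand, so the line that matters is "-/+ buffers/cache".

      truly used by processes  ~ 1380 - 158 - 773 ~ 449 MB   (shown as 447)
      effectively free         ~  277 + 158 + 773 ~ 1208 MB  (shown as 1209)

    So roughly 1.2 GB is still available and swap is untouched; the steady fall in the "free" column is mostly the page cache growing, which the OS only shrinks when applications actually ask for the memory.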

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server.

    My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and compile everything myself, but that's going to be too much of a hassle. I'm thinking about two solutions:

      Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies will be downloaded into my VM, and my server can use the VM as the source repo for apt-get?

      Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I was working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
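    On the Plan A detail: the actual apt flag is --download-only (or -d), and the fetched .debs land in /var/cache/apt/archives on the VM, which you can then hand to the offline box. One caveat: apt only downloads packages the VM itself is missing, so run this on a VM that mirrors the dev server's installed set (or in a chroot). A rough sketch with placeholder hostnames:

      # on the internet-connected VM
      sudo apt-get install --download-only nginx
      ls /var/cache/apt/archives/*.deb

      # simplest hand-off: copy the debs over and install them directly
      rsync -av /var/cache/apt/archives/ devserver:/tmp/debs/
      ssh devserver 'sudo dpkg -i /tmp/debs/*.deb'

    apt-cacher / apt-cacher-ng automate the same idea by acting as a caching proxy repository, which is less manual once set up.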

    Read the article

  • Daylight saving time not changing on some Active Directory domain clients

    - by Nick Gorbikoff
    We just had the daylight saving time change in the US, and PCs on my network are behaving strangely: some of them changed time and some didn't.

    My network: 2 locations, both in the Midwest, in the same time zone.

      Location 1: 120 PCs (Windows XP and Windows 2000), with 1 Active Directory domain controller on Windows 2003 Standard, plus a couple of Windows 2000 servers (up to date); the rest of the servers are Xen or Debian machines (all up to date).

      Location 2: 10 PCs and a shared LAN NAS, connected through an OpenVPN link. All of these PCs are running fine, and they all connect to our AD domain controller.

    The routers/firewalls in both locations are pfSense boxes with the NTP service running, and they are up to date.

    I've tried all the usual suspects: I have all the latest updates installed, I've restarted the machines, the domain controller is running fine, and most computers are running fine. I have only one domain controller on my network, and my firewall (pfSense) also serves as the NTP server. All of the Linux machines are fine since they query the firewall/router for the time.

    About 1/3 of my PCs are 1 hour behind. If I change them manually they just change back (the way domain PCs are supposed to). I've tried everything but I can't think of anything else to try.
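    A point worth separating out: the DST rules live in each machine's time zone data, not in the time sync, so a healthy NTP/w32time setup won't fix a client whose rules are stale. A quick check on an affected XP box might look like the sketch below (standard w32tm usage as far as I recall; Windows 2000 is out of support, so machines with outdated rules need the relevant time zone update or a manual edit with tzedit):

      rem show the local time zone and its DST start/end rules
      w32tm /tz
      rem force a time re-sync, to rule out plain clock drift
      w32tm /resync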

    Read the article

  • How do I prevent lighttpd from caching static files, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files through HTTP, the updates are not taken into account and lighty serves the old file.

    If I manually rename a file to something different, lighttpd returns a 404 error, and if I rename the file back, I get the correct updated version. It seems lighty is using some kind of cache mechanism of its own (which is fine) to return static files; unfortunately, that mechanism doesn't seem to notice when files are modified.

    I checked with Wireshark, and my browser really is requesting the file, so this is not a browser caching issue. It returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date.

    Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
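    Two lighttpd knobs are worth checking here, offered as a sketch of things to try rather than a confirmed fix: the server-side stat cache, which is what remembers size/mtime between requests, and the ETag inputs, which decide when a 304 is sent.

      # lighttpd.conf
      server.stat-cache-engine = "disable"   # or "simple"; "fam" in particular can hand out stale stat() data

      static-file.etags = "enable"
      etag.use-mtime    = "enable"           # make ETags change when the mtime changes
      etag.use-size     = "enable"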

    Read the article

  • PHP mail() function stopped working on Windows 2008 R2 IIS 7.5. Why?

    - by Karl
    PHP 5.3.13 and, as noted, IIS 7.5. PHP mail() was working fine until I did 3 things (at the same time): (a) added memory to the server, taking it from 4 GB to 5 GB; (b) ran Windows Update and applied all available updates; (c) removed the SQL Server installation.

    The Windows 2008 R2 SMTP server still works fine. I know this because I can drop a file in the pickup folder and the mail is delivered. This PHP test script:

      <?php
      $to = 'my_name@another_domain.com';
      $subject = 'Test email using PHP';
      $message = 'This is a test email message' . "\r\n";
      $headers = 'From:[email protected]' . "\r\n" .
                 'Reply-To:[email protected]' . "\r\n" .
                 'X-Mailer: PHP/' . phpversion();
      mail($to, $subject, $message, $headers, '[email protected]');
      ?>

    creates this entry in PHP's mail.log:

      mail() on [C:\www\pgs.com\store\admin\test_php_mail.php:1]: To: my_name@another_domain.com -- Headers: From:[email protected] Reply-To:[email protected] X-Mailer: PHP/5.3.13

    When using PHP now, I never see a file dropping into the IIS pickup folder. And one other thing: when using previously working features on the site (such as password recovery), no entry is made in the mail.log. (The mail log has just been set up to help solve this problem.) How do I fix this? Or at least, how do I diagnose the problem? Thanks.
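    On Windows, PHP's mail() does not write to the pickup folder itself; it speaks SMTP to whatever host and port php.ini points at, and the IIS SMTP service then drops the message into pickup/queue. So the first things I would check, as a sketch (values below are the usual defaults/placeholders, confirm against phpinfo()):

      ; php.ini - [mail function] section used by PHP on Windows
      SMTP = localhost
      smtp_port = 25
      sendmail_from = [email protected]

    Then verify something is still listening on port 25 after the Windows updates (netstat -an | findstr :25); mail() producing a mail.log entry but no file in pickup usually points at the SMTP connection itself being refused or blocked.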

    Read the article

  • Cannot connect to a shared network drive

    - by dublintech
    I am using Windows 7 and I cannot connect to a shared network drive on another machine. I can ping the machine, I can connect to it with Remote Desktop, and it is on the same subnet. My friend, with the exact same laptop as me (on the same network and in the same workgroup), can connect to the shared folder, and both that machine and my friend's machine can see shared folders on my machine. I also cannot see shared folders on the friend's laptop.

    When I select diagnose, Windows tells me nothing useful. When I select "see details" on the error pop-up, I see:

      Error code: 0x80004005

    (Google doesn't help much.) I can nbtstat -a the machine that has the shared folder. The same thing happens with my firewall turned off. I have ensured my Windows 7 has all updates, I run Security Essentials to make sure my laptop is clean, and I ran CCleaner to clean up my registry. Same error. I have tried with my laptop on both wireless and ethernet. As you can imagine, I am banging my head against the wall on this one.
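    Two quick command-line checks that often say more than the GUI's generic 0x80004005 (standard Windows tools; "machinename" and the share name are placeholders):

      rem can the shares be enumerated at all, and what error comes back?
      net view \\machinename
      rem mapping from the command line usually reports a more specific error number than the GUI
      net use Z: \\machinename\share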

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased with how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem.

    Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+ or UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server with a lower spec; it took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB, and now it is only writing about 1 MB every 15 seconds.

    Does anyone have any advice on debugging the issue? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
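    A minimal triage pass I would run while one of the slow jobs is in progress, to separate "the disk is struggling" from "the CPU is pegged by encryption" (standard tools; iostat comes from the sysstat package and smartctl from smartmontools, and /dev/sdb is a placeholder for the MySQL drive):

      vmstat 5                     # high 'wa' = waiting on disk; high 'us'/'sy' points at CPU (e.g. ecryptfs/gzip)
      iostat -x 5                  # per-disk utilisation and await times; is the MySQL drive saturated?
      sudo smartctl -a /dev/sdb    # check the data drive for reallocated or pending sectors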

    Read the article

  • Windows 7 clean install becomes corrupt after reboot (repeated many fresh installs)

    - by pjotr_dolphin
    My laptop keeps crashing on boot after a clean Windows 7 install. Here is the story and some facts.

    Computer: Samsung NP900X3C-A04HK (256 GB SSD, 8 GB RAM). OS to install: Windows 7 Ultimate SP1 (not Samsung's image, my own fresh copy of Windows).

    I purchased this laptop about a year ago and never booted it into the Windows Home that was installed on it; I installed Ubuntu directly, with full disk encryption, which of course wiped the complete disk (including the Samsung recovery partition). After some time I felt like going back to Windows, as Windows 7 is actually quite nice, so I bought a fresh Windows 7 Ultimate with SP1.

    Now to the tricky part. Windows installs perfectly, and after installing all Windows updates, the drivers from Samsung, and the software I need, it is time to shut it down and go to bed. Starting it up again, it does not boot. These are the types of errors I have gotten so far (I have fresh-installed it more than a dozen times now, and tried different suggestions from threads on the net):

      Windows failed to start...
      Status: 0xc000000f
      Info: The boot selection failed because a required device is inaccessible.

      File: /boot/bcd
      Status: 0xc000000f
      Info: an error occurred while attempting to read the boot configuration data.

    And some other errors I don't remember. I have run different disk checks, and they all say my SSD is in perfect shape.

    Note: soft reboots from the Windows menu always work; the install never gets corrupted that way. But if I shut down and then start it up again, that is when it happens. Can someone help me avoid having to go back to Ubuntu? What can be the cause, and how can it be fixed so I do not get these problems again?
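    Not a root cause, but when it happens again, the standard way to rebuild the boot files from the Windows 7 install media (boot it, choose Repair your computer, then Command Prompt) is worth having on hand; these are stock recovery-environment commands:

      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd
      rem if the BCD store itself keeps getting damaged, recreating it from the installed copy also works
      bcdboot C:\Windows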

    Read the article

  • mod_rewrite and Apache questions

    - by John
    We have an interesting situation in relation to some help desk software that we are trying to set up. This is a web-based application that allows customers and staff to log in and access tickets, supply updates, etc. The challenge involves the two different domains that we use and the mod_rewrite rules to make it all work with our SSL certificate, which is only bound to one of the domains. The scenarios are:

      1. If you access http://support.domain1.com/support then it redirects fine to https://support.domain2.com/support
      2. If you access http://support.domain2.com/support then it redirects fine to https://support.domain2.com/support
      3. If you access https://support.domain1.com/support then it throws a "server cannot be found" error
      4. If you access https://support.domain1.com/support/ after having visited https://support.domain2.com/support, then you are presented with a "this connection is untrusted" error about the certificate only being valid for the domain2 domain instead of the domain1 domain name

    I have tried just about every mod_rewrite rule that I can think of and have not been able to find the correct combination. I was curious if anyone had ideas on how to make the redirects work correctly. In the end, we need all customers and staff to land at https://support.domain2.com/support regardless of which of the URL combinations above they enter. Thanks in advance for your help with this.
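    For scenarios 1 and 2, a plain-HTTP rewrite along the lines of the sketch below is enough (domain names taken from the post, the rest illustrative). Scenarios 3 and 4 cannot be solved by mod_rewrite alone, though: the TLS handshake for https://support.domain1.com happens before Apache ever sees the request, so the only clean fixes are a certificate that also covers domain1 (a SAN/UCC cert, or a second certificate on its own IP) or simply not serving HTTPS on domain1 at all.

      <VirtualHost *:80>
          ServerName  support.domain1.com
          ServerAlias support.domain2.com
          RewriteEngine On
          # send every plain-HTTP request to the canonical SSL host
          RewriteRule ^/?(.*)$ https://support.domain2.com/$1 [R=301,L]
      </VirtualHost>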

    Read the article
