Search Results

Search found 18916 results on 757 pages for 'sharing nothing architect'.


  • Help recovering a partition

    - by goshopedero
    Okay, so I had one NTFS partition and I wanted to resize it, but while resizing it with Partition Magic an error occurred and now I can't access the partition anymore. I also have Slackware 13, and I tried mounting the partition from there, but it didn't succeed. A friend of mine came to my house with a live CD called BackTrack 3, and when he booted from the CD he was able to mount the damaged partition and read/write to it. I saw my files; they are all there, so nothing is erased, the partition is just somehow damaged. The strange thing was that from BackTrack we weren't able to mount some of the working partitions on my computer, yet we could mount the damaged one. So I am asking for some help here: my files are all there, and I saw them from BackTrack. What can I do to fix the partition so it is usable from Windows/Slackware again? Please tell me anything you've got, because I have some important data on it. Thank you.
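
    One possible first step from the Slackware side, assuming the damaged partition is /dev/sda1 (the device name is a guess; check with fdisk -l): ntfsfix from the ntfsprogs/ntfs-3g package repairs common NTFS inconsistencies, and a read-only mount lets you copy the data off before trying anything riskier.

        sudo fdisk -l                                        # confirm which device the NTFS partition is
        sudo ntfsfix /dev/sda1                               # fix common NTFS errors and flag the volume for chkdsk
        sudo mkdir -p /mnt/recover
        sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/recover   # read-only mount, copy the important data off first

    From the Windows side, chkdsk /f on that partition covers similar ground, and TestDisk can rebuild the partition table if the resize damaged it rather than the filesystem.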

    Read the article

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. As relatively well known free, open-source software, I would have thought that TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine): TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536–131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data. Whilst the hidden headers supposedly "cannot be identified", is it not possible to, on encountering an encrypted volume encrypted using TrueCrypt, determine at which offset the header was successfully decrypted, and from that determine if you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right -- or am I reading it wrong?

    Read the article

  • D-Link wireless router losing outbound data

    - by gsteinert
    I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when requesting web pages (from within the network or via the web), the back end of the page seems to be dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was sent (~6000 bytes instead of the 12000+ bytes of the file). Removing the router from the equation fixes the issue and I can download files of any size with no problems. My theory is that the outgoing packets are either being dropped or held up by the router's configuration. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to upload packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box will be on 24/7... not the most ideal of circumstances. Gary
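
    Truncated responses that disappear when the router is removed sometimes point to an MTU/MSS problem rather than to Apache. A quick test from the Linux box, assuming the interface is eth0 and using any reachable external host (both are placeholders):

        # Find the largest payload that passes without fragmentation (1472 + 28 bytes of headers = 1500)
        ping -M do -s 1472 example.com
        # If that fails, step the size down until it works, then try matching the interface MTU to it
        sudo ip link set dev eth0 mtu 1400        # 1400 is only an example value, not a recommendation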

    Read the article

  • How do you permanently disable the 'This Connection is Untrusted' page in Firefox?

    - by TheIronChef9
    I'm going insane. Can someone please help me COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox. Facts: I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today). It is a work computer and I have to use my employer's proxy. Visiting web pages/web apps like Gmail or Google brings up the 'This Connection is Untrusted' page and I have to go through the whole tedious task of selecting 'I Understand the Risks', adding exceptions, etc. The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever. For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text; as a result these sites are completely useless. I wasted hours trying to solve this for stackoverflow.com. These issues happen in Firefox on my Windows XP machine as well (also using the same proxy). I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not. Also, my system time and date are correct. I've also tried the lies on this page, with no good results. Edit: I've also tried going into the Advanced, Certificates, Validation setup page and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing happened, even after restarting Firefox or rebooting. I need help. Thanks.
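
    If the warnings come from an SSL-intercepting corporate proxy, the usual cure is to trust the proxy's CA certificate once instead of adding per-site exceptions. A sketch, assuming the CA has been exported as proxy-ca.pem and with the profile directory name as a placeholder:

        # certutil comes from the libnss3-tools package on Ubuntu
        sudo apt-get install libnss3-tools
        # Import the proxy's CA into the Firefox profile's certificate store;
        # replace xxxxxxxx.default with the real profile directory name
        certutil -A -n "Corporate proxy CA" -t "C,," \
                 -i proxy-ca.pem -d ~/.mozilla/firefox/xxxxxxxx.default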

    Read the article

  • How do I know if I managed to completely remove an undetected trojan?

    - by ubuntuisbetter
    I caught a trojan that uses explorer.exe to reproduce itself if its autostart entry or its main exe file in Programs/x is deleted. It had already tried to contact a suspicious server via explorer.exe; I blocked that with my firewall. I then:
      - removed the autostart entries from the registry,
      - looked through my services for anything suspicious,
      - deleted the trojan from Programs/, and
      - went through System Volume Information to find a two-month-old explorer.exe and replaced the possibly infected one.
    There are no suspicious processes running anymore (no duplicate explorer.exe) and nothing is trying to connect to the trojan owner's server either. I checked my system with several anti-malware programs too. What the trojan did:
      - it started a second explorer.exe;
      - whenever I deleted the main trojan exe file, it was reproduced (by the second explorer.exe);
      - whenever I deleted the autostart entry, it was reproduced by that explorer.exe too.
    When I terminated the suspicious explorer.exe, which used only half as much memory as the less suspicious one from Windows, a strange thing happened that I know from the computers in my informatics class: a window popped up in the top left of my explorer-less desktop, titled "Personal settings for ... are ...", which obviously copied some files. Then both explorer.exe processes started again and the trojan was everywhere again. What did the trojan actually do to get explorer to rescue it? Is my PC clean of this newish trojan now? What other locations should I check for the trojan? The trojan doesn't seem very high-level; could it have changed other system files, or is the autostart entry vital for it? Thanks in advance, your trojan-paranoid friend (getting Linux in a week).

    Read the article

  • Apache access and error logs written to the same file

    - by user196075
    I have an issue where the access and error logs are written to the same file! The configuration in virtualhosts.conf is the following:

        <VirtualHost *:80>
            ServerName ************
            ServerAdmin support@************8
            DocumentRoot /var/www/html/*********.com
            ErrorLog /var/log/httpd/********/********.com_error_log
            CustomLog /var/log/httpd/********/********.com_access_log combined
            <Directory /var/www/html/***********.com>
                Options -Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    As you can see from the configuration, the access and error logs should each be saved separately, but both are written to *.com_access_log. I have double-checked all permissions, group and owner... and can't find anything wrong. A previous error in the log file:

        [Thu Sep 19 14:15:02 2013] [error] [client 192.168.10.54] client denied by server configuration: /var/www/html/**********/show_has_offers.php

    I have tried to generate the same error; I can find the hit in the access log only, as follows:

        192.168.10.75 - - [24/Oct/2013:08:11:14 +0000] "GET /show_has_offers.php HTTP/1.1" 404 1586 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0" 0 17332

    and nothing in the error log! Please advise...
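
    A few checks that usually narrow this down; the SITE placeholder below stands in for the redacted directory and file names, so adjust it to the real paths:

        # Dump the vhost configuration Apache actually parsed
        apachectl -S                # on RHEL/CentOS: httpd -S
        # The same inode on both paths would mean the two "files" are really one (hard link or symlink)
        ls -li /var/log/httpd/SITE/SITE.com_error_log /var/log/httpd/SITE/SITE.com_access_log
        # Look for any other ErrorLog/CustomLog directives that might apply to this vhost
        grep -Rn -e ErrorLog -e CustomLog /etc/httpd/conf /etc/httpd/conf.d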

    Read the article

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. The hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8 GB of RAM, 2x 300 GB HDDs. /home is ext2 on one disk and everything else is on the other (/ is also ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages. The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) that the filesystem is holding things up. Sometimes I can make this happen by doing a Subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under too. It doesn't make the app crash; it's just completely unresponsive for a few seconds. Googling has not helped. The closest I got was to remove the sync attribute on the file system, which achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything either, and it was all he could think of. What is going on, and can I fix it? (And how?)
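
    log_wait_commit means the process is blocked waiting on the ext3 journal commit, so something else is generating enough writes to make that commit slow. A hedged place to start; the mount options are examples to test, not recommendations:

        # See which processes are actually writing during a stall
        sudo iotop -o
        # Lengthen the ext3 journal commit interval and cut atime updates on /
        # (commit=60 trades up to 60 s of recent data on a power failure)
        sudo mount -o remount,commit=60,relatime /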

    Read the article

  • Clean installation of RHEL 5.5 claims package "desktops" is missing

    - by TKguru42
    Hi all, I'm a student worker in the CS department of my university, so please forgive any unprofessional descriptions; simplified explanations are appreciated. I recently replaced some bad graphics cards in a few public workstations. The machines are all the same model. Before putting them back on the network I did fresh installs of RHEL: first I tried 5.4, but yum update ran into all sorts of ugly dependency errors, and if I tried to remove any of the problematic packages, the whole operating system was FUBAR'd. Using RHEL 5.5 gave me the same errors during install, saying that "java.1.5.1-sun*" and "desktops" were missing, but yum update didn't have any dependency problems. Now that I've tried logging in through the GUI, there is no GUI past the standard RHEL login page. The desktop is a uniform light teal and there's no system tray. An xclock window and an xterm window are open, and Firefox opens automatically, but that's it. Nothing else. What's REALLY confusing is that the computer claims that GNOME is already installed, except it clearly isn't working. Any help or advice is greatly appreciated. If it helps, our department uses Kickstart to run our standard Linux installs. I can try to get the script if that would be of use. Thank you!
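
    Falling back to a bare xclock/xterm session usually means the desktop package group never got installed and the session defaulted to something minimal. A sketch of what to check; the group names below are the usual RHEL 5 ones and may differ on your media:

        # See which desktop-related package groups the installer actually pulled in
        yum grouplist | grep -i -e gnome -e desktop -e "x window"
        # Install the missing groups (names assumed; use whatever grouplist reports)
        sudo yum groupinstall "X Window System" "GNOME Desktop Environment"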

    Read the article

  • Run 3 monitors on two different video cards?

    - by hullot
    Can I run 3 monitors on two different video cards? I have an ATI card and an Nvidia card. The ATI has 2 HDMI connections and they both work. Both cards are also picked up in Windows, one being the ATI and the other the Nvidia, although the Nvidia shows up as a VGA controller even though the card only takes 2 DVI. So, one DVI cable goes into the Nvidia card. Of the 3 monitors, only the 2 HDMI ones on the ATI are picked up, not the third one, which is connected to the Nvidia via DVI. How can I run three monitors then? I assumed I can't install both drivers, so I'm unsure what to do. Is this possible? I just want the Nvidia card to power the third screen; no gaming on it, nothing. Also, the ATI is picked up as the primary card, so no hurdle there. EDIT: Hm, I just installed the Nvidia drivers and it picked up the third screen no problem. Hope there aren't any major conflicts. I will post this as an answer and mark it correct when I'm able; can't as a new user.

    Read the article

  • How does everyone set up AWS for PHP with a git workflow while worrying about distributing EC2?

    - by Parris
    Hello, I have been looking for something like Heroku but for PHP, and after much frustration (and almost finding what I need, but not quite) we decided to just go with AWS without any other abstraction. We are using PHP 5.3 (and CakePHP 1.3), and are currently using git. Ubuntu seems like the easiest way to get both of those on there, and we will most likely use that. We aren't really going to worry about outgoing email; we are using SMTP through Gmail, but will most likely switch to some other service eventually. I have 3 questions: 1) I have been looking at Zend Server, and I am not quite sure how it is more beneficial than XAMPP. Perhaps it is not? 2) I suppose that to make the application scale we would need multiple instances of some EC2 AMI, and then just duplicate it and such. The question then becomes: how do we make sure all EC2 instances are up to date? 3) I understand the concept of load balancing to some degree: in one region you select a bunch of servers and have it load balance across them. The question then becomes: what about worldwide? How do I make it so that traffic is directed to the correct EC2 server? I have heard of Route 53 and tried signing up for it, but nothing appears in my control panel. Or perhaps it is just a DNS setting with my domain registrar? AHHH... some tutorial would be helpful!
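
    For question 2, one low-tech pattern is to treat git itself as the distribution mechanism: each instance keeps a checkout of the repository, and a deploy script pushes every instance to the same tag. A minimal sketch; the hostnames, user, path and tag are all hypothetical:

        # Deploy the same tagged revision to every web instance
        for host in web1.example.com web2.example.com; do
            ssh deploy@"$host" 'cd /var/www/app && git fetch --all --tags && git checkout v1.2.3'
        done

    Baking a fresh AMI per release and replacing the instances behind the load balancer is the other common pattern, and it sidesteps the "is every box up to date?" question entirely.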

    Read the article

  • Are 40+ logons per user on Exchange 2003 normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I've found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" in HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, but I'm not sure this is a proper fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the logons while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know if those cause the same thing. Is this normal? Could there be a bug in the software? (The Outlook 2007 installs have nothing special configured, except the added user; pure vanilla installs.) The users are mobile, Outlook generates up to 15 connections every time it connects, and I've read (no sources, sorry) that Outlook doesn't time out connections for 2 hours. I might have to set this number really high to prevent it from being a problem.

    Read the article

  • hg clone has stopped working on my Vista box

    - by vkraemer
    I have a Windows Vista machine that had been connecting to http://hg.netbeans.org productively for a while... until recently. Lately, when I attempt to pull or clone, the update appears to stall. I see the following messages on the screen when I attempt to clone:

        destination directory: web-main
        requesting all changes
        adding changesets

    And then... nothing happens. I have opened the Task Manager and there doesn't appear to be any significant network activity for HOURS. I can contact the server with Firefox and see the proper output. I can clone from the repo with Solaris and/or Mac OS X, so the issue doesn't appear to be at the 'other end'. I had been running a fairly old version of Mercurial before this started happening. After it started happening, I upgraded to Mercurial 1.5.2, which did not help resolve the issue at all. What are the likely causes and workarounds for this?
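
    A couple of things that may narrow it down, run from a command prompt on the Vista box. The repository URL is reconstructed from the messages above, and the proxy host is a placeholder; the point is to see exactly which step stalls and whether a proxy difference explains why the other machines work:

        # Show low-level progress so the exact step that hangs is visible
        hg --debug clone http://hg.netbeans.org/web-main
        # If this box sits behind a proxy the others don't use, point Mercurial at it for one command
        hg --config http_proxy.host=proxy.example.com:8080 clone http://hg.netbeans.org/web-main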

    Read the article

  • Should I be able to get the Windows Phone 8 emulator running with this AMD config?

    - by pete
    I have just bought a new system: an AMD A4-5300 Trinity 3.4 GHz on a Gigabyte GA-F2A55M-DS2 motherboard. I checked, and the A4-5300 supports virtualisation. I have run the SLAT checking tool and it says that my machine supports SLAT. I've been into the BIOS and ensured that SVM is enabled. I have also run the coreinfo tool, and it indicates that SVM is not available on this machine, nor is NPT; I'm not sure why this conflicts with the actual BIOS setting (I checked with Gigabyte, and it seems that NPT and NX are not options that can be set in the BIOS). But when I run the emulator, it hangs. I can see some activity in Hyper-V Manager, but it doesn't get very far, and there are no meaningful error messages. I have been working through the plethora of suggestions to get around this, but so far nothing has worked. My question is: should this setup allow the WP8 emulator to run? I am positive the CPU supports virtualisation, but are there issues with the motherboard here? I have spent a huge amount of time on this so far... am I wasting my time with the current configuration, or should it work? Thanks

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side. Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important; there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
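
    A minimal sketch of such a check: git status --porcelain prints nothing at all when the tree is clean, which is easier to test than grepping the human-readable output. The script path and repository path are placeholders, and cron mails any output to the job's owner by default:

        #!/bin/sh
        # /usr/local/bin/check-git-clean.sh -- report uncommitted changes in one repo
        REPO=/path/gitrepo
        cd "$REPO" || exit 1
        # --porcelain gives stable, script-friendly output; empty output means a clean tree
        if [ -n "$(git status --porcelain)" ]; then
            echo "Uncommitted changes in $REPO:"
            git status --short
        fi

    A crontab entry such as "0 * * * * /usr/local/bin/check-git-clean.sh" would then run it hourly and mail the report whenever the script prints anything.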

    Read the article

  • Virtual server HDD shrinks for no apparent reason

    - by Christian
    We have a virtually hosted Linux server, and in the last few months, every now and then, the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot, where "size" should be 400GB. This has happened twice before, and again today. The previous times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail explaining that I can use "df -h" to view available disk space (well, duh; how do you think I noticed this particular issue?). Questions about whether and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect they might have a cron job running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change the HDD size in a virtual machine. a) What could be the cause of a shrinking HDD in an Ubuntu 12.04.3 LTS server? Could it be anything in our virtual machine, or is it more likely to be an issue with the VM host? b) Can I do anything about it without needing to contact the host's support? c) Is there any way I can prevent this from happening at all?
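
    To tell whether the shrinking happens inside the guest or on the host side, it helps to capture three numbers during an incident: what the mounted filesystem reports, what the block device reports, and what the filesystem's own superblock says. The device name below is an assumption; check lsblk first:

        df -h /                                              # size as the mounted filesystem reports it
        lsblk -b                                             # size of every block device, in bytes
        sudo dumpe2fs -h /dev/vda1 | grep -i 'block count'   # the ext filesystem's own idea of its size

    If the block device itself has shrunk, the change happened on the host side (for example a thin-provisioned or quota-limited volume) and only the provider can explain it.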

    Read the article

  • What are possible results/side effects if replication between DC's in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers, but in real life things don't always occur the way you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages there is only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody was able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting it back on the network. I am afraid to put the DC back on the network for fear that something else could go wrong. So, what other issues could one expect to run into when two DCs have been unreplicated for over a month?

    Read the article

  • mod_rewrite with AJAX applications: possible?

    - by MrJackV
    I am trying to run Shell In a Box (link) through another server (the computer running shellinabox is not accessible from the internet). Ideally I could use ProxyPass in the Apache config to set up a reverse proxy. The problem is that I can't access the conf file, so I tried using .htaccess and discovered that I cannot use ProxyPass there. I then used mod_rewrite to do the job. Currently I have the following in the .htaccess file:

        RewriteEngine On
        RewriteRule ^$ http://10.1.13.236:4200/ [P]

    However, while it displays the title correctly, and if I open up the source code I can see there is something in the page, nothing is displayed on the screen (it remains blank). My suspicion is that there are problems with AJAX and this kind of proxy. What I am trying to accomplish with mod_rewrite is behaviour as close as possible to ProxyPass (mirror a website in a subdirectory). Is this possible? Is there some other solution (I tried phproxy and khproxy, but neither of them is able to display anything)? Thanks in advance
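
    One thing worth noting: ^$ only matches the bare directory URL, so the page's own CSS, JavaScript and AJAX requests never get proxied, which would explain a blank page whose source still contains content. A sketch of a broader rule, assuming mod_proxy/mod_proxy_http are enabled server-wide (the [P] flag needs them either way):

        RewriteEngine On
        # Forward everything under this directory, not just the empty path,
        # so scripts, stylesheets and AJAX calls reach shellinabox too
        RewriteRule ^(.*)$ http://10.1.13.236:4200/$1 [P,L]

    Responses containing absolute links or redirects back to port 4200 would still need ProxyPassReverse or an output filter, which .htaccess alone cannot provide.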

    Read the article

  • Windows 7 PC freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from a format/reinstall. I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64 and the standard developer software installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc. Recently I've been having this weird problem where, after I lock my PC, the screen stays black for a while after unlocking. I can press Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC: if I'm only away a few minutes, it takes about a minute to get to the desktop; if I'm away for an hour, it can take up to 15 minutes; and if I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time. One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it. I uninstalled all software back to the point when I remember it starting to happen, and it still does it. I was using this PC for a few weeks without this problem happening at all. Does anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it. As I said above, if this isn't the right forum for these types of questions, please let me know. Thanks in advance!

    Read the article

  • Ubuntu server 10.04 disconnects after short periods of inactivity on my site

    - by user57019
    I'm new to Ubuntu (installed it for the first time just a couple of days ago on my server). I have Ubuntu Server 10.04 and am just using the terminal, no GUI like GNOME. So far it's working pretty well except for one big thing. Whenever I go to sleep and there's no activity on my server (it's not a big site, so active users drop to 0 during the night), the server kind of disconnects. The only thing that can bring the site back online is to restart the whole server. I've tried disabling power saving by using setterm, but that changes nothing. Even if I wake the server by pressing a key or so, the site won't go back online! I've tried just restarting both Apache and MySQL (I'm using a LAMP server, by the way), but not even that works. But as soon as I turn the power off and on at the server, everything works as normal for a few minutes of inactivity (~5-15 minutes, I'd guess) and then it's down again, unless someone logs in to the site and is active. I was previously using XAMPP on my laptop with Windows XP and that worked 24/7, so I don't think it's anything to do with my router or ISP. This is driving me crazy! My site is down all the time I'm at school, as I have no way to restart the server if it goes offline. Does anyone have a clue as to what could be wrong?
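
    The next time it drops, it would help to know whether the network link itself goes down or only the services stop answering. A few things to look at from the console during an outage, assuming the wired interface is eth0 (adjust to your interface name):

        ip addr show eth0                            # does the interface still have its address?
        dmesg | tail -n 50                           # look for "link down" or power-management messages
        sudo ethtool eth0 | grep -i "link detected"  # the driver's view of the physical link

    If the link and address are fine but the site is unreachable from outside, the router's NAT or DHCP lease handling becomes the next suspect.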

    Read the article

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor configuration was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion, because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server which had four cores presented to it.

    Read the article

  • I can't change the MySQL port (5.6.12) by changing the lines in my.ini (Windows 8)

    - by videador
    I was trying to change the port of the MySQL server on my local machine, but I can't. The MySQL version is 5.6.12, it is a WAMP installation, and I am on Windows 8. I changed these lines in my my.ini file, located in C:\wamp\bin\mysql\mysql5.6.12:

        [client]
        #password   = your_password
        port        = 3307
        socket      = /tmp/mysql.sock

        [wampmysqld]
        port        = 3307
        socket      = /tmp/mysql.sock
        key_buffer  = 16M
        max_allowed_packet = 1M

    The previous values were 3306. OK, so then I restarted the installed server, but it doesn't work; the MySQL server is still running on 3306. I then changed the service's path to the following, to make sure that my my.ini is read by the mysql instance:

        c:\wamp\bin\mysql\mysql5.6.12\bin\mysqld.exe --defaults-file="C:\wamp\bin\mysql\mysql5.6.12\my.ini" wampmysqld

    But nothing; it still doesn't work. My last bullet was to copy the content of my.ini into the file my-default.ini (a file placed in C:\wamp\bin\mysql\mysql5.6.12\ whose purpose I don't know). However, it still doesn't work and the port is still 3306.
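
    Two quick checks that usually settle whether the server is reading this my.ini at all; both commands work the same way from a Windows command prompt:

        # Which port is the running server really listening on?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'port';"
        # Which option files (and values) does this mysqld binary pick up by default?
        mysqld --print-defaults

    If --print-defaults still reports port=3306, the service is reading a different option file (or none), and the --defaults-file argument stored in the Windows service definition is the thing to fix rather than the command line.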

    Read the article

  • PC sometimes turns on, sometimes not

    - by cprogcr
    Some time ago, the PC had the same problem: it wouldn't turn on. When I pressed the button it turned on but showed nothing. I replaced the CPU and that seemed to work. I didn't use the PC that much after that, rarely, you know. But now, after some time, it has the same problem again. It turns on, the front light is on, and it makes the normal noise a PC makes when it's turned on, but if I try to shut it down by holding the power button, it just doesn't work. So again I tried replacing the CPU, and it worked again. I kept it on all day, just to be sure, and sometimes I would restart it and it would work again; no problems at all. Then I turned it off at night, and the next morning it had the same problem again. So I tried replacing the PSU, and it worked again. While I had the PC running with the new PSU, I tried putting the old CPU back in, and it would still turn on. The same thing: I tried restarting too, and it would work. But this morning the same problem happened again. Edit: I also tried another CPU today, and still no signs of it working. I don't know what to think now.

    Read the article

  • Why won't .bash_logout run my commands?

    - by Droogans
    So I've been wondering how to run these two lines of code every time I close an open instance of Terminal:

        history -c
        cat /dev/null > ~/.bash_history

    I export HISTFILE=5 on startup, but still want to flush the history out when I'm done. I've tried looking around a bit in a couple of places and haven't had much luck. I run Linux Mint, and would also note here that I ran into a similar issue with .bash_profile; eventually I discovered I needed to place all start-up code in .bashrc, so maybe that has something to do with it. Here's my .bash_logout file:

        #!/bin/bash
        # ~/.bash_logout: executed by bash(1) when login shell exits.

        # when leaving the console clear the screen to increase privacy
        if [ "$SHLVL" = 1 ]; then
            history -c
            cat /dev/null > ~/.bash_history
            [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
        fi

        #this does nothing on exit...
        echo 'logout'; sleep 2s

    I've tried re-arranging this script many ways. I'm not sure whether I don't understand how bash works, or whether any of this is running in the first place. Does the fact that I run an X server make bash consider Terminal something that isn't a logout on exit?
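
    That last guess is likely the key: ~/.bash_logout only runs when a login shell exits, and a terminal emulator under X normally starts a non-login interactive shell, so neither the file nor its SHLVL test ever fires. A sketch of a quick check and one possible workaround:

        # Is the shell in this terminal a login shell at all?
        shopt -q login_shell && echo "login shell" || echo "not a login shell"

        # Workaround sketch: put the cleanup in ~/.bashrc as an EXIT trap instead,
        # so it runs whenever any interactive shell exits, login or not
        trap 'history -c; > ~/.bash_history' EXIT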

    Read the article

  • Java games applet not connecting to Yahoo

    - by Steve
    Hi. I am trying to play a Y! game which uses a Java applet. The applet displays the message: "Alert. Unable to connect to server. One of four things could have caused this: 1) You are behind a firewall. 2) You are not connected to the internet. 3) The games server is down. 4) You have a stale page in your cache." I have added an exception to the Windows firewall for java.exe. I am obviously connected to the Internet OK. The games server is not down when I am at home, so I doubt it is down when I am at work. I have never successfully loaded this page before, so I doubt I have a stale page in the cache. Could it be the corporate firewall? Nothing else in my web browser has been blocked before. Maybe the Java applet connects on a different port to the browser. What should I test?

    Read the article

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
    I've been trying to install Ubuntu with software RAID on my server, and there is obviously something I don't understand about the process. This is the guide that I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html I have two identical 1 TB disks in my server. I went through the initial install process and manually set up my partitions. On each disk I set up:
      - a 100 MB partition for EFI boot (I didn't originally have this, but added it based on a forum post I found after my original install failed to boot; I ended up with EFIboot since that was what the 'guided partitioning' decided to do),
      - a 970 MB partition for /, and
      - a 30 MB partition for swap.
    I then created new RAID 1 devices combining the two partitions, one from each disk, so that each partition is mirrored, and configured their usage as stated above. After saving the configuration I said yes to booting in a degraded state. The rest of the setup went normally, with no errors of any kind. I saw GRUB being installed, and again no errors. However, after rebooting the server I get the dreaded 'Insert boot media' and nothing happens. I loaded up the recovery disk and the mdadm configuration looks correct: md0 is my EFI boot partition, md1 is my / partition using ext4, and md2 is my swap partition. Running file -s /dev/md0 doesn't indicate that GRUB is there, so I attempted to reinstall GRUB using the recovery disk. I selected the md0 disk and it appeared to install just fine. Running file -s /dev/md1 shows the error "needs journal recovery"; I'm not sure whether that's related or how to fix it. Rebooting gives me the same problem: no boot media found. I've searched around the internet but can't figure out what to do next or, more importantly, how to troubleshoot what exactly is going wrong. Thanks!
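
    Two things that can be checked from the rescue environment before another GRUB attempt: whether the arrays are healthy, and clearing the "needs journal recovery" state, which just means the ext4 journal on md1 was never replayed because the volume hasn't been cleanly mounted since the last write:

        cat /proc/mdstat               # both md devices should show [UU], not a degraded state
        sudo mdadm --detail /dev/md1   # per-array health and member devices
        sudo e2fsck -f /dev/md1        # replays the journal on the (unmounted) root array

    Separately, an EFI system partition mirrored with the default mdadm 1.2 metadata is generally not readable by the firmware, because the metadata sits near the start of the partition; metadata format 0.90 or 1.0 (stored at the end) is usually needed for the firmware to see a valid ESP on either member, which would match the 'Insert boot media' symptom.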

    Read the article
