Search Results

Search found 1770 results on 71 pages for 'stupid idiot'.


  • Firefox crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit, and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system while Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles Internet access? But why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.)

    Read the article

  • Load balanced IIS. Should I use NLB, or linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS webservers running a multitude of .NET applications? My choices appear to be: 1) a hardware network load balancer, like a Cisco CSS; 2) Windows NLB; 3) some sort of Linux-based proxy, either HAProxy or another. The three servers sit as VMs on a vSphere farm, so I have the ability to clone them to up the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (a Cisco 3750), but don't control the switching/routing infrastructure beyond that to the clients. (1) is too expensive, and probably overkill for my needs. I've included it in case someone figures out a cunning way to do it on my existing network kit, which I doubt. (2) would seem to be the obvious "built-in" option, but involves fiddly messing around with network interfaces, multicast, and generally other things that seem needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong. (3) is the most interesting option, as it would appear to offer the most flexibility and customizability, but without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc., I'm not that well read on other options like HAProxy, which might be able to offer a lot more. Which would you go for, and is there anything I've not thought of?
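
    For what it's worth, option (3) can be expressed in a very small HAProxy config. A minimal sketch - server names, addresses, and the health-check URL are placeholders - that also covers the 500-error complaint about NLB, since a failed HTTP check pulls a backend out of the pool:

        frontend www
            bind *:80
            default_backend iis_pool

        backend iis_pool
            balance roundrobin
            option httpchk GET /health.aspx    # server is dropped on failed checks
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check
            server web3 10.0.0.13:80 check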

    Read the article

  • So who decided VGA cables should be symmetrical? [closed]

    - by mgb
    < rant So who decided that VGA cables should have the same gender connector at both ends? I just spent an hour trying to fix a server that would apparently boot but wouldn't display any POST messages - I needed to change some BIOS settings. The monitor gives me a helpful error message - "unable to display this resolution". Wondering how it can't show a simple VGA resolution, I reset the BIOS; when that doesn't work, I remove the BIOS battery. The system is in a rack with a keyboard and monitor mounted 6 ft away - with the cable running through the normal waist-thick rat's nest of wiring. The monitor worked with another system, so it and the VGA cable are good. I brought in another monitor and tried that - still nothing. Eventually I ripped the rack apart to discover that the other end of the VGA connector I was trying to plug into the server was actually plugged into another server box! And the second monitor I was testing was plugged into the first monitor. An error message like "I'm actually plugged into another monitor, stupid" would be useful. So would having a connector with a male end and a female end. < end rant - thank you

    Read the article

  • Windows 7 boot issues

    - by Michael
    OK, I tried to install Linux and dual-boot my laptop with Windows 7 Ultimate. I messed up. When I tried to boot to 7 it said no - something along the lines of "device not found". So, being young and stupid, I uninstalled Linux (which I could boot into), and I still could not boot to Windows. The next step was to run the startup fixes from the boot CD. Swing and a miss; I also ran fixmbr and fixboot. Which brings us up to my current place: I installed 7 again on my blank partition in hopes I could access my other partition. No dice. So my question to y'all is: how can I fix my original filesystem, or at least get to the stuff on it? In the new 7 install the old partition does not even have a drive letter. That is my sad story; any help would be appreciated.
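
    In case it helps, one hedged first step is to give the old partition a drive letter from an elevated command prompt (the volume number below is a placeholder - pick whatever diskpart lists for the old install), then let bootrec rescan for Windows installations:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 2
        DISKPART> assign letter=E
        DISKPART> exit

        bootrec /scanos
        bootrec /rebuildbcd

    If the old partition shows up in diskpart but can't be assigned a letter or read afterwards, the filesystem itself is damaged, and chkdsk or a file-recovery tool is the next stop.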

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first, a time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg That is my experimental target tank/iSCSI, and it still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
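
    One thing worth checking before deeper surgery: if a snapshot of the zvol exists from before the second format, the old filesystem can simply be cloned back. A sketch - the snapshot name here is hypothetical, and this only helps if a snapshot was actually taken:

        zfs list -t snapshot -r tank/iSCSI          # any snapshots from before the re-format?
        zfs clone tank/iSCSI@before tank/recovered  # writable copy of the old state

    The clone can then be exposed as a new iSCSI LU and scanned. If no snapshot exists, the quick re-format plus the later writes have likely overwritten the old NTFS metadata, which would be consistent with GetDataBack finding partition boundaries but no files.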

    Read the article

  • How can I find a list of all SSE instructions? What happens if a CPU doesn't support SSE?

    - by Blastcore
    So I've been reading about how processors work, and now I'm on the instruction set (SSE, SSE2, etc.) stuff, which is pretty interesting. I have a lot of questions (I've been reading this on Wikipedia): I've seen the names of some instructions that were added in SSE, but there's no explanation of any of them (maybe SSE4? They're not even listed on Wikipedia). Where can I read about what they do? How do I know which of these instructions are being used? Say I'm doing a comparison (this may be the most stupid question I've ever asked - I don't know assembly): is it possible to directly use the instruction in assembly code? (I've been looking at this: http://asm.inightmare.org/opcodelst/index.php?op=CMP) How does the processor interpret the instructions? What would happen if I had a processor without any of the SSE instructions? (I suppose in the case where we want to do a comparison, we wouldn't be able to, right?)
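
    On the last question: executing an SSE instruction on a CPU that doesn't support it raises an invalid-opcode exception, so the program typically dies with an "illegal instruction" error; well-behaved software checks the CPU's feature flags (via the CPUID instruction) first and falls back to plain x86 code. Ordinary comparisons still work either way - CMP is base x86, not SSE. On Linux you can see which SSE levels your own CPU advertises without writing any assembly; a small Python sketch:

        # List which SSE extensions this CPU reports (Linux-only sketch:
        # reads the kernel's "flags" line from /proc/cpuinfo).
        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    break

        for ext in ["sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2"]:
            print(ext, "supported" if ext in flags else "not supported")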

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid - but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically, though, as UDP is packet based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to get a decent MTU size for that without testing yourself, so you fall back to packets of at most 1500 bytes. The network in question is a standard small-business network - we upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment can handle a minimum of 9000 bytes at the Ethernet level, and due to local firewalls I really want to get larger packets (less firewall processing), but the network is also NATed to the Internet. On top of that, different machines move around (download) large files (multi-gigabyte area) quite often for processing. The question is - can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets of more than 1500 bytes (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
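
    On the UDP worry: with IPv4, a datagram larger than the path MTU is normally fragmented at the IP layer rather than silently dropped (unless the don't-fragment bit is set), so even an oversized send usually survives a 1500-byte hop. A quick Python sketch to observe this on your own network - address and port are placeholders, and a packet capture on the receiving side shows whether the datagram crosses the wire as one jumbo frame or as fragments:

        import socket

        # Send one 8 KB UDP datagram. With jumbo frames end-to-end it travels
        # as a single frame; otherwise IPv4 fragments it into MTU-sized
        # pieces that the receiver reassembles.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"x" * 8192, ("192.168.1.50", 9999))
        sock.close()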

    Read the article

  • How to access remote network resource from local machine

    - by jerluc
    I just configured VPN access successfully, so I can now connect to my workstation at work from my personal Linux box at home. The problem is that all of the dev files for a server I'm running locally are on my personal box and cannot be transferred to my workstation (at least not in any timely manner over this connection, given the amount of data - and even if I could get the files across, the server would need many reconfigurations to run there). So essentially, I am able to run my server locally on my personal computer; however, the data sources required for the back-end are accessible only from within the office's network. Is there some way for me to access the data sources directly through a VPN connection - or, if I need to be a bit more convoluted, to connect via VPN to my workstation and then somehow connect through the workstation from my personal computer to the data sources? I couldn't care less about the speed of the connection from my server to the data sources, since they will probably only be fetched a few times every hour or so. Thanks! Sorry if this is a stupid question and/or doesn't make any sense! (And sorry for anyone who read this at stackoverflow - I posted it in the wrong area.)
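
    If the workstation runs (or can run) an SSH server, the usual trick for exactly this shape of problem is a local port forward: the laptop connects to a port on localhost, and sshd on the workstation relays the traffic on to the data source. A sketch with placeholder host names and ports:

        ssh -L 5432:datasource.internal:5432 me@workstation

        # Now point the local server at localhost:5432; the connection is
        # tunnelled through the workstation to datasource.internal:5432.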

    Read the article

  • Setting up port forwarding for web server

    - by reyjavikvi
    This could belong on Super User, but I thought this place was more appropriate. I want to run Apache on my computer and make it available to the outside world to test a couple of things. Apparently, I have to go into my router's (a TP-LINK TD-8910G) settings and forward port 80 to my PC's IP. So far so good. The thing is, since the router uses a web-based interface and it's kind of stupid, it told me that since I was using port 80 for this, I should access its settings through port 8080. Maybe it can't detect requests coming from the LAN, I don't know. The point is, now I can't access the configuration through either port, and I can't access the Internet. Specifically, trying to access anything (including 192.168.1.1, the router's settings) through port 80 turns up a blank page (maybe if I had the server running on my computer I'd get something, but I don't want to risk trying - I had to reset the router and restore the settings), and port 8080 gives a "Can't establish connection" error in Firefox (and similar ones in other browsers). Is there a way to configure the router not to redirect requests coming from inside the network? I'm a beginner with this stuff, so please try to explain in a simple way. If this is more appropriate on Super User, I'm sorry.

    Read the article

  • FTP issues with Windows 2003 box and Filezilla

    - by vanhornRF
    We've set up a Windows 2003 server with IIS 6, with FileZilla Server handling FTP. I'm not a sysadmin by any means, and neither is my other developer. I'm trying to connect to the FTP from a Mac with the Cyberduck FTP client, running OS X 10.6.3. The problem is that I can connect to the FTP address, but when I do, it hangs for a minute or two trying to list the directories. The address points at the webroot folder, so it's only trying to list the folders pertinent to the site we're working on. Eventually, after a minute or two, it will go through and list everything; then if you try to open a file or another folder, it hangs again and eventually lists the contents. My question: is there some stupid default setting we're missing? Is this a common occurrence? Like I said, I'm not a sysadmin at all, and I'm sure I'm missing some valuable details for anyone who reads this, so please fire away if you can help and need more info - I'll do my best to provide it. Thanks!
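
    A hang on directory listings while the initial connection succeeds is the classic symptom of the FTP data connection being blocked - usually an active-vs-passive mode or firewall issue, since FTP opens a second connection just for listings and transfers. One way to test from the Mac side is a few lines of Python with the standard ftplib module (host and credentials are placeholders):

        from ftplib import FTP

        ftp = FTP("ftp.example.com", timeout=30)
        ftp.login("user", "password")

        ftp.set_pasv(True)   # try passive mode first...
        print(ftp.nlst())    # ...if this hangs, retry with ftp.set_pasv(False)

        ftp.quit()

    If one mode hangs and the other doesn't, the fix is on the server side: configure a passive port range in FileZilla Server and allow it through the Windows firewall.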

    Read the article

  • Flash and Google Maps - Only Last Icon showing

    - by Peter
    I have a simple Map and geocoding sample in Flash using CS4. The problem is simple - I can retrieve a short list from the Google search API, but when I try to generate the icons on the map using a loop, only the last icon is displayed. (Ignore the house icon; it is generated earlier.) I feel I am missing something or made a stupid AS3 mistake (like treating it as if it were C#) - or even a stupid wood-for-the-trees mistake. The problem is in the last line of the code. I have added all my code just in case somebody else can find a use for it - lord knows it took me a great while to figure this out :) It runs here (also, if anybody has an idea why the icon is slightly in the wrong place on render, but corrects if you move the map - please let me know). Any help would be great. Thanks. P

        import com.google.maps.services.ClientGeocoder;
        import com.google.maps.services.GeocodingEvent;
        import com.google.maps.LatLng;
        import com.google.maps.Map;
        import com.google.maps.MapEvent;
        import com.google.maps.MapType;
        import com.google.maps.overlays.Marker;
        import com.google.maps.overlays.MarkerOptions;
        import com.google.maps.styles.FillStyle;
        import com.google.maps.styles.StrokeStyle;
        import com.google.maps.controls.*;
        import com.google.maps.overlays.*;
        import flash.display.Bitmap;
        import flash.display.BitmapData;
        import com.adobe.utils.StringUtil;
        import be.boulevart.google.ajaxapi.search.GoogleSearchResult;
        import be.boulevart.google.events.GoogleApiEvent;
        import be.boulevart.google.ajaxapi.search.local.GoogleLocalSearch;
        import be.boulevart.google.ajaxapi.search.local.data.GoogleLocalSearchItem;

        var strZip:String = "60661";
        var strAddress:String = "100 W. Jackson Blvd, chicago, IL 60661";
        var IconArray:Array = new Array();
        var SearchArray:Array = new Array();

        // The returned search data gets placed into this array
        var LocalInfo:Array = new Array();
        var intCount:int = 0;
        var intMapReady:int = 0;

        /*===================================================================
        We load the map first and then get the search criteria - this will
        keep the order of operation clean.
        ===================================================================*/
        var map:Map = new Map();
        map.key = "ABQIAAAAHwSPp7Lhew456ffD6qa2WmxT_VwdLJEfmcCgytxKjcH1jLKkiihQtfC-TbcwryvBQYhRwHWa8F_Gp9Q";
        map.setSize(new Point(600, 550));
        map.addEventListener(MapEvent.MAP_READY, onMapReady);

        // Places the map on the page
        this.addChild(map);
        map.x = 5;
        map.y = 5;

        function onMapReady(event:Event):void {
            // Center the map and place the house marker
            doGeocode();
        }

        /*===================================================================
        Geocode to return the lat and long for the specific address,
        center the map and add the house icon
        ===================================================================*/
        function doGeocode():void {
            var geocoder:ClientGeocoder = new ClientGeocoder();

            geocoder.addEventListener(GeocodingEvent.GEOCODING_SUCCESS,
                function(event:GeocodingEvent):void {
                    var objPlacemarks:Array = event.response.placemarks;
                    if (objPlacemarks.length > 0) {
                        map.setCenter(objPlacemarks[0].point, 14, MapType.NORMAL_MAP_TYPE);
                        var request:URLRequest = new URLRequest("house.png");
                        var imageLoader:Loader = new Loader();
                        imageLoader.load(request);
                        var objMarkerOptions:MarkerOptions = new MarkerOptions();
                        objMarkerOptions.icon = imageLoader;
                        objMarkerOptions.icon.scaleX = .15;
                        objMarkerOptions.icon.scaleY = .15;
                        objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER +
                            MarkerOptions.ALIGN_VERTICAL_CENTER;
                        var objMarker:Marker = new Marker(objPlacemarks[0].point, objMarkerOptions);
                        map.addOverlay(objMarker);
                        doLoadSearch();
                    }
                });

            // Failure code - good practice, really
            geocoder.addEventListener(GeocodingEvent.GEOCODING_FAILURE,
                function(event:GeocodingEvent):void {
                    txtResult.appendText("Geocoding failed");
                });

            // generate geocode
            geocoder.geocode(strAddress);
        }

        /*===================================================================
        XML loader - loads icon file / search text pairs from an XML file
        ===================================================================*/
        function doLoadSearch():void {
            var xmlLoader:URLLoader = new URLLoader();
            var xmlData:XML = new XML();
            xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
            xmlLoader.load(new URLRequest("config.xml"));

            function LoadXML(e:Event):void {
                xmlData = new XML(e.target.data);
                RetrieveSearch();
            }

            function RetrieveSearch():void {
                // extract the MapData subset and push it to an XML list object
                var xmlSearch = xmlData.MapData;
                var xmlChildren:XMLList = xmlSearch.children();

                // loop the list and extract the data into an array of
                // formatted search criteria
                for each (var Search:XML in xmlChildren) {
                    txtResult.appendText("Searching For: " + Search.Criteria +
                        " Icon=" + Search.Icon + " Zip=" + strZip + "\r\n\r\n");
                    loadLocalInfo(Search.Criteria, Search.Icon, strZip);
                }
            }
        }

        /*===================================================================
        Search functionality - does a Google API search and loads the lats
        and longs required to place the icons on the map - THIS WILL NOT
        RUN LOCALLY
        ===================================================================*/
        function loadLocalInfo(strSearch, strIcon, strZip):void {
            var objLocal:GoogleLocalSearch = new GoogleLocalSearch();
            objLocal.search(strSearch + " " + strZip, 0, "0,0", "", "");
            objLocal.addEventListener(GoogleApiEvent.LOCAL_SEARCH_RESULT, onSearchComplete);

            function onSearchComplete(e:GoogleApiEvent):void {
                var resulta:GoogleSearchResult = e.data as GoogleSearchResult;

                // Load the icon for this particular search
                var request:URLRequest = new URLRequest(strIcon);
                var imageLoader:Loader = new Loader();
                imageLoader.load(request);

                // For test purposes
                txtResult.appendText("Result Count for " + strSearch + " = " +
                    e.data.results.length + "\r\n\r\n");

                for each (var result:GoogleLocalSearchItem in e.data.results as Array) {
                    LocalInfo[intCount] = [String(result.title), strIcon,
                        String(result.latitude), String(result.longitude)];

                    // Pop the icon onto the map
                    var objLatLng:LatLng = new LatLng(parseFloat(result.latitude),
                        parseFloat(result.longitude));
                    var objMarkerOptions:MarkerOptions = new MarkerOptions();
                    objMarkerOptions.icon = imageLoader;
                    objMarkerOptions.hasShadow = false;
                    objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER +
                        MarkerOptions.ALIGN_VERTICAL_CENTER;
                    var objMarker:Marker = new Marker(objLatLng, objMarkerOptions);

                    /*******************************************************
                    *Everything* works to here - I have traced out execution
                    and all variables. It only works on the last item in the
                    array :(
                    *******************************************************/
                    map.addOverlay(objMarker);
                }
            }
        }
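
    One likely culprit, offered as a guess rather than a confirmed fix: a Loader is a display object, and a display object can only sit in one place on the display list at a time. The loop reuses the single imageLoader created before it for every MarkerOptions, so each new marker steals the icon from the previous one, and only the last marker keeps it. A sketch of the loop with the Loader created per iteration:

        for each (var result:GoogleLocalSearchItem in e.data.results as Array) {
            // a fresh Loader per marker, so each marker owns its own icon instance
            var markerLoader:Loader = new Loader();
            markerLoader.load(new URLRequest(strIcon));

            var objMarkerOptions:MarkerOptions = new MarkerOptions();
            objMarkerOptions.icon = markerLoader;
            objMarkerOptions.hasShadow = false;
            objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER +
                MarkerOptions.ALIGN_VERTICAL_CENTER;

            var objLatLng:LatLng = new LatLng(parseFloat(result.latitude),
                                              parseFloat(result.longitude));
            map.addOverlay(new Marker(objLatLng, objMarkerOptions));
        }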

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is:

        02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection

    I'm pretty sure I have the correct drivers enabled in the kernel:

        Device Drivers --->
            Network device support --->
                Wireless LAN (IEEE 802.11) --->
                    <*> Intel Wireless Wifi
                    [*]   Enable LED support in iwlagn and iwl3945 drivers
                    [*]   Enable Spectrum Measurement in iwlagn driver
                    [*]   Enable full debugging output in iwlagn and iwl3945 drivers
                    <*>   Intel Wireless WiFi Next Gen AGN (iwlagn)
                    [*]     Intel Wireless WiFi 4965AGN
                    [*]     Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series

    I tried with the other Intel drivers (iwl3945) enabled as well, and it made no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks Mala

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope some of you folks can lend me a hand: much, much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx in front as a reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions - all without modifying any default permissions in /var/www.

        drwxr-xr-x 9 root root 4096 Nov  4 22:46 www

    Inside /var/www:

        -rw-r----- 1 www-data www-data     177 Mar 11  2012 file1
        drwxr-x--- 6 www-data www-data    4096 Sep 10  2012 dir1
        drwxr-xr-x 7 www-data www-data    4096 Sep 28  2012 dir2
        -rw------- 1 root     root          19 Apr  6  2012 file2
        -rw------- 1 root     root     3548528 Sep 28  2012 file3
        drwxr-x--- 6 www-data www-data    4096 Aug 22 00:11 dir3
        drwxr-x--- 5 www-data www-data    4096 Jul 15  2012 dir4
        drwxr-x--- 2 www-data www-data  536576 Nov 24  2012 dir5
        drwxr-x--- 2 www-data www-data    4096 Nov  5 00:00 dir6
        drwxr-x--- 2 www-data www-data    4096 Nov  4 13:24 dir7

    What I have tried: I created a new group secureftp, then a new sftp user joined to the secureftp and www-data groups, with a nologin shell and home directory /. I edited sshd_config with:

        Subsystem sftp internal-sftp
        AllowTcpForwarding no
        Match Group secureftp
            ChrootDirectory /var/www
            ForceCommand internal-sftp

    I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but the permissions in /var/www are read/read+x for the group bit, so... it doesn't work. I've also tried ACLs, but as soon as I apply RWX ACL permissions for the sftp user to /var/www (dirs and files, recursively), the Unix permissions change as well, which is what I don't want. What can I do here? I was thinking I could enable the user www-data to log in via sftp, so that it would be able to modify the files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move security-wise.
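
    One commonly suggested approach that leaves the permissions in /var/www untouched is a bindfs view: mount a second copy of the tree in which the sftp user appears to own everything, and chroot the sftp user into that view instead. A sketch - the mount point and user name are placeholders, and the exact bindfs options should be verified against its man page:

        apt-get install bindfs
        mkdir -p /srv/sftp/www
        bindfs --force-user=sftpuser --force-group=secureftp \
               --create-for-user=www-data /var/www /srv/sftp/www

    --create-for-user makes files written through the view come out owned by www-data on the real filesystem, which keeps Apache happy; ChrootDirectory then points at /srv/sftp (sshd requires the chroot directory itself to be root-owned and not group-writable, which /srv/sftp can be while the www subdirectory carries the remapped permissions).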

    Read the article

  • Gnome-panel disappearance in Ubuntu 10.10

    - by jurchiks
    Just today, after about a week of somewhat normal running (I'm a total beginner in Linux, and the level of amazingly stupid problems I've encountered has made me go nuts), my panel disappeared (the one with the Applications/System menus - you'd call it the taskbar in Windows). Also, Alt+F2 doesn't work, and Ctrl+Alt+Backspace has no effect (I'd think it's supposed to do something). I tried the solution posted here: Panel doesn't show at startup at Ubuntu 10.04. No luck; it changed absolutely nothing. I also couldn't find the .gconf and .gconfd folders using search, so I couldn't try that option. There were folders with the same names but without the dot, but there were several, so I didn't risk it. What could possibly be the reason for this? All I did yesterday was try to install some updates (another extremely dumb problem - it won't install even the official updates, claiming "insecure sources" or something like that; I tried fixing it with some tutorials from the net, but in the end it worked for only half a day and went back to refusal mode :@) and a very few tools from the Ubuntu Software Center, but nothing that would change system settings just by being installed.
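
    Two notes that may help, offered as guesses: .gconf and .gconfd are hidden folders directly under your home directory (names starting with a dot don't show up in searches unless hidden files are enabled, e.g. with Ctrl+H in the file manager). And a commonly cited way to reset a GNOME 2 panel to its defaults - which is roughly what the linked solution boils down to - is to clear its gconf settings from a terminal (Ctrl+Alt+F1 gives you a console if nothing else works) and restart it:

        gconftool-2 --recursive-unset /apps/panel
        pkill gnome-panel   # the panel should respawn with default settings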

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-operated mouse freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes; the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system while Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool - not that I cared what it was, just a menu-available application. It started up like Firefox: I got a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

    Read the article

  • How can visiting a webpage infect your computer?

    - by Cybis
    My mother's computer recently became infected with some sort of rootkit. It began when she received an email from a close friend asking her to check out some sort of webpage. I never saw it, but my mother said it was just a blog of some sort, nothing interesting. A few days later, my mother signed in on the PayPal homepage. PayPal gave some sort of security notice which stated that, to prevent fraud, they needed some additional personal information. Among some of the more normal information (name, address, etc.), they asked for her SSN and bank PIN! She refused to submit that information and complained to PayPal that they shouldn't ask for it. PayPal said they would never ask for such information and that it wasn't their webpage. There was no such "security notice" when she logged in from a different computer - only from hers. It wasn't a phishing attempt or a redirection of some sort; IE clearly showed an SSL connection to https://www.paypal.com/. She remembered that strange email and asked her friend about it - the friend never sent it! Obviously, something on her computer was intercepting the PayPal homepage, and that email was the only other strange thing to happen recently. She entrusted me to fix everything. I nuked the computer from orbit, since it was the only way to be sure (i.e., I reformatted her hard drive and did a clean install). That seems to have worked fine. But it got me wondering... my mother didn't download and run anything. There were no weird ActiveX controls running (she's not computer illiterate and knows not to install them), and she only uses webmail (i.e., no Outlook vulnerability). When I think of webpages, I think of content presentation - JavaScript, HTML, and maybe some Flash. How could that possibly install and execute arbitrary software on your computer? It seems kind of weird/stupid that such vulnerabilities exist.

    Read the article

  • Is Unix a PC Operating system?

    - by Corelgott
    I have kind of a stupid question. I am doing my bachelor's at a university. In a written assignment a prof posted the task: "Name 3 PC operating systems:" Well, I went on and included a variety of OSes (Linux, Windows, OS X), including Unix & Solaris. Today I received a mail from my prof saying: "Unix is not a PC operating system. Many Unix variants are not PC-hardware-compatible (like AIX & HP-UX. About Solaris: there was one PC-compatible version...)" I am kind of surprised: even if many Unix variants run on PowerPC and a different bit order, those machines don't stop being PCs right now, do they? The question was given in a written assignment! It was not a question that came up during a lecture! Since the original posted task was in German, I'll include it just to make sure nobody suspects an error in the translation: "Nennen Sie 3 PC-Betriebssysteme:" Response / Antwort: "Unix ist kein PC-Betriebssystem, viele Unix-Varianten sind nicht auf PC-Hardware lauffähig (AIX, HP-UX). Von Solaris gab es mal eine PC-Variante." Anybody got something on that? Thx & Cheers Corelgott

    Read the article

  • Permanently deleting files on Mac OS

    - by Jonik
    A while back, as a relatively new Mac OS X user, I was surprised to learn that you cannot easily delete files - directly, that is, without moving them to the trash first. On Windows and Linux this can obviously be done with ease, but not so on the Mac. I noticed this when trying to clear files off a USB memory stick - removing the files ("move to trash") does not free up space; that happens only after emptying the whole system-wide Trash. Not particularly convenient! (It seems stupid to have to empty the whole trashcan just to make some space on the USB stick. There might be gigabytes of stuff in there, and this sort of defeats its purpose - what if you actually needed to restore something from the trash some day?) So, what's your way of getting around this? Have you bought a 3rd-party application like RAW Trash for $16.95 just to delete files, or do you diligently empty the trashcan whenever needed? Or did I miss something? Also, can you convince me that this is actually the way it should be - that users shouldn't be able to fiddle with the filesystem easily? :)
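
    For what it's worth, the Terminal route deletes immediately with no Trash involved (this is just what rm does), and the same is a one-liner from any scripting language. A Python sketch with a placeholder path:

        import os

        # Removes the file immediately - no Trash, no undo - and frees the
        # space on the stick right away.
        os.remove("/Volumes/USBSTICK/big-file.iso")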

    Read the article

  • Acer Aspire One -- strange battery problem, charges only up to ~90%

    - by houbysoft
    I have this strange problem on the Acer Aspire One D250. It happened once before already, stayed for about two weeks, and then "fixed itself". The problem is as follows: the battery can't seem to get fully charged; i.e., the indicator is stuck at about 90% (it's probably not a software problem - I have Arch Linux and Windows 7 installed and both report exactly the same) and never passes that value, but the status still shows as "charging". I tried everything I could think of - leaving it charging for extremely long amounts of time, doing a few complete charge-recharge cycles, removing/reinserting the battery, cleaning the connectors, even updating the BIOS - and nothing helped. Also, when it is charging, it charges pretty fast until about 70% and then progresses extremely slowly. The battery holds whatever charge appears on the indicator normally; I just can't get it past the 90%. At first I thought this was a simple battery failure (even though the computer is not that old - about 6-7 months), but as I mentioned, it happened once before and then fixed itself. I tried contacting Acer about this, but the support was not helpful - completely stupid; it seemed like they used canned responses, the usual. Any thoughts on how to fix this?

    Read the article

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? Cannot understand the reason behind such a stupid arbitrary limit, that doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small optimized light-weight non resource-hungry (you get the idea!) program whose sole purpose will be to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after file selection is complete to do this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)
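
    Until someone writes that program, a stopgap that produces the same number: a tiny script that sums the sizes of whatever paths it is handed - it can be wired into the right-click Send To menu so a selection of any size gets totalled without the Details pane. A sketch:

        import os
        import sys

        # Print the combined size of all files passed on the command line.
        files = [p for p in sys.argv[1:] if os.path.isfile(p)]
        total = sum(os.path.getsize(p) for p in files)
        print("%d files, %.2f MB total" % (len(files), total / (1024.0 * 1024.0)))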

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi, which runs as the www-data user. So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what's the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permission (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I tried around a little bit (i.e. adding www-data and/or git to the www-data/git groups), but didn't get it right - at least one of the two (git or Trac) stopped working. Any suggestions are highly appreciated. Regards, klemens
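
    A sketch of the usual direction, hedged because gitolite's own settings should do the heavy lifting: put Trac's user in the git group, tell gitolite to create group-readable repositories (in gitolite v2 the rc file has a umask setting for exactly this), and leave writing to the git user alone - Trac only ever reads:

        usermod -a -G git www-data            # Trac's user may read the git group's files
        # in /home/git/.gitolite.rc, relax the default umask so new repos
        # come out group-readable (v2 syntax; verify against your version):
        #   $REPO_UMASK = 0027;
        chmod -R g+rX /home/git/repositories  # fix repos created before the change

    Going the other way - adding git to www-data's group, or giving Trac write access - would indeed be the security mistake you suspect; there's no need for it.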

    Read the article

  • Subdocument in Word won't save

    - by ChrisW
    Because I know Word has a history of not liking very large documents (my supervisor specifically told me not to use LaTeX... grr), I decided to learn the master document / subdocument feature of Word when writing my PhD thesis. I have the title page, table of contents, etc. in the master document, and each chapter as a separate document. However, when I save the master document, it saves all the chapter documents apart from one (Chapter 4), for which it brings up the Save Document dialog box, helpfully with "Chapter4.docx" in the "Save as" box (n.b. Chapter4.docx is not open). Clicking Save does nothing and doesn't make the dialog box go away. Saving under a different name means my changes aren't reflected in the same document. There must be some reason Word doesn't like this particular document, but I've got no idea why - there's nothing special in it that isn't in any of the other chapters. I have tried closing all documents, renaming Chapter4.docx, opening the master document, expanding all documents, OKing the warning that Chapter4.docx does not exist, and inserting the 'new' document, but even when I save the master document it still won't save the new Chapter4 document. If anyone knows any reason why Word is acting like this (or if I'm doing anything stupid), I'll be eternally grateful. (P.S. Sorry for the long rambling message. It's late; I've been working on my PhD for 4.5 years; I really, really want to throw this computer out the window; and I hope people are kind enough not to downvote this question because of its rambling nature!) Update: With Word closed, I've tried to delete Chapter4.docx (having made a backup!), but I get a warning that it can't be deleted because it's open in Microsoft Word... These files are on a network drive, and the same problems happen on two different computers. I could log in to the filestore through ssh and force the file to be deleted, but I'm curious to know why this is happening!

    Read the article

  • reset locale in debian under Squeeze

    - by si2w
    I have problems with locales in Debian. I've tried many things, but nothing works for me:

        locale -a
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        C
        POSIX
        en_US.utf8

    I tried to set en_US.utf8, without success, with dpkg-reconfigure locales -plow:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = (unset)
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
        Generating locales (this might take a while)...
          en_US.UTF-8... done
        Generation complete.

    (The same perl warning is printed twice more after generation.) After a reboot, I try to run a Perl script and get:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

    Here is my /etc/default/locale config file (cat /etc/default/locale):

        LANG=en_US.UTF-8
        LANGUAGE=en_US

    Any idea how to solve this (stupid) problem? Thanks
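
    The giveaway in those warnings is LC_CTYPE = "UTF-8": plain "UTF-8" is not a valid locale name on Debian, and that value is commonly exported by OS X's Terminal and forwarded over ssh (Debian's default sshd_config contains "AcceptEnv LANG LC_*"). Two hedged fixes, assuming that is the cause - override it system-wide, or stop accepting it:

        # /etc/default/locale - add LC_ALL, which takes precedence over the
        # forwarded LC_CTYPE
        LANG=en_US.UTF-8
        LC_ALL=en_US.UTF-8

        # or comment out this line in /etc/ssh/sshd_config and restart sshd,
        # so clients can no longer push their locale variables into the session:
        #   AcceptEnv LANG LC_*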

    Read the article

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have PostgreSQL 9.1 installed on my machine (Ubuntu). I need another PostgreSQL server running next to the old one; the exact version does not matter, but I'm thinking of using 9.2. How can I properly install and run another PostgreSQL version without breaking the old one (the way an upgrade would), so that the versions run independently on different ports - the old one on 5432 and the new one on 5433, for example? The reason I need this is the databases of two OpenERP versions. If I run two OpenERP servers (of different versions) against a single PostgreSQL port, the new OpenERP version detects the old version's database, tries to run it, and crashes because it uses different schemas. P.S. Or maybe I could just run the same PostgreSQL server on two ports?

    Update: So far I tried this:

        /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2

    It created a new cluster. I changed the port to 5433 in the new cluster's postgresql.conf file, then ran:

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start

    I got the response "server starting". But when I tried to enter the new cluster's template database with psql template1 -p 5433, I got this error:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"?

    Also, when I then try to stop the server with /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile stop, I get this error:

        pg_ctl: PID file "main2/postmaster.pid" does not exist
        Is server running?

    So I don't understand whether the server is running and what I'm missing here.

    Update: Found what was wrong. Stupid me - I didn't notice that when I changed the port in the .conf file, that line was already commented out. So the first time I actually hadn't changed anything, though I thought I had, and it used the default 5432 port.
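
    For anyone hitting the same wall, the whole fix is one line in the second cluster's postgresql.conf - the port setting ships commented out, so the leading '#' has to go as well as the value changing:

        # main2/postgresql.conf
        port = 5433

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 restart
        psql -p 5433 template1

    As for the P.S.: a single postmaster listens on one port, so two clusters (or two versions) on 5432 and 5433 is the standard arrangement - it's exactly what Debian/Ubuntu's pg_createcluster tooling automates.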

    Read the article

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (which is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with. It's being served up by nginx. My question is, what is the appropriate expires directive or Cache-Control header that I should set to make sure that visitors get the most up-to-date version of the site when they visit without having to manually refresh? Since the site is just .html files it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight up expires off but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically, on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache. Update (still not solved though): I found open_file_cache, tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client? TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
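
    For the record, a sketch of what usually works for a "baked" static site: send an already-expired Expires/Cache-Control so browsers always revalidate, and let nginx's built-in conditional-request handling answer with 304 Not Modified whenever the file on disk hasn't changed (paths are placeholders):

        server {
            listen 80;
            root /srv/octopress/public;

            location / {
                # "expires epoch" sends an Expires date in the past plus
                # "Cache-Control: no-cache": the browser may keep a copy but
                # must revalidate; unchanged files come back as cheap 304s
                # via nginx's default If-Modified-Since handling.
                expires epoch;
            }
        }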

    Read the article
