Search Results

Search found 26434 results on 1058 pages for 'folder options'.


  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys. I'm trying to install the latest Bugzilla version on my Ubuntu server (I want to use a subdomain like bugs.domain.com). I already installed all the necessary Perl modules and check_modules.pl doesn't show any errors. But when I run the testserver.pl script I get the following:

        TEST-OK Webserver is running under group id in $webservergroup
        TEST-FAILED Fetch of images/padlock.png failed

    I'm also not able to visit ANY file within the Bugzilla folder from the browser; I always get a 404 error. The Bugzilla folder and all the files it contains are owned by apache. I tried adding the Apache configuration from the installation guide to the domain's http.include file and to the subdomain's vhosts.conf file as well. I don't know what to do... Playing with Plesk's suexecgroup doesn't bring any solution. I hope you can help me! Thanks in advance!
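
    A quick shell-level sanity check can help separate an ownership problem from a vhost-configuration problem before touching suexecgroup. This is only a sketch; the paths and domain below are assumptions and need adjusting to the actual Plesk vhost layout:

        # confirm ownership and permissions on the Bugzilla tree
        ls -ld /var/www/vhosts/domain.com/subdomains/bugs/httpdocs/bugzilla
        # make sure Apache actually accepted the include files
        sudo apache2ctl configtest
        # see which vhost answers for the subdomain and what status it returns
        curl -I http://bugs.domain.com/index.cgi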

    Read the article

  • What resources are there for creating a dedicated NES emulator box?

    - by normalocity
    Where do I start, and what communities should I get involved in, in order to achieve the following? Ideally, I'd like to have a box that does the following (it doesn't have to do this out of the box; I'm just looking to be able to achieve these goals through configs and the necessary dependencies):

    - Either bypasses login, or logs in automatically.
    - Auto-starts FCEUX with options that will (a) automatically start a ROM of my choosing, and (b) go into full-screen mode. You can assume that before I get that far, I've already configured the input devices and video options.
    - Lets me create (or install, if it exists) a full-screen app that takes a list of ROMs, allows me to select one with a gamepad/arcade stick, and press a button to open that game.
    - Lets me map a button on a gamepad/arcade stick to the "Power off" or exit function of the emulator, so that it takes me back to the ROM selection screen.

    I've already successfully installed FCEUX and tested it with an arcade stick I own, so I'm not looking for an emulator installation guide. I don't know if the ROM selector app exists already, but I'm a Java developer, and could probably create one (so long as it's not too difficult to support controllers - I was thinking of using Slick2D for this, a gaming library that I'm already pretty familiar with). The goal would be a dedicated box that I have connected to my TV. I power it on. It boots up and starts the ROM selection app, which passes the proper parameters to FCEUX (or another emulator that I might switch to at a later time), and I'm ready to go. Basically an NES emulator as a real, living-room console. Also, as far as mapping a controller button to functions in the app goes, I've also played around with hardware, and it would be pretty trivial for me to modify a gamepad to trigger key presses. I just don't want to go to that length if it's not necessary.
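
    For the "boot straight into a game" piece, the usual pattern is a tiny wrapper script that the desktop session (or ~/.xinitrc) runs at login. The sketch below only illustrates the idea: the paths, the ROM-selector app, and the exact FCEUX flag names are assumptions (flag names vary between the SDL and GTK builds, so check fceux --help on the installed build):

        #!/bin/sh
        # Session autostart sketch: launch FCEUX fullscreen on a chosen ROM, then hand
        # over to a (hypothetical) ROM-selector app when the emulator exits.
        # --fullscreen is the SDL build's option name and may differ in other builds.
        ROM_DIR="$HOME/roms"
        fceux --fullscreen 1 "$ROM_DIR/default.nes"
        exec rom-selector "$ROM_DIR"    # placeholder for the Java/Slick2D selector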

    Read the article

  • Having trouble getting startup scripts to work in Server 2003

    - by Az
    Thanks for taking the time to read this. I am having trouble getting my startup scripts to run correctly on the domain I am administering. Before anyone gets upset and says "go read xxx article from Microsoft"... I have. I am simply missing something or not understanding it properly. I understand how to assign the script; what I am curious about is where exactly it should be placed in the Windows folder structure. I have been able to get the scripts to work by creating a shared folder called "scripts" and pointing to that exact UNC path, i.e. \\servername\scripts\xxx.bat. However, I would like to do it properly. Would someone please tell me where the scripts should be placed in Windows Server 2003, and what the path should be when assigning a group policy that applies to computer-specific properties? Your assistance is very much appreciated by a junior admin trying to learn some new tricks!
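
    For reference, the conventional home for a computer startup script assigned through Group Policy is inside the policy object's own folder under SYSVOL, so it replicates to every domain controller. A sketch of what that looks like - the domain name and the GPO GUID here are placeholders, not real values:

        :: copy the script into the GPO's Machine startup folder on SYSVOL;
        :: the GPO editor can then reference it without a hard-coded server name
        copy myscript.bat "\\yourdomain.local\SysVol\yourdomain.local\Policies\{GUID-of-the-GPO}\Machine\Scripts\Startup\"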

    Read the article

  • App installed in ~/usr launches from terminal but not Applications menu (or why does setting ld_library_path in .profile not work as it should)

    - by levesque
    I have built and installed an application under a directory of my choosing, let's say under /home/jim/usr, so files have been put in three or four folders, all under this $HOME/usr folder (e.g., bin, include, lib, share). I can launch this application from the command line just fine, as I added the proper paths to my environment variables PATH and LD_LIBRARY_PATH in ~/.bashrc. I added the same paths to the ~/.profile file, which, if I'm not mistaken, is supposed to be parsed by Ubuntu. Doesn't work. Nothing. Where can I go from there?

    EDIT: I logged out/in and restarted my computer. Neither changed a thing. The problem seems to come from the fact that, no matter what I do, the LD_LIBRARY_PATH environment variable is not properly passed to Ubuntu. Using log files, I found that the application I'm trying to run in this example doesn't find one of its dependencies located in ~/usr/lib. One solution would be to add the /home/jim/usr/lib folder to a file under /etc/ld.so.conf.d/, but I don't have admin rights on this machine. Making a wrapper script like this one works:

        #!/bin/bash
        export LD_LIBRARY_PATH=$HLOC/usr/lib
        application &> $HOME/application_messages.log

    but that would force me to wrap all my home-compiled applications with this script. Any ideas? Why does Ubuntu/GNOME remove the LD_LIBRARY_PATH environment variable from my set variables? Is it because trying to do this is bad practice?

    UPDATE (and solution): As found by Christopher, there is a bug report about this on Launchpad: LD_LIBRARY_PATH is unset after parsing of the ~/.profile file. See the bug report. It seems the only solution for now is to use a wrapper script.
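
    If wrapping really is unavoidable, one generic wrapper can serve every home-compiled program instead of one script per application. This is only a sketch built on the workaround above; the prefix path and the symlink layout are assumptions:

        #!/bin/bash
        # Save as ~/usr/wrappers/run-wrapped and symlink each application name to it,
        # with ~/usr/wrappers ahead of ~/usr/bin in PATH.
        # Prepends the private lib dir, then exec's the real binary of the same name.
        PREFIX="$HOME/usr"
        export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
        exec "$PREFIX/bin/$(basename "$0")" "$@"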

    Read the article

  • Limit HTTP VERBS on Apache2

    - by user72295
    I am trying to limit the use of certain HTTP verbs on my site. I entered the following into my VirtualHost config file, within the Directory element:

        <Limit GET POST HEAD>
            Allow from all
        </Limit>
        <Limit PUT DELETE OPTIONS>
            Deny from all
        </Limit>

    This seemed to work, but with unexpected results. I ran the following telnet/HTTP commands before and after this change:

        open server 80
        OPTIONS server/abs_path HTTP/1.1
        User-Agent: Telnet/1.0
        Host: server

    Before the change I received a successful response with the Allow header. After the change, however, I was expecting a 405 'Method Not Allowed' response, but instead received a 403 'Forbidden' response. What do I need to change in Apache to return the 405 HTTP response? Many thanks.
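
    As a quicker way to repeat the before/after check than a raw telnet session, the same requests can be issued with curl and the status lines compared directly (hostname and path below are placeholders from the question):

        # -i prints the status line and headers; -X sets the verb under test
        curl -i -X OPTIONS http://server/abs_path
        curl -i -X PUT http://server/abs_path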

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regard to encrypting web configuration sections in .NET 4.0. This information is also applicable to .NET 3.5 and 2.0. Two methods work perfectly well for encrypting connection strings in a web project's configuration file:

    1. Do it all yourself! In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program explicitly retrieves the appropriate content from the configuration file and then decrypts it. The disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .NET 4.0 Framework (the way to go!) Fortunately, all the tools needed for protecting configuration files are built into the .NET 2.0/3.5/4.0 versions with very little setup needed. To encrypt connection strings, one can use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder on 64-bit systems. The command we need to encrypt our web.config connection strings is simply the following:

        aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

        Option   Description
        -pe      Section name to encrypt
        -pd      Section name to decrypt
        -app     Web application name
        -prov    Encryption/decryption provider

    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps!

    Some great references concerning the topic:

    http://msdn.microsoft.com/en-us/library/ff650037.aspx
    http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
    http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
    http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
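
    When the site isn't registered as an IIS application (for example on a build machine, or when working against a plain folder), the same tool accepts a physical path via -pef/-pdf instead of -pe/-pd. A small sketch of that variant; the folder path and framework version directory are assumptions:

        :: -pef / -pdf work on a directory instead of an IIS application name
        cd /d %windir%\Microsoft.NET\Framework\v4.0.30319
        aspnet_regiis -pef "connectionStrings" "C:\inetpub\wwwroot\SampleApplication"
        aspnet_regiis -pdf "connectionStrings" "C:\inetpub\wwwroot\SampleApplication"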

    Read the article

  • Routing a url to fetch content from another site

    - by Abhishek
    Environment: IIS 7. I have a default site, www.domain.com, whose folder is C:\Inetpub\wwwroot\domain. There is a subdomain, www.subdomain.domain.com, whose folder is C:\Inetpub\wwwroot\domain\subdomain. Now I have set up a new website on an external server. I cannot put its content on the server above for various reasons. I need the URL www.subdomain.domain.com/blog to fetch content from this external server while the URL remains the same. How can this be achieved in IIS 7?

    Read the article

  • SEO + international sites? country.domain.com or domain.country?

    - by Pure.Krome
    Hi folks, is it better to have separate country-specific domains (which cost more money) or subdomains that identify the country, for better SEO? e.g. stackoverflow.com, stackoverflow.com.au, stackoverflow.co.uk vs stackoverflow.com, au.stackoverflow.com, uk.stackoverflow.com. Assumption: in the search engines' webmaster tools, each subdomain is associated with a country, e.g. au.stackoverflow.com is associated with Australia. Cheers!

    Update: I understand that both methods work, especially when I use the assumption listed above. The question is: which method is better? Is there only a small SEO difference between them, or is the first method way, way better than the second at getting good SEO results?

    Update #2: A number of folks have suggested that the following is a good/better approach: stackoverflow.com/, stackoverflow.com/au, stackoverflow.com/uk. By adding a country-specific ISO code as the first folder of the URL, that folder can be recognised as the country. But a number of SEO mates have suggested that this is a waste of valuable folder-level space. Er... how can I explain? OK, it's been suggested by some SEO experts that if the number of levels or folders in the URL exceeds 5, the page drops dramatically in importance. Basically, you don't want to make it deep. As such, adding the country as the first level can be considered a waste, especially when it can be handled by the domain OR subdomain - hence the question :) So, any more thoughts on this? (Maybe SO is the wrong place to ask this question?)

    Read the article

  • How can I make gitosis distinguish between two users with the same username

    - by bryan kennedy
    I have a gitosis system that seems to be working correctly, except for a common problem we run into where I can't distinguish permissions between two users who have the same username but different hosts. For example: jsmith@computer's SSH key is in the key folder, and so is jsmith@machine's. These two jsmiths are two different people on two different computers. However, when I configure them in the gitosis.conf file with the usernames jsmith@computer or jsmith@machine, it seems like each user just gets the same permissions. Can gitosis not distinguish the full username (name and host)? If not, how do I deal with multiple users accessing our system with common usernames? Thanks for any help.
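
    For what it's worth, gitosis keys members to the filename of the public key in keydir/, not to the comment inside the key, so the usual way to keep same-named users apart is to give each person's key file a distinct name and use exactly that name as the member in gitosis.conf. A sketch of the idea - the filenames are just a convention, and the source key paths are placeholders:

        # inside a checkout of the gitosis-admin repository
        cp /tmp/jsmith_laptop.pub  keydir/jsmith@computer.pub
        cp /tmp/jsmith_desktop.pub keydir/jsmith@machine.pub
        git add keydir/jsmith@computer.pub keydir/jsmith@machine.pub
        git commit -m "Track the two jsmith keys separately"
        git push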

    Read the article

  • Multisession burn in Imgburn

    - by blntechie
    Is multisession burning available in ImgBurn? If not, any idea whether it will be implemented in the future? I almost recommended ImgBurn instead of Nero or Roxio to one of my friends. He requires multisession burning and I found no setting to enable it in Options, if it is available at all. Note: please don't question the question with things like "Why would you want multisession anyway?" or "Isn't a USB stick or RW disc what you need instead of a read-only CD/DVD?" Please keep the answers in context. I know that I can use USB sticks instead of CDs/DVDs, and my friend requires multisession anyway. Maybe I can ask him to keep Nero as a backup for this purpose if ImgBurn doesn't support it.

    Read the article

  • mod_rewrite rules to run fcgi for different subdomains

    - by Anthony Hiscox
    On my shared hosting server (Hostmonster) I have Django (actually Pinax) set up so that a .htaccess mod_rewrite rule rewrites the request to a pinax.fcgi file:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ pinax.fcgi/$1 [QSA,L]

    What I would like to do is have a different pinax.fcgi file get called depending on the domain (or subdomain) used, something like this:

        RewriteCond %{HTTP_HOST} ^subdomain\.domain\.com$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ pinax2.fcgi/$1 [QSA,L]

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ pinax.fcgi/$1 [QSA,L]

    This is stored in a .htaccess file in my root public_html folder (not in the public_html/subdomain/ folder), but unfortunately it just results in internal redirect errors. How can I write these rules so that they use a different fcgi file for different domains?

    Read the article

  • Windows 7+ desktop apps - what's the best UI toolkit for a new project?

    - by Chris Adams
    I'm trying to make a decision for a new Windows desktop app: what to use for the UI. (This is a desktop app that needs compatibility with Windows 7. It won't be distributed through the Windows Store.) This application is going to be cross-platform. I intend to write the core in C++ and use each platform's native UI toolkit. I feel this is preferable to using a cross-platform toolkit like Qt, as it allows me to keep the native look and feel of each platform. On the Windows side, the UI situation isn't exactly clear. I get the feeling that Microsoft is slowly abandoning .NET, particularly as its preferred toolkit for desktop apps. Indeed, the Getting Started chapter for Windows 7, as well as the rest of Microsoft's documentation, seems to be more suited to C++. I have a few options here:

    - C# with WPF - This seems like it might be the best Microsoft has to offer for Windows 7 desktop apps, even if it isn't their "preferred" toolkit. I'd need to use P/Invoke to call my C++ code.
    - C++ with Direct2D - This is what Microsoft used in one of their examples. It feels too low-level. Part of the appeal of a higher-level UI toolkit is consistency with the native look and feel of the platform, so doing this would just feel strange.
    - C++ with a third-party UI toolkit, like Qt.

    There might be some other options I'm missing, which I'd love to hear about. So, if you were starting a new Windows 7+ desktop app today, what would you use?

    Read the article

  • Why are my Photoshop modify selection tools graying out?

    - by Myersguy
    So, I looked into this a little before asking. Basically, I always select the entire canvas and modify my selection for multiple Photoshop effects. It has never given me problems until just recently. Now, if I select the entire canvas, Photoshop grays out the modify-selection options. People online claimed that you can't select the entire canvas, but that is how I've been doing it for years. So, does anyone know how to stop the selections from graying out? I don't want to have to expand my canvas each time I want to create borders, etc. I need a workaround, a method, anything that will stop these options from graying out the way they currently do. Thanks.

    Read the article

  • Ubuntu 12.10 ATI Driver 12.11 fails after compiz and xorg update

    - by Lukasz W.
    I updated my system via the package manager from Unity, and the next restart was just blackness. After following this guide: http://linux.hootip.com/amd-catalyst-12-11-beta-fix-and-installation-the-drivers-on-ubuntu-12-11/ I had the Catalyst 12.11 Beta driver installed. I checked my /var/log/apt/history.log and the update I received was of compiz and xorg packages. I tried to get the latest release info, but all I get from their pages is commit info; I can't tell what was in the package update I got served. Does anyone know what was in the latest xorg/compiz release that broke the driver? Which driver should I use now? For completeness, this is how I got the system to boot again (probably lame and not elegant):

    - Boot with the GRUB selection "More Ubuntu options" (or something like that).
    - From the secondary screen, select 3.5.0-19 with boot options.
    - When the system prompts for what you'd like to do, select "root" - drop to root shell.
    - There:

        # mount -o remount,rw /
        # mv /etc/X11/xorg.conf /etc/X11/xorg.conf.failed
        # /usr/share/ati/fglrx-uninstall.sh
        # reboot

    This got me back on my feet.

    Read the article

  • Patch msp into msi package

    - by Kvad
    The latest update of Windows Live Messenger is an MSP added to the package. I want to patch an MSP into an MSI. Reference download: http://wl.dlservice.microsoft.com/download/8/3/D/83D75746-DF04-45E9-8374-BD31B9419128/en/wlsetup-all.exe I extract all the MSIs and MSPs from this. To get the MSPs and MSIs I did the following:

    - Use Resource Hacker to open up wlsetup-all.exe.
    - In the left-hand tree, browse to PACKAGE.
    - Right-click PACKAGE and save the PACKAGE resources to a new temp folder, e.g. D:\temp\package.rc. This outputs a whole lot of .bin files.
    - These are just CAB files, so do a mass rename: ren *.bin *.cab
    - Once done, select all the CAB files and extract them to a new subfolder \extracted. In \extracted you will see all the MSI, MSP and 7z files you need.

    I try to apply the MSP directly, with no result:

        msiexec /p messenger.msp /a messenger.msi

    I also tried doing an admin install, with nothing being extracted.
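
    For reference, the sequence that usually works for slipstreaming an MSP is to lay down an administrative image first and then apply the patch to that image rather than to the local machine. A sketch with placeholder paths, not verified against this particular Messenger package:

        :: 1) extract an administrative image of the base MSI
        msiexec /a messenger.msi TARGETDIR=D:\temp\admin /qb
        :: 2) apply the patch to the admin image's MSI
        msiexec /p messenger.msp /a D:\temp\admin\messenger.msi /qb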

    Read the article

  • Temp profile used when User logs in

    - by TJ
    One of my users logged into his computer (Windows XP) last night, only to be met with an error message that his profile could not be loaded and that he would be logged in with a temporary profile. Typically when this happens I shut the machine down and restart, and the correct profile loads when the user logs in again. Not this time. In the user profile options, under Computer Properties - Advanced - User Profiles, it shows three profiles with his name. Two are the exact same size with the same modified date (5/5/10), and the other is what I would expect size-wise for a new profile, with a modified date of today. What are my options to restore his profile?
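
    The duplicate profiles normally correspond to separate entries under the ProfileList registry key, one per SID, each with a ProfileImagePath pointing at its folder. Listing that key (run as an administrator on the XP machine) is a quick way to see which entry maps to which folder before deciding what to keep - a diagnostic sketch only:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s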

    Read the article

  • User account shows two Downloads folders

    - by Chris Lieb
    I have my user account on my D drive and junction'd to the C:\users folder. I accidentally moved my profile Downloads folder (C:\users\me\Downloads) and then moved it back to its path on the D drive (C:\me\Downloads). After doing this, the directory tree for my user profile lists two Downloads directories, one located at C:\users\me and one at D:\me. I tried deleting the directory from the D drive, then restoring it from the Recycle Bin to the proper location on the C drive (actually the D drive, accessed through the junction), but it gave me the two Downloads directories again. Is there some way to fix this so that the only listing is for the C:\users\me\Downloads directory, like it was to begin with?
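
    A duplicate listing like this usually means the shell's recorded Downloads location and the on-disk folder now disagree. Before moving anything again, it can help to check where the shell currently thinks Downloads lives - a diagnostic sketch; the Downloads entry is stored as a GUID-named value (typically beginning {374DE290-...}), though the exact name may differ:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"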

    Read the article

  • Suggest Wireless AP

    - by sunny
    I'm doing a data and voice install for a client in the hotel industry. I'm done with voice and am now looking at my options for providing wireless APs. The building's dimensions are 100 ft x 50 ft. There are a ton of options out there, which has left me confused. Please help me decide. I am not clear on how I should ensure that the wireless network is visible throughout the premises. Personally, I would love to set up a WDS on 3-4 Linksys WRT54GL routers using OpenWrt. Is this advisable? If not, please recommend some other APs. If a more expensive appliance is absolutely necessary, then please suggest something that can be powered using IEEE 802.3af PoE. Thanks.

    Read the article

  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus, and I finally managed to get Bumblebee working fully on Ubuntu 12.04 so that I can use both the hardware acceleration of the Intel chipset and the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should - I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:

        :~$ /usr/lib/nux/unity_support_test -p
        Xlib:  extension "NV-GLX" missing on display ":0".
        OpenGL vendor string:   Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
        OpenGL version string:  1.4 (2.1 Mesa 8.0.2)

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  no
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log shows any particular errors. As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an Nvidia-specific folder, and specified that folder in the bumblebee.config. That seems to work and shouldn't cause the problem I see here. For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:

        ~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
        OpenGL version string:  4.2.0 NVIDIA 304.22

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  no
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    Read the article

  • Owner of uploads directory is `www-data` but this prevents FTP access via PHP scripts

    - by letseatfood
    To give Apache write access, I needed to run chown www-data:www-data /var/www/mysite/uploads on my site's uploads folder. This allows me to delete files from the folder via unlink() in a PHP script. Unfortunately, it prevents another PHP script, which uses FTP functions, from working. I think it is because the FTP user is mike, and now that the uploads directory is owned by www-data, mike cannot access it. I added mike to the group www-data, but this does not fix the issue. Can somebody advise me on how to allow the PHP FTP functions to work in addition to file deletion using PHP's unlink() function?
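
    One arrangement that often satisfies both scripts is to keep mike as the owner, hand the group to www-data, and make the directory group-writable with the setgid bit so new uploads inherit the group. This is only a sketch using the names from the question; whether it is enough also depends on the umask the FTP server applies to newly uploaded files:

        sudo chown -R mike:www-data /var/www/mysite/uploads
        sudo chmod -R g+rwX /var/www/mysite/uploads   # group read/write; +x on directories only
        sudo chmod g+s /var/www/mysite/uploads        # new files keep the www-data group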

    Read the article

  • JPEG images not loading on PlayBook (Marmalade + iwgame)

    - by Vexille
    I'm using iwgame on a test project and I was trying to render different resolutions of JPG and PNG images. Everything works fine in the Marmalade simulator; however, once I deploy the game to our PlayBook and run it, only the PNG images are shown. I have declared the images in the MKB file and in an XML file that iwgame uses to load the images. I've checked the deployments folder and all the images are present in the intermediatefiles/native folder. We're currently using a BlackBerry-only license, so we can only test this on the PlayBook, but we do intend to get a Community license and deploy to iOS and Android devices eventually (I'm not sure if this is a problem exclusive to the PlayBook). I really don't know if this is a Marmalade or an iwgame issue. I have a different test project without iwgame and it simply won't run with JPG images (I get the error 'Could not find handler for extension "jpg"'). While searching for a solution, I've seen people talking about using libjpg, but I've also found that Marmalade supposedly has integrated native JPEG support (and because of that, iwgame has abandoned its JPEG loading support since v0.340), so I don't know what to think. I'm currently using the most recent versions of both Marmalade and iwgame, I believe: Marmalade 6.1.2 and iwgame 0.400. Also, please let me know if there's an easier or better way to do this, such as linking libjpg or something (I'm not exactly sure how to do this). I really would appreciate some help with this; there's a huge difference in size between the images we're planning to use, from a ~500 KB JPG file to a ~3.5 MB PNG file. Thanks, guys.

    Read the article

  • How can I change the color of the pane separator for my Ambiance theme modification?

    - by WarriorIng64
    I am currently messing around with Ambiance, trying to give Nautilus a dark sidebar (because I think it looks much better that way, especially with the current look having the dark-colored breadcrumbs clashing horribly with the light-colored sidebar). I have zero experience with and knowledge of how to create GTK+ themes, and I couldn't find any documentation online, so I just made a copy of the folder for Ambiance under /usr/share/themes, renamed it "Ambiance Dark Sidebar" and started messing with color values. As shown below, I found the value in nautilus.css that needed to be tweaked to create the dark sidebar, but there is still one part that stubbornly stays light gray. This is the pane separator, and I want to change it so it matches the rest better (marked in red). Does anyone know what I need to do to change the color of this part so it matches the rest of the sidebar better? I already know from seeing themes like Adwaita Dark that this should be possible, but even after poking around in that theme I didn't find anything that seemed to help. These are the files I modified in the theme folder Ambiance Dark Sidebar, stored alongside Ambiance in /usr/share/themes:

    - index.theme
    - gtk-3.0/apps/nautilus.css

    Read the article

  • how to connect with ssh when it's hanging

    - by Eduardo Bezerra
    I'm trying to ssh into a remote machine, but it hangs when looking for an identity file:

        [username@local .ssh]$ ssh -v remote uname
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
        debug1: Reading configuration data /home/username/.ssh/config
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to remote [192.168.3.36] port 22.
        debug1: Connection established.
        debug1: identity file /home/username/.ssh/identity type -1
        debug1: identity file /home/username/.ssh/id_rsa type -1
        debug1: identity file /home/username/.ssh/id_dsa type -1

    I can ping the machine normally, and it's obviously up with the sshd service running... I just don't know how to log into it. In fact, I'd just like to reboot it; that would be fine. Thing is, it's across the ocean (I'm in the US, and the machine is in Europe). I'd run some hundreds of Java threads at the same time, and apparently that was too much for the host. How can I get back in?
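
    In this situation it can help to narrow down whether the stall is in the local key handling or on the server side. A couple of diagnostic variations using standard OpenSSH options, purely as a sketch of where to look next:

        # maximum verbosity, to see the exact step where the session stops
        ssh -vvv remote uname
        # skip public-key auth entirely, in case offering the keys is what stalls
        ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no remote uname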

    Read the article

  • Ubuntu won't suspend anymore, but it did upon install.

    - by Bruce Connor
    I did a fresh install of Ubuntu 10.10 back when it came out, and my laptop was suspending fine. All of a sudden, I can't get my laptop to suspend anymore. It's an HP Pavilion dv2-1110, but I don't think it's a hardware issue; here's why:

    - It suspended fine upon first install. I haven't installed any new kernels since then, but I have installed tons of packages, so it's probably a package.
    - The suspend and hibernate options disappeared from the shutdown menu.
    - If I press my keyboard's suspend button (or if I close the lid), I get an error message.
    - If I try the command pmi action suspend, I get the error message: Error org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Hal was not provided by any .service files.
    - If I try the command echo -n mem > sudo /sys/power/state I get absolutely no output and no visible effect.

    What might be causing this behavior? I thought a list of installed packages might be useful, but it's huge and I don't know how to post it here in a collapsed/expandable form or something. EDIT: Just in case someone asks, none of the installed packages are kdm or anything like that (which would justify the lack of options in GNOME's shutdown menu).
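
    The DBus error points at HAL, which the pmi tool depends on. A quick way to check whether suspend itself still works, bypassing that layer entirely, is to use pm-utils or the kernel's sysfs interface directly - standard commands, but run with care since they suspend the machine immediately:

        # ask pm-utils (no HAL involved) to suspend
        sudo pm-suspend
        # or write to the kernel interface directly; the redirection must happen as root,
        # which is why the plain "echo mem > /sys/power/state" form needs tee
        echo mem | sudo tee /sys/power/state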

    Read the article
