Search Results

Search found 64551 results on 2583 pages for 'how i work'.


  • What would be the best way to get Apple to donate their JVM-work to OpenJDK?

    - by Thorbjørn Ravn Andersen
    It has been announced that Apple is deprecating its JVM. It is a really nice piece of work that gives an excellent user experience for Swing applications on OS X, and it would be a pity if it just went away. As I see it, the only realistic long-term alternative to Apple's own JVM is OpenJDK, unless Oracle chooses to take over the Apple JVM, which I doubt, as OS X is not a core platform for Oracle. But for this to work Apple needs to donate its enhancements to OpenJDK, and they need to be under the GPL. Apple did so already with WebKit, so there is precedent. What would be the best way to make them do so? Make a Stack Exchange poll? Get James Gosling and other high-profile Java people to say so? Email Steve Jobs? Suggestions? EDIT: Well, Apple has now promised to do so :) Shows that asking on Stack Exchange really MAKES A DIFFERENCE! Great!

    Read the article

  • Possible for one developer to work on a site that's on another developer's server?

    - by cire4
    Sorry for the confusing title. Let me explain: I am currently trying to get a site developed. My current developer has taken the site about as far as I think they are capable of and I am planning on hiring another developer to put the finishing touches on it, debug it and upgrade some of the more technical details. The site is hosted on my current developer's server. They are scheduled to work on it until mid-April, at which point they will transfer the site to my server. I would like the new developer to get started on the upgrades to the site as soon as possible. So my question is this: Is it possible for the new developer to start working on upgrades to the site while it is still on the old developer's server (and without the old developer knowing about it)? Would the new developer have to create a mirror site and work on it that way? I'm having trouble imagining if this is possible so any advice you can offer would be much appreciated!

    Read the article

  • 127.0.0.1:9051 doesn't work after Apache, MySQL, PHP installation?

    - by Rana Muhammad Waqas
    I have installed apache2, mysql, and php, and now Vidalia won't run on localhost. I tried changing the TCP connection (ControlPort) to another IP, 192.168.0.40, and tried changing the default port 9051 to a different one, but that doesn't work. I thought Apache might be interfering, so I used the command sudo service apache2 stop, but that still doesn't work. So now when I type 127.0.0.1:9051 in the browser it shows an error, and if I type only 127.0.0.1 after stopping the apache2 service with the command mentioned above, it says "unable to connect". I am not sure what to do now. Help!
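
    A first diagnostic step (a sketch, not part of the original question) would be to see which process, if any, is actually holding the control port, and what port Tor is configured to use:

      sudo netstat -tlnp | grep 9051          # which process (PID/name), if any, is listening on 9051
      sudo ss -tlnp | grep 9051               # the same check with the newer iproute2 tool
      grep -i controlport /etc/tor/torrc      # system torrc; Vidalia may keep its own copy under ~/.vidalia

    If nothing is listening on 9051, the likely problem is that Tor itself is not running rather than a port clash; Apache normally binds to port 80, not 9051.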

    Read the article

  • How do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135m video card).
    - For starters, the System Displays settings app always showed only one unknown monitor, even though I had the external Acer monitor connected.
    - So I downloaded and installed the nVidia drivers. According to what I read I would need to use only the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to show only a single monitor anyway.)
    - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station).
    - The nVidia settings app now recognized both monitors. So, I set up the X Server display config as Separate X screen for both monitors. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080.
    - Everything seemed like it would work, but the external monitor was just a complete white screen. The external monitor was non-functional, even though sometimes it would show the background image - still nothing would show up over there.
    - So, I checked the Enable Xinerama box.
    - Now, after logging out and back in, the wallpaper extends to both screens but I get no taskbar at the bottom or top, no system menus, and I have to press the power button to restart or log off.
    - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is GNOME Classic.
    - This is pretty much the same symptoms as found here: How do I fix 11.10 GUI?
    - So, I resign myself to the older shell.
    - Everything works fine until ... I unplug the external monitor ... this is a laptop after all.
    - Anyway, after doing some work on the road, I plug back in and I still see both screens and it's functional except, ...
    - Now, the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and are just reducing the maximum size of windows on that screen.
    - See screenshot here: http://i39.tinypic.com/35d2kh1.png
    So ...
    1. How do I get rid of those extra 3 black bars? They're taking valuable screen space.
    2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?
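
    As a quick sanity check (not part of the original question, and with the proprietary nVidia driver in separate-X-screen mode it may not report both outputs), one can ask X what geometry it is actually driving:

      xrandr --query                 # lists the outputs X knows about and the modes each one supports
      xdpyinfo | grep dimensions     # the total screen size X is currently driving

    With Xinerama enabled on 11.10, panel and compositing behaviour is known to be fragile, which is consistent with the missing taskbars described above.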

    Read the article

  • I studied electrical engineering. Can I work as a developer? [closed]

    - by FailedDev
    A while ago I finished my MSc in Electrical Engineering and started working as an engineering consultant, where I mostly do development work. I am good at picking up languages, technologies and tools. I have fiddled with C/C++/C#/perl/ant/bash/html/css etc. Although I have never had a complaint about my work, rather the contrary, I just feel that some day someone will give me a really hard task which would seem rather trivial for a computer scientist but hard for me. Should I read or do something to become a better developer? Should I pick up a book about design patterns or algorithms, for example? Is it normal that I have this kind of "fear"? Sorry if this is the wrong place to post this question. Please notify me so I can close it if this is the case.

    Read the article

  • Why does pasting sometimes not work in gnome-terminal?

    - by Matthew
    Ctrl + Shift + C and Ctrl + Shift + V are supposed to replace the normal Ctrl + C and Ctrl + V in gnome-terminal. Sometimes they work, but usually they have no effect. What are some potential reasons for this? I'm not sure what other information to give. Edit: It seems that manually selecting Paste from the Edit menu does not work either. Right click > Paste works, but Edit > Paste does not. Copying works, but pasting does not. Also, I have vi-mode enabled (set -o vi in my ~/.bashrc). Could this have something to do with it? Edit: Here is a video demonstrating the problem. I used Screenkey (in "raw" mode, to catch "shift") to show what keys I am pressing.
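
    One way to narrow this down (a sketch, not from the original question) is to check which X selection actually holds the copied text, since the CLIPBOARD selection (used by Ctrl + Shift + V and the menus) and the PRIMARY selection (pasted with middle-click) are separate buffers:

      sudo apt-get install xclip          # small helper for inspecting X selections
      xclip -selection clipboard -o       # prints what is currently in the CLIPBOARD selection
      xclip -o                            # prints the PRIMARY (mouse-highlight) selection

    If the copied text shows up in PRIMARY but not in CLIPBOARD, the copy step is the problem rather than the paste shortcut.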

    Read the article

  • How can I get a USB floppy drive to work?

    - by jfmessier
    I have a Toshiba USB floppy drive that I need to use under Ubuntu. When I connect it and insert a floppy disk, I do not see anything mounted under Ubuntu 10.10. I suspected the hardware and/or the floppy disk was defective, so I tested the floppy disk as well as the floppy drive itself under Windows XP, and everything was just fine. I was able to find the following instructions:
    Add the following line to the /etc/modules file:
      floppy
    Enter the following shell commands:
      mkdir /media/floppy
      mount -t vfat /dev/sdc /media/floppy -o uid=1000
    This will mount the floppy, but I would like this to happen automatically, so that when I connect the drive to the USB port, it automatically mounts the floppy. How can I make this work? Or does Ubuntu only work with internal floppy drives?
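
    One low-tech option (a sketch, assuming the drive really does show up as /dev/sdc, which can vary between machines) is an /etc/fstab entry that at least lets the floppy be mounted without sudo, even if it is not fully automatic:

      # /etc/fstab line - device name taken from the question above and may differ elsewhere
      /dev/sdc  /media/floppy  vfat  rw,user,noauto,uid=1000  0  0

    With that in place, mount /media/floppy works as a normal user; fully automatic mounting on plug-in would normally be handled by udev/udisks rules instead.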

    Read the article

  • How well does Intel 3000 HD work on Ubuntu?

    - by Simon
    Right now I have a notebook with an Nvidia 8400M GS (I know, it's not a good card) and it's impossible to work normally when I plug in an external monitor (1920x1080). Windows 7 can deal with it without problems (1440x900 on the notebook + 1920x1080 external). On Ubuntu I have to choose one screen and turn off the second one. Even with only one screen, Ubuntu (Unity or even GNOME 3) sometimes hangs for a while. I've not found a solution for this yet, but never mind, it's probably because of my card and/or nvidia's drivers. I'm going to buy a new PC, but for now only with the integrated Intel HD 3000, and my question is: should I expect similar problems with this card? I've found a link to Intel's webpage about the drivers ("only the community develops them"), and I'm a bit concerned. I'll then use only one monitor (the bigger one), but how well do those drivers work? Are there any performance tests?

    Read the article

  • Sync ERROR!! LOST MY WORK!

    - by Pedro Pisandelli
    Sorry for my English... I hope you understand me... I had a document. I edited it yesterday. But today, when I opened it, it was out of sync. It showed a version from one month earlier!! I LOST A LOT OF WORK!! And I can't recover the right version of the document! I have a paid plan for Ubuntu One, but this made me very angry. And I don't see a way to recover it and don't see a way to talk to somebody! There's no recovery mode like Dropbox has... Man, I'm really ANGRY!! REALLY! I won't recommend Ubuntu One services anymore! I don't know what to do... I lost my work and now I'm one month behind! Thanks!!!

    Read the article

  • Why does the sudo command not work in chroot?

    - by katarina
    I just installed a 32-bit chroot to run on my 64-bit system. In the chroot environment, the sudo command doesn't work; it says sudo: command not found. Also, when I try the su root command, my password doesn't work (su: authentication failure). What password do they want? I'm quite new to Ubuntu, so actually I don't really know what I'm doing; I am just trying to follow instructions. I solved this particular problem simply by starting the chroot with the command:
      katarina@ubuntu:~$ schroot -c oneiric_i386 -u root
    instead of the one I used the first time:
      katarina@ubuntu:~$ schroot -a
    I still have some other problems, but I guess that's not for this question.
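
    Starting the chroot as root is a workaround; the likelier root cause is simply that the minimal 32-bit environment does not ship the sudo package at all. A possible fix, sketched from inside a root session of the chroot (the group name depends on the release - older Ubuntu used admin, newer ones use sudo):

      apt-get update && apt-get install sudo    # install sudo inside the chroot, not on the host
      adduser katarina sudo                      # or: adduser katarina admin, on older releases

    The su authentication failure is expected behaviour if the root account inside the chroot has no password set.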

    Read the article

  • How can I get my wireless webcam to work?

    - by hellocatfood
    I recently bought this wireless webcam. I'm having trouble getting it to work on Ubuntu 11.04. I ran lsusb and got the following information about the device: Bus 006 Device 003: ID 0416:a91a Winbond Electronics Corp. I did a Google search for the device ID, and this website informs me that it matches the LogiLink Wireless Webcam (so Maplin probably just rebranded this!). What that website states is that this device should work, which it doesn't. The problem I'm facing is that I don't get any actual video being streamed or shown. The built-in microphone works and, when running Cheese, when I press the camera button on the webcam itself the software recognises that the button is pressed. On that note, when running Cheese from the terminal with this webcam attached I get the following errors:
      libv4l2: error getting pixformat: Invalid argument
      libv4l2: error setting pixformat: Input/output error
    Any help is appreciated.
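
    A reasonable next step (a sketch, not from the original question) is to ask the V4L2 layer what formats the device claims to support, since the pixformat errors suggest a mismatch between what Cheese requests and what the driver offers:

      sudo apt-get install v4l-utils
      v4l2-ctl --list-devices                       # confirm which /dev/video* node belongs to the webcam
      v4l2-ctl -d /dev/video0 --list-formats-ext    # /dev/video0 is an assumption; adjust to the node found above

    If the format list comes back empty or with an error, the kernel driver probably does not fully support this chipset, and the fix would be at the driver level rather than in Cheese.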

    Read the article

  • How to get KeePass to properly work with Chromium?

    - by Tom
    The two-channel auto-type obfuscation feature of KeePass doesn't work for me with Chromium (on Ubuntu 12.04, 64-bit). However, it works just fine with Firefox. Does anyone know how to fix this? Textboxes in web forms in Chromium seem to have something special that causes this feature to fail: only some of the username/password characters are being auto-typed. This might be related: if I select an entry in KeePass and click "Copy User Name", I can paste it fine with Ctrl+V in any textbox in Firefox, but I can't in Chromium. However, text copied using Ctrl+C from a regular text file (say, from gedit) can be pasted fine in both browsers. What may be wrong? I wouldn't like to deactivate this feature for all the entries in my KeePass files, as I use them on Windows too and they work just fine there (even in Google Chrome for Windows). This feature gives an appreciated extra security measure against spyware/keyloggers.
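
    Two-channel auto-type mixes synthetic keystrokes with clipboard pastes, so a quick way to separate the two failure modes (a sketch, not from the original question) is to test each channel against a Chromium textbox on its own:

      sudo apt-get install xdotool xclip
      sleep 3; xdotool type --delay 100 'test1234'       # focus a Chromium textbox during the 3-second pause
      echo -n 'test5678' | xclip -selection clipboard    # then try Ctrl+V in the same textbox

    If the xdotool text arrives but the clipboard paste does not (or vice versa), that points to which half of the obfuscated auto-type Chromium is dropping.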

    Read the article

  • How to make apt autocompletion work in a minimal system (in an LXC container)?

    - by Adam Ryczkowski
    When I work inside a thin LXC container on 12.04 I have only a very basic system. In particular, /etc/bash_completion.d is missing entries such as the one for apt, which I find particularly useful. Is there a standard package that installs the autocompletion for apt, or should I copy the file manually? Just copying the files into /etc/bash_completion.d manually doesn't seem to work. I use bash as my command interpreter. What am I missing here?
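
    A minimal sketch of what usually needs to happen (assuming the container image simply lacks the completion package and the hook that loads it):

      sudo apt-get update && sudo apt-get install bash-completion
      # the completions only take effect if /etc/bash_completion is sourced by the shell:
      grep -q bash_completion ~/.bashrc || echo '. /etc/bash_completion' >> ~/.bashrc

    Dropping files into /etc/bash_completion.d does nothing by itself, which would explain why manual copying appeared to have no effect.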

    Read the article

  • Ubuntu server 12.04 on AWS - How does the passwordless sudo work for the ubuntu user?

    - by aychedee
    I'm using Ubuntu Server 12.04 on Amazon. I want to add a new user that behaves the same as the default ubuntu user. Specifically, I want passwordless sudo for this new user. So I added a new user and went to edit /etc/sudoers (using visudo, of course). From reading that file it seemed like the default ubuntu user was getting its passwordless sudo from being a member of the admin group, so I added my new user to that. Which didn't work. Then I tried adding the NOPASSWD directive to sudoers. Which also didn't work. Anyway, now I'm just curious: how does the ubuntu user get passwordless privileges if they aren't defined in /etc/sudoers? What is the mechanism that allows this?
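
    On cloud images the extra grants typically live in drop-in files rather than in /etc/sudoers itself, which pulls them in via its #includedir /etc/sudoers.d line. A sketch of how to inspect and replicate that for a hypothetical user newuser (the file names are an assumption based on typical cloud-init layouts):

      sudo ls /etc/sudoers.d/                                    # the ubuntu user's NOPASSWD rule usually lives here
      echo 'newuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-newuser
      sudo chmod 0440 /etc/sudoers.d/90-newuser                  # sudo ignores badly-permissioned drop-in files
      sudo visudo -c                                             # sanity-check the syntax before logging out

    This would also explain why editing /etc/sudoers directly appeared to have no effect on the ubuntu user's behaviour.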

    Read the article

  • Why do control keys (ctrl, shift, alt) not work sometimes?

    - by EricSchaefer
    Sometimes (about once a week) the control keys (Ctrl, Shift, Alt) do not work anymore. They do work when I boot up, but after a while they stop working. Logging out and in again repairs it. What can cause something like that? Edit: It just happened again. Incidentally, it happened right after vmware-player crashed, just like the last time. Coincidence? Edit: It is a laptop keyboard (HP EliteBook 8730w) with a German layout.
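
    When this is triggered by a crashing application that had grabbed the keyboard (VMware is a plausible suspect here), the modifier state can sometimes be inspected and reset without logging out. A sketch of what to try the next time it happens (assuming an X session and the German layout mentioned above):

      xev               # press Ctrl/Shift/Alt in the xev window: do KeyPress/KeyRelease events appear at all?
      setxkbmap de      # reload the keyboard layout, which rebuilds the modifier mapping

    If xev shows no events for the modifiers, the keys are being grabbed or filtered before X delivers them, which points back at the crashed application rather than at the keyboard itself.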

    Read the article

  • Jumping around to work on different features when you get stuck, is it a source of project failures?

    - by codecompleting
    On personal projects (or at work), if you get stuck on a problem, or are waiting to figure out a solution to it, and you jump to another section of your code, don't you think that is a good way to end up with an application that is buggy or, worse yet, never gets completed? Assuming you are not using git and coding each feature on a specific branch, things can get out of hand, since you have three different features you are working on and unresolved issues in each. So when you get back to work, you get stressed out because you have these hanging issues and half-baked code lingering about. What's the best way to avoid this problem (if you have it)? I'm guessing using something like git and creating a branch per feature is the safest way to avoid this bad habit. Any other suggestions?
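
    For what it's worth, the branch-per-feature workflow mentioned above is cheap to adopt; a sketch of the usual commands (the branch names here are made up for illustration):

      git checkout -b feature-login           # start an isolated branch for the first feature
      git stash                               # or commit work-in-progress when you hit a wall and want to switch
      git checkout -b feature-search master   # open the next feature from a clean base
      git checkout feature-login              # come back later; the half-finished work is still isolated

    The half-baked code still exists, but it stops leaking into the other features you switch to.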

    Read the article

  • Our job recently changed from Customer Support to Sales/Support... Sales? [on hold]

    - by Hollis Nieves
    I have been employed by the same company (mobile phone business) for quite some time as a Customer Support Representative. I enjoyed the job and have always performed well, until recently. Around mid-May we were converted into a Customer Support/Sales role, and we now have to sell TV service to customers who call in about their mobile phone service... They want us to be really pushy about it. We haven't had any training on it, and our supervisors don't really understand anything about it either, but they want us to "Sell! Sell! Sell!" ugggh. Any advice on how I can get better at my job? I've never done sales before, and I really worry about losing my job because of my performance. Sales makes me uneasy... Any feedback would be appreciated.

    Read the article

  • Found a better solution to a problem at work - should I deter from posting the code snippet online?

    - by Calmarius
    I think most of us programmers have used Stack Overflow to solve everyday problems: looked for an efficient algorithm to do something. Now imagine a situation: you have a problem to solve. You googled a bit and found a Stack Overflow question, but you are not really satisfied with the answers so far. So you have to do your own research: you need to do it because you want it in the company's app. Eventually, after some hours, you have found the better solution. You're happy, you added it to the company's code base, and then you want to submit your answer, with a code snippet (just several lines), to the question you found before, to help others too. But wait: the company's software is closed source, and you worked on it on the clock. So does this mean I shouldn't post the answer to that question, either at work or at home, for the rest of my life, because I solved it at work and the company owns that piece of code?

    Read the article

  • Multiple cable adapter setup not working - VGA to smartphone. All cables tested and work

    - by Christopher Rucinski
    Issue
    The pictured overhead projector setup does not work: #1 - #2 - #3 - Phone. All cables are tested and work! The issue is the HDMI connection between cables #2 and #3. With all the other cables, the screen is automatically displayed on the projector; no extra work needed. With the pictured setup, the smartphone screen is not displayed on the projector screen. What is the issue with the HDMI connection??
    Background
    We recently had to do presentations at work (a school), but the administration only provided a VGA means of hooking up to the projectors, most likely for cost reasons. Anyway, there are several teachers that have brand new Samsung Series 9 ultrabooks (or similar) - you know, the ones without VGA support. So I bought an adapter for those ultrabooks (cable #5 in the picture below). However, both my coworker and I have been wanting to just display our phone screens on the projector. This, I knew, would require some extra work.
    What I have
    - VGA cable to projector (cables go through the wall) - for laptops
    - HDMI to VGA cable - for laptops
    - MHL adapter - for 11-pin microUSB phones
    - microHDMI to VGA cable - for ultrabooks
    - 11-pin to 5-pin microUSB adapter - for older 5-pin microUSB phones
    Equipment
    - Projectors: 1 projector with VGA and HDMI input (the issue is coworkers forget to switch sources); 1 projector with VGA-only input
    - Laptops: 2 new Samsung ultrabooks without VGA or HDMI support; 1 ultrabook with VGA and HDMI support; several other laptops with at least VGA support
    - 1 tablet with 11-pin microUSB
    - at least 1 new phone with 11-pin microUSB
    - at least 1 old phone with 5-pin microUSB
    Tested
    - VGA cable (#1) to laptop: Good
    - VGA cable (#1) to HDMI adapter (#2) to laptop: Good
    - VGA cable (#1) to microHDMI adapter (#5) to laptop: Good
    - Projector to HDMI cable (not shown) to MHL adapter (#3) to Galaxy Note 3 smartphone: Good
    - VGA cable (#1) to HDMI adapter (#2) to MHL adapter (#3) to Galaxy Note 3 smartphone: Does not work!!
    Extra Notes
    The 11-pin MHL adapter will not fit inside the 11-pin to 5-pin microUSB adapter so older phones can be displayed on the screen.

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples:
    cd shows all directories under my current folder:
      $ cd co<tab><tab>
      cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/
    Commands like ls and less show all files and directories under my current folder:
      $ ls co<tab><tab>
      cmake/          config/      .cproject  Doxyfile.in  include/      programs/  README.txt  src/  tests/
      CMakeLists.txt  COPYING.txt  doc/       examples/    mainpage.dox  .project   sandbox/    .svn/
    Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed):
      $ cd ~/D<tab><tab>
      cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/
    But it seems to be working fine for commands and variables:
      $ if<tab><tab>
      if        ifconfig  ifdown    ifnames   ifquery   ifup
      $ echo $P<tab><tab>
      $PATH  $PIPESTATUS  $PPID  $PS1  $PS2  $PS4  $PWD  $PYTHONPATH
    I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced:
      if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
          . /etc/bash_completion
      fi
    I've even tried manually executing that file, but it doesn't fix the problem:
      $ . /etc/bash_completion
    There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now.
    Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc:
      # ~/.bashrc: executed by bash(1) for non-login shells.
      # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
      # for examples
      #
      # Modified by Neil Traft
      #source ~/.profile

      # Allow globs to expand hidden files
      shopt -s dotglob nullglob

      # If not running interactively, don't do anything
      [ -z "$PS1" ] && return

      # don't put duplicate lines or lines starting with space in the history.
      # See bash(1) for more options
      HISTCONTROL=ignoreboth

      # append to the history file, don't overwrite it
      shopt -s histappend

      # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
      HISTSIZE=1000
      HISTFILESIZE=2000

      # check the window size after each command and, if necessary,
      # update the values of LINES and COLUMNS.
      shopt -s checkwinsize

      # If set, the pattern "**" used in a pathname expansion context will
      # match all files and zero or more directories and subdirectories.
      #shopt -s globstar

      # make less more friendly for non-text input files, see lesspipe(1)
      [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

      # set variable identifying the chroot you work in (used in the prompt below)
      if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
          debian_chroot=$(cat /etc/debian_chroot)
      fi

      # Color the prompt
      export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] "

      # enable color support of ls and also add handy aliases
      if [ -x /usr/bin/dircolors ]; then
          test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
          alias ls='ls --color=auto'
          #alias dir='dir --color=auto'
          #alias vdir='vdir --color=auto'
          alias grep='grep --color=auto'
          alias fgrep='fgrep --color=auto'
          alias egrep='egrep --color=auto'
      fi

      # Add an "alert" alias for long running commands. Use like so:
      #   sleep 10; alert
      alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

      # Alias definitions.
      # You may want to put all your additions into a separate file like
      # ~/.bash_aliases, instead of adding them here directly.
      # See /usr/share/doc/bash-doc/examples in the bash-doc package.
      if [ -f ~/.bash_aliases ]; then
          . ~/.bash_aliases
      fi

      # enable programmable completion features (you don't need to enable
      # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
      # sources /etc/bash.bashrc).
      if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
          . /etc/bash_completion
      fi
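
    Given the Update above, one thing worth checking (only a guess, not something established in the question) is whether the shell options set near the top of that .bashrc are what broke completion, since nullglob in particular is known to interfere with bash-completion's internal globbing:

      shopt dotglob nullglob     # compare the output in a working terminal vs. a broken one
      shopt -u nullglob          # turn nullglob off in the broken shell and retry the <tab><tab> tests

    If completion comes back after unsetting nullglob, the fix would be to drop it (or scope it more narrowly) in the .bashrc.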

    Read the article

  • Make it simple. Make it work.

    - by Sean Feldman
    In 2010 I had the experience of working for a business that had lots of challenges. One of those challenges was a lack of technical architecture and business value recognition, which translated into spending an enormous amount of manpower and money on creating C++ solutions for a desktop client without using .NET, to minimize the "footprint" (#2) of the client application in deployment environments. This was an awkward experience, considering that custom C++ code was created from scratch to make clients talk to the .NET backend, while simply having .NET as a dependency would have cut time to market by at least 50% (and I'm downplaying the estimate). Regardless, the recent Microsoft announcement about .NET vNext reminded me of that experience and how short-sighted the architecture at that company was. The investment made into a C++ client that cannot be maintained internally, by a team whose specialization is .NET, has created a situation where the code will become more brutal to maintain over time while the number of developers who understand it keeps shrinking. Not only that: the ability to go cross-platform (#3) and the performance gained with native compilation (#1) would have been an immediate payback. Why am I saying all this? To make a simple point to myself and remind myself again: when working on a product that needs to get to market, make it simple, make it work, and then see how technology is changing and how you can adapt. Simplicity will not let you down. But a complex solution always will.

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer which has a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs, i.e. when the PLC programmer opens a .s7p file, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function, called CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ASCII characters. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries will take localization and/or internationalization into consideration while it is being developed, in order to make the product fail-safe, i.e. the software developers would use Unicode while compiling their code and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption, isn't there a significant chance that it did not work at all? I think my doubt will get clarified if my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the appropriate responses I was looking for, so I'm posting it here.

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com.
      $ curl -i http://essayme.co.uk
      HTTP/1.1 301 Moved Permanently
      Cache-Control: max-age=900
      Content-Type: text/html
      Location: https://google.com
      Server: Microsoft-IIS/7.5
      X-AspNet-Version: 4.0.30319
      X-Powered-By: ASP.NET
      Date: Sat, 07 Jun 2014 11:14:16 GMT
      Content-Length: 0
      Age: 0
      Connection: keep-alive
    However, if I go to https://essayme.co.uk it just freezes and times out.
      $ curl -i https://essayme.co.uk
      curl: (7) Failed connect to essayme.co.uk:443; Operation timed out
    What is happening in the second case? (And, if possible, how can I get the redirect to work for https?)
    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on that domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary). The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com. Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same:
    - Visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected.
    - Visiting https://mywebsite.com would freeze and time out again.
    So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.
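
    A way to confirm what is (or is not) answering on port 443 at the redirect service, without buying a certificate (these are generic diagnostics, not taken from the original question):

      curl -v --connect-timeout 5 https://essayme.co.uk                        # does the TCP connection to :443 even open?
      openssl s_client -connect essayme.co.uk:443 -servername essayme.co.uk    # does anything speak TLS there?

    A timeout at the TCP stage would suggest the registrar's forwarding host simply does not listen on port 443 at all, in which case no redirect can ever be served over https without something terminating TLS for that hostname first.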

    Read the article

  • How can I get my KVM switch to work? (Win7 & Ubuntu 10.10)

    - by Will W.
    I bought a KVM switch and I'm trying to use it to connect my main PC (Win7) and the new machine I just installed Ubuntu on. I hooked it up properly and tried using it. It worked when switching from the Win7 machine to the Ubuntu one, but after the (first and only) successful switch, Ubuntu just didn't seem to recognize my mouse or keyboard. Basically, the easiest way to explain what happened is that it only worked with Win7. When I switched over to Ubuntu by pressing [scroll-lock] [scroll-lock], my keyboard and mouse were not recognized. The lights on the keyboard and mouse did come on while on Ubuntu, but they didn't function, and since the keyboard wouldn't function, I couldn't do a [scroll-lock] [scroll-lock] to switch back to the Win7 machine. So I was basically locked into Ubuntu with no mouse or keyboard, and I had to unplug the keyboard/mouse USBs and the D-sub to plug the monitor D-sub back into the Win7 computer to type up this thread and google the issue. It seems some people have had this issue before, but I couldn't find a fix... I am 80% sure it has to do with drivers... but there aren't any for KVM switches, at least not for this one. Also, I was never able to find Ubuntu drivers/firmware for my mouse and keyboard (Logitech G15 and Razer Deathadder 3500). I don't know how to fix this; perhaps someone super-savvy could write/code a script or work-around or something? I really need to get this thing working. My back is getting sore from bending over and plugging in / unplugging usb/monitor/usb/monitor/usb/usb over and over again lol... and I would really be sad if the constant plugging and unplugging of the USBs or the D-sub port damaged the ports over time... I don't want that... There has to be some way to get this working. Can anyone help? The KVM is an IOGEAR GCS632U. Win7 x64, Ubuntu 10.10.

    Read the article

  • Why won't my graphics work in Ubuntu 12.04 LTS?

    - by user170974
    I'm very new to Ubuntu and to Linux in general, and took the leap and formatted my PC to Ubuntu 12.04 LTS very recently :) I seem to be having some trouble getting my graphics card to run properly. I looked over what information I could find, but I still cannot get it up and running, and figured this was a good place to ask for help. The information I can find on my graphics is as follows. The terminal command lspci outputs:
      01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250]
      02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5650/5750 / 6530M/6550M]
    I tried using a mixture of the following links:
    - How do I fix my installation of ATI Catalyst Video Driver in 12.04 LTS?
    - What is the correct way to install ATI Catalyst Video Drivers (fglrx)?
    - Ubuntu Precise Installation Guide
    But it does not seem to work, since running fglrxinfo in a terminal gives:
      display: :0.0  screen: 0
      OpenGL vendor string: VMware, Inc.
      OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x301)
      OpenGL version string: 1.4 (2.1 Mesa 9.0.3)
    What am I doing wrong here? All help appreciated ;)
    Edit: I have tried the legacy driver from www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13-4-linux-x86.x86_64.zip. I also tried the guide at https://launchpad.net/~makson96/+archive/fglrx, which caused the system to crash (black screen, no boot). Neither seemed to work. I did, however, reinstall Ubuntu 12.04 LTS and re-tried both with no success. Reinstalling Ubuntu did fix the broken-dependencies problems, though.
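
    The llvmpipe renderer string above means OpenGL is falling back to software rendering, i.e. no GPU driver is actually being used for 3D. A short, hedged way to see which kernel driver each of the two GPUs ended up with (generic checks, not from the original question):

      lspci -k | grep -iA3 vga       # "Kernel driver in use" should name fglrx or radeon for each controller
      lsmod | grep -E 'fglrx|radeon'
      glxinfo | grep -i renderer     # needs the mesa-utils package; repeats the llvmpipe vs. GPU check

    On dual-GPU (switchable graphics) laptops like this one, the legacy and current Catalyst packages often cannot drive both chips at once, which is a common reason an fglrx install appears to succeed yet fglrxinfo still reports llvmpipe.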

    Read the article
