Search Results

Search found 19299 results on 772 pages for 'off topic'.

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
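    To expand on footnote [6], here is a minimal sketch of the same Crossbow tooling; the link, etherstub, and flow names are illustrative, not taken from the article:

      # Create an internal virtual switch (etherstub) and attach two vNICs to it,
      # so two VEs on the same host exchange traffic at system-bus speed.
      dladm create-etherstub stub0
      dladm create-vnic -l stub0 vnic1
      dladm create-vnic -l stub0 vnic2

      # Cap a vNIC on the physical link at 100 Mbps, as in footnote [6], then verify.
      dladm create-vnic -l net0 -p maxbw=100M vnic0
      dladm show-vnic

      # Protocol-specific control: give HTTPS traffic on vnic0 its own managed flow.
      flowadm add-flow -l vnic0 -a transport=tcp,local_port=443 httpsflow
      flowadm set-flowprop -p maxbw=50M httpsflow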

  • Pentium 4 Computer Won't Power Up

    - by Harvey
    I have a faulty Pentium 4 workstation with data that I would like to retrieve. Here are the symptoms and what I've done so far: The machine is totally dead; the motherboard LED is lit, but that is the only sign of life. I have replaced the power supply and bypassed the on/off switch. I tried a PC Analyzer motherboard tester, but the card didn't get any power. When I unplugged the P4 cable from the motherboard and hit the on/off switch, the power supply fan came on and I got codes from the analyzer, but nothing that seems to be of any value. The machine does not boot and will not shut down by hitting the switch. Is it a bad motherboard, or could it be a bad CPU cooling fan?

  • Have to reset wireless frequently on iPad / Macs connected to Cisco WRV200

    - by retailevolved
    I have a Cisco WRV200 set up in our small office. The wireless signal works great for PCs, but I have found that Macs and iPads have to frequently reset the connection. It will be working for about 10 minutes, then stop. To fix it, I just turn off Airport and turn it on again. On an iPad, I have to go into settings, shut the wireless off, and then turn it on again. The router is broadcasting mixed B/G on channel 6. Security is WPA-PSK2. Anybody out there had similar issues with their router? How can I fix it?

  • Can I fork a copy command on ReadyNAS SSH?

    - by DanyW
    I have a ReadyNAS 102 with a couple of USB drives attached. There are times I want to copy files between volumes, but I have accidentally killed the copy by closing the SSH session. Is it possible for me to fork a cp or mv process over SSH? As it currently stands, when I close the SSH session, be it by accidentally closing the terminal window or by closing my laptop lid and putting it to sleep, the copy process stops. Can I do something like cp ~/blah /some/other/path & and have the process keep running to completion in the background even if the SSH session is terminated?
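    For reference, a minimal sketch of the usual ways to keep such a copy alive after the session drops (assuming the NAS userland has nohup; screen or tmux may need to be installed separately):

      # Detach the copy from the terminal; output and errors go to cp.log.
      nohup cp -a ~/blah /some/other/path > ~/cp.log 2>&1 &

      # Or run it inside a detachable screen session, if screen is available:
      screen -S copyjob
      cp -a ~/blah /some/other/path
      # detach with Ctrl-A d, re-attach later with: screen -r copyjob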

  • Some keyboard keys not working properly

    - by surfmadpig
    I'm using Windows 7. All of a sudden, a couple of hours ago, this happened: my keyboard's number keys [above the letters] stopped working properly, both as numbers and as symbols. Only 5 and 6 are functional. I've also noticed that the End key isn't working either, and perhaps a couple more from that group. I'm pretty sure it has something to do with those evil Sticky Keys/Filter Keys/ease of access features, BUT I've turned off all the Ease of Access keyboard options and nothing has changed. Is it possible that something is still turned on even though I unchecked it? Do the on/off checkboxes control WHEN it happens or IF it happens? I also tried rebooting and uninstalling/reinstalling the keyboard from Device Manager, to no avail. It's certainly a software issue and not a hardware issue, as I've tried another keyboard and the problem persists. And, predictably enough, it's annoying. Any ideas?

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?
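    One common way to remove both the duplication and the repeated encryption, sketched under the assumption that the certificates move to the ELB and that the include path below is a placeholder:

      # /etc/nginx/conf.d/static-assets.conf  (shared include, hypothetical path)
      location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
          access_log off;
          expires max;
          add_header Pragma public;
          add_header Cache-Control "public, must-revalidate, proxy-revalidate";
          add_header Vary Accept-Encoding;
      }

      # Per-site server block: plain HTTP behind the ELB, which terminates SSL.
      server {
          listen 80;
          server_name *.example1.com;
          include /etc/nginx/conf.d/static-assets.conf;
          location / {
              proxy_pass http://127.0.0.1:8080;
              include /etc/nginx/conf.d/proxy.conf;
              # Preserve the scheme the client used, as reported by the ELB.
              proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
          }
      }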

  • Default Keyboard for new users in Windows 7

    - by xited
    I just installed Windows 7 and I want all users signing in to the computer to see the Language Bar customized with the following three languages: "English (American)" "French (Standard)" "Chinese (Simplified PRC)" I am running the following four lines of code at log on in order to change the registry such that each user will see the language bar, and then have access to the three keyboard layouts mentioned above. reg add "HKCU\Software\Microsoft\CTF\LangBar" /v ShowStatus /t REG_DWORD /d 4 /f reg add "HKCU\Keyboard Layout\Preload" /v 2 /d 0000040c reg add "HKCU\Keyboard Layout\Preload" /v 3 /d 00000c0a reg add "HKCU\Keyboard Layout\Preload" /v 4 /d 00000804 The above works fine, but with one small/major inconvenience: the user has to log off and then log back on in order for these changes to take effect and see the language bar, as described above. The question becomes: How can I force these changes to take effect so that users don't have to log off and then log back in to see the language bar. This has to be done automatically when users log in.
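    One possible workaround, sketched on the untested assumption that the language bar process re-reads the Preload list when it is restarted, is to bounce ctfmon.exe at the end of the same logon script:

      :: ...the four reg add lines above, then restart the language bar process.
      taskkill /f /im ctfmon.exe >nul 2>&1
      start "" ctfmon.exe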

  • Install Ubuntu in UEFI mode (unable to boot from USB)

    - by Adele
    I recently bought a Dell Inspiron 15R SE with Windows 8 (64-bit) pre-installed (UEFI supported). I want to install Ubuntu in dual boot with Windows 8. I tried to follow all the instructions here: https://help.ubuntu.com/community/UEFI and here: Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported). So, I set Secure Boot to "off" in the BIOS and disabled Fast Startup as described here: http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html I created a bootable USB key for Ubuntu (Ubuntu 13.10, 64-bit international edition) with Unetbootin. The problem is that I am unable to boot from the USB key: the computer tries to boot in an infinite loop. I also tried to boot from USB with the "Legacy Boot" option instead of UEFI; in this case, the computer says there are no bootable devices. Of course, I tried to boot from my USB key on another computer with a normal BIOS and it works perfectly. Do you have any ideas about what I need to do to be able to boot from USB? Thanks in advance for your help, Adele
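    One thing worth trying, since Unetbootin-built keys do not always carry the UEFI boot files across: write the ISO to the key directly (a sketch; /dev/sdX is a placeholder for the actual USB device, and everything on the key is destroyed):

      sudo umount /dev/sdX*        # unmount any auto-mounted partitions first
      sudo dd if=ubuntu-13.10-desktop-amd64.iso of=/dev/sdX bs=4M
      sync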

  • Reverse proxy for a subdirectory in nginx

    - by Maple
    I want to set up a reverse proxy on my VPS for my Heroku app (http://lovemaple.heroku.com), so that if I visit mysite.com/blog I get the content from http://lovemaple.heroku.com. I followed the instructions on the Apache wiki: location /couchdb { rewrite /couchdb/(.*) /$1 break; proxy_pass http://localhost:5984; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } I changed it to fit my situation: location /blog { rewrite /blog/(.*) /$1 break; proxy_pass http://lovemaple.heroku.com; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } When I visit mysite.com/blog, the page shows up, but the JS/CSS files return 404. Their links become mysite.com/style.css rather than mysite.com/blog/style.css. What's wrong and how can I fix it?
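    The 404s happen because the blog emits root-relative asset links (/style.css) that never match the /blog location. The clean fix is to make the app generate /blog-prefixed URLs; as a stopgap, a sketch that also forwards those asset requests (the extension list and Host value are assumptions):

      location ~* \.(?:css|js|png|jpe?g|gif|ico)$ {
          proxy_pass http://lovemaple.heroku.com;
          proxy_set_header Host lovemaple.heroku.com;
      }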

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks not and for some reason I cannot seem to get nginx's try_files to work with my wordpress permalinks. I am hoping someone will be able to tell me where I am going wrong and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my Configuration files nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## # Defines the cache log format, cache log location # and the main access log location. log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' ; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } mydomain.com.conf server { listen 123.456.78.901:80; # IP goes here. server_name www.mydomain.com mydomain.com; #root /var/www/mydomain.com/prod; index index.php; ## mydomain.com -> www.mydomain.com (301 - Permanent) if ($host !~* ^(www|dev)) { rewrite ^/(.*)$ $scheme://www.$host/$1 permanent; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # All media (including uploaded) is under wp-content/ so # instead of caching the response from apache, we're just # going to use nginx to serve directly from there. location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$ root /var/www/mydomain.com/prod; } # Don't cache these pages. location ~* ^/(wp-admin|wp-login.php) { proxy_pass http://backend; } location / { if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") { set $do_not_cache 1; } proxy_cache_key "$scheme://$host$request_uri $do_not_cache"; proxy_cache main; proxy_pass http://backend; proxy_cache_valid 30m; # 200, 301 and 302 will be cached. # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. #try_files $uri $uri/ /index.php?q=$uri$args; #try_files $uri =404; proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404; } # Cache purge URL - works in tandem with WP plugin. # location ~ /purge(/.*) { # proxy_cache_purge main "$scheme://$host$1"; # } # No access to .htaccess files. location ~ /\.ht { deny all; } } # End server gzip.conf # Gzip Configuration. 
gzip on; gzip_disable msie6; gzip_static on; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; proxy.conf # Set proxy headers for the passthrough proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; add_header X-Cache-Status $upstream_cache_status; backend.conf upstream backend { # Defines backends. # Extracting here makes it easier to load balance # in the future. Needs to be specific IP as Plesk # doesn't have Apache listening on localhost. ip_hash; server 127.0.0.1:8001; # IP goes here. } cache.conf # Proxy cache and temp configuration. proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m; proxy_temp_path /var/www/nginx_temp; proxy_cache_key "$scheme://$host$request_uri"; proxy_redirect off; # Cache different return codes for different lengths of time # We cached normal pages for 10 minutes proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; The two commented out try_files in location \ of the mydomain config files are the ones I tried. This error I found in the error log can be found below. ...rewrite or internal redirection cycle while internally redirecting to "/index.php" Thanks in advance
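    For what it's worth, the pattern usually recommended for WordPress permalinks in front of a proxied backend is try_files with a named location rather than a rewrite to /index.php inside the caching block; a sketch, untested against this exact configuration:

      location / {
          # Serve files that exist on disk directly, otherwise hand the request to Apache.
          root /var/www/mydomain.com/prod;
          try_files $uri $uri/ @backend;
      }

      location @backend {
          proxy_pass http://backend;
          include /etc/nginx/conf.d/proxy.conf;
      }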

  • Second Monitor Detected, but not receiving a signal after upgrading to 12.04

    - by user62458
    After I upgraded to 12.04, my second monitor is detected (in display settings), but will not power on. I have scoured the Internet and forums for a solution and I can't find anything. I have found a couple of people with the same problem, but never a solution for it. I am no expert, but I'm certainly not a noob. My computer uses AMD Radeon 6250 graphics, but I do NOT want to use the proprietary graphics drivers. They refuse to work properly with my second monitor (the ATI drivers will only mirror screens, I've done everything to try to fix it, and I DON'T want mirrored screens), not to mention that the default open-source video drivers seem to work much better than the proprietary ones anyway! Again, Ubuntu's default video drivers work fine, and they even DETECT the second monitor (a 19" Dell). I can drag stuff off the screen and put it on the 'space' of the second monitor, and even a screenshot shows that there are two monitors active; but the monitor is OFF. It will not power on. It goes into 'power-save' mode because it is not receiving a signal. For some reason it is not getting the signal to power on, even though Ubuntu thinks the monitor is working properly. I had this working fine on my Sony VAIO yesterday (with Radeon graphics/default Ubuntu video drivers). I upgraded to a Samsung Series 3 and now I have this issue. I can't for the life of me figure out why the monitor is connected, detected, and given screen space, but the screen won't turn on! XRANDR output:
    Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
    VGA-0 connected (normal left inverted right x axis y axis)
       1440x900       59.9 +   75.0
       1280x1024      75.0     60.0
       1152x864       75.0
       1024x768       75.1     70.1     60.0
       832x624        74.6
       800x600        72.2     75.0     60.3     56.2
       640x480        72.8     75.0     66.7     60.0
       720x400        70.1
    LVDS connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
       1366x768       60.1*+
       1280x720       59.9
       1152x768       59.8
       1024x768       59.9
       800x600        59.9
       848x480        59.7
       720x480        59.7
       640x480        59.4
    HDMI-0 disconnected (normal left inverted right x axis y axis)
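    One thing to try from a terminal, using the output names shown in the xrandr listing above (a sketch, not a confirmed fix):

      # Force the external output on at its preferred mode, to the right of the laptop panel.
      xrandr --output VGA-0 --auto --right-of LVDS
      # Or pin an explicit mode and refresh rate:
      xrandr --output VGA-0 --mode 1440x900 --rate 59.9 --right-of LVDS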

  • Where is he now?

    - by Chris G. Williams
    A couple months ago, I announced I was leaving Magenic in order to take a break from consulting. I figured I'd post an update as to what I'm doing now, since I haven't exactly been slacking off.
    1) I accepted a position as a Lead Developer with RealPage. I work on a number of internal use applications for a subsidiary known as LevelOne. The majority of my work is in ASP.NET, a surprising amount of VB.NET, some C# and I'm picking up a few new tools for my belt... specifically Python, MongoDB and Perl.
    2) I am still the owner of Big Robot Games, a retail game store / coffee shop in the South Carolina upstate region. I'm not as involved in the day to day activity as I was, but I'm there most nights and weekends, when I'm not off doing other things, like #3.
    3) I am on the staff of Rock Revolt Magazine as a journalist, covering live performances as well as interviewing bands, providing album & video game reviews, fixing the website and the occasional prison ink. (Just kidding on that last one.)
    4) In whatever time is leftover, I still manage to bang out a little code on Heroic Adventure! (aka HA!) and talk about Windows Phone, XNA and whatever else suits me, wherever they'll let me.
    I guess that's about it.

  • Block a machine from accessing the internet

    - by Simon Rigby
    Just looking for some confirmation that I'm thinking about this the right way. We have a number of wired and wireless machines which presently have direct internet access. I also have a Linux (Ubuntu) server which is used as a file server for the network. Essentially, I would like to be able to turn internet access on and off per machine. My plan is to block these machines by MAC address at the router. I would then set up a proxy server on the Linux box (i.e. Squid) so that the machines I wish to restrict can access the internet via the proxy. As I can adjust access via ACLs in Squid, I would be able to switch a machine's internet access on or off without having to further adjust the router's MAC rules. And of course I could go further and create a few scripts to assist with this admin task. Does this seem sound, and have I overlooked anything? Any help greatly appreciated. Simon.
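    The plan seems sound; as a rough sketch of the Squid side (addresses and subnet are placeholders), toggling a machine is then just a matter of editing one acl line and running "squid -k reconfigure":

      # /etc/squid/squid.conf (excerpt)
      acl office_lan src 192.168.1.0/24
      acl blocked_hosts src 192.168.1.50 192.168.1.51   # machines currently cut off
      http_access deny blocked_hosts
      http_access allow office_lan
      http_access deny all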

  • Yahoo toolbar and local sites (e.g. Intranet)

    - by Klaptrap
    We have local sites running on IIS in a regular MS Windows network. The user base has IE, Firefox and Chrome. Local sites are isolated by host headers, with DNS records created for the common IP accordingly. This is a regular set-up. Users without the Yahoo Toolbar type http://intranet and the site resolves. Users with the Yahoo Toolbar type http://intranet and the toolbar goes off to search for this site in the public domain. This happens irrespective of whether the address is typed into the browser address bar or the toolbar. All versions of the toolbar and IE are affected. I cannot see a setting on the toolbar to switch this "irritating" behaviour off, and simply uninstalling the toolbar is not an option. Any ideas?

  • Why can't I boot from portable HD?

    - by user11239
    I've been trying to get Ubuntu 10.04 LTS 32-bit desktop installed onto a 250GB FreeAgent Go drive from Seagate. I've been able to install onto a USB flash drive and boot successfully from it. I installed Ubuntu onto the flash drive using Universal USB Installer, and this was a total success in terms of getting Ubuntu to run off a flash drive; I was unable to accomplish the same with the portable HDD. I then, following instructions, attempted to install the OS onto the HDD once booted up from the flash drive. After installing the OS on the HDD, the computer would simply not load the OS when the HDD was selected as the boot device. However, as there is no System -> Preferences -> Removable Drives and Media menu, I could not complete that step. Is it vital? How do I do this under Ubuntu 10.04? I have formatted the MBR on the HDD and repeated the above, still with no success. I have also browsed some forums that mention there may be something related to spin-up speeds, but nothing explained the issue or how to solve it in detail, and I'm not familiar enough with system booting to understand if this could be an issue. Basically, what I'm trying to do is get Ubuntu to boot off the HDD. I've attempted several things, and the result is that, after selecting the HDD in the BIOS, the OS never starts booting (even after waiting upwards of ten minutes); I just have a white cursor blinking. I can always get it to boot from the flash drive.

  • Ubuntu on Samsung NP700Z5B - no Grub

    - by copolii
    I just bought a Samsung NP700Z5B laptop. Gorgeous machine and great performance! I do two things when I get a new laptop: 1) format the HD and install Winblows from a CD to ditch the bloatware, and 2) install some variant of Linux on it (lately Ubuntu). Step 1 worked fine (until earlier today), but I haven't been able to install Ubuntu on it for the past 3 days! I've tried Mint 12, Ubuntu 12.04, Ubuntu 11.10, Ubuntu 11.04 and Ubuntu 10.04. The live CD and the installations all run fine and report no problems, but when I reboot, grub is nowhere to be found! The system goes directly to Winblows. I've tried booting from the live CD and re-installing grub via the chroot and purge & reinstall methods (https://help.ubuntu.com/community/Grub2) and neither makes a difference. I've also tried copying the boot sector: dd if=/dev/sda of=linux.bin bs=512 count=1 and putting it on c:, then setting bcdedit to add the entry to the Windows bootloader, with no results. Earlier today I decided to try and set my boot partition as an EFI boot partition ... bad choice, now I don't even have the Winblows boot loader. I've officially run out of ideas. Tried calling Samsung, but they're closed (they'd probably say something stupid along the lines of "Samsung recommends Windows 7" ... I've had Dell say that to me). Any help would be greatly appreciated.
    Update 1: Tried re-installing 12.04 and now I get the screen continuously turning off and back on, but still no sign of booting ... it has been doing it for 15 mins so far (I set the boot partition type to ext2 instead of ext4).
    Update 2: Well ... this just gets better and better. I inserted the installation USB key to reboot it and the flickering stopped for about a minute (the screen remained on), then it started turning off and on again.

  • Anyone have any experience with bargain laptop batteries?

    - by chris
    I've got an oldish D820 that's got a 100% dead battery. I know that I could, in theory, take it apart and replace bad cells in the battery. I'm not really comfortable with doing that. I also know that there are various places that sell replacement batteries for 20% to 80% of the cost that Dell would charge. Does anyone have any experiences with buying more than a couple of these off-brand batteries? If a battery goes boom, it could be really ugly, so I'd rather not risk it, but at the same time, the dell batteries are really expensive... Any opinions on these ebay / off-brand battery vendors? Thanks!

  • Preventing Windows from automatically removing broken desktop shortcuts

    - by hkBattousai
    I have two external hard drives which I'm using for archiving purposes; because of that, they are turned off most of the time. I have some shortcuts on the desktop to directories on these external drives. Windows occasionally removes these desktop shortcuts; it happens when the drives are turned off. It apparently decides the shortcuts are broken and no longer needed, and tries to clean up the desktop. How do I prevent this behavior? (OS version: Windows 7 Ultimate x64 SP1)

  • Dell M4600 with nVidia Quadro 2000M hangs on boot when external monitor is connected

    - by vladeta
    I have a problem with my freshly installed Ubuntu 12.04 LTS on my Dell M4600 (nVidia Quadro 2000M, i7-2860, 16GB RAM, 128GB SSD, Dell/Samsung 750GB HDD, IPS RGB laptop display). When it is connected via DP++ to the external Dell U2311H monitor, it hangs on boot or when waking from suspend. If I detach the DP cable, it boots normally. I have tried all the combinations I could find, such as adding "no splash", "boot=pci", "acpi=off", etc. to grub. I have also set the external monitor as primary in the nVidia X settings and tried deleting the monitor.xml file. There is no change: it hangs each time after grub. It starts to load daemons, then both screens go blank, and then it hangs completely with a beep sound. What I discovered is that if I detach the cable, wait about 2 seconds after grub starts booting, and then physically connect the DP cable while Ubuntu is still booting, everything works normally and I have a picture on my external screen while the laptop screen is off, just as I wanted. Do you know how to solve this issue? Thank you.

  • Stop UAC/Secure Desktop from dimming the screen

    - by Florian
    I like the concepts of UAC and the "Secure Desktop" in Windows 7, but I don't like the dimming of the Secure Desktop to prompt for Admin credentials (or OK button to get clicked). However, dimming goes so far that my monitor regularly goes into PowerSaver mode, which is annoying (as it takes another 10 seconds for it to wake up), and might harm the monitor: two weeks after switching from XP to Windows 7, my 30" monitor stayed black and it had to get replaced. The web is full of tips how to turn off dimming, but that will always also turn off the "Secure Desktop". Is there a way to present the Secure Desktop without dimming? Or with a different visual effect to show that it is the Secure Desktop? EDIT: To clarify, I'm not looking for a way to disable dimming by disabling Secure Desktop (as is done by lowering the UAC level). I want to keep both UAC and Secure Desktop.

  • How do I fix broken installation?

    - by Daniel
    Let me start off by saying I'm dual booting 11.04 and Windows 7 on a Thinkpad T61p. The problem may have arisen when I hit the power button during normal startup. I'm fully aware how stupid this is. I don't know why I did it. I did it. Now, I can't get in to Ubuntu. Windows works fine. But when I try to start Ubuntu normally, it seems to run some checks, and does not start up. Sometimes, I see a black screen, and it tells me that it's running certain checks, and then, [ok]. Like... Battery Check Somethingorother [ok] It'll give me 1-5 of these. And then it just does nothing, and I have to turn it off. When I try to start in safe mode... I tried low graphics mode, and after going through a couple of dialogue boxes, I'm brought right back to the safe mode dialogue box. And if I hit 'resume,' a shell pushes up (still that grey on black "your computer is broken" type shell) and asks me to log in. I do, and try to run unity. It tells me something along the lines of: WARNING no DISPLAY variable set and then sets it to " :0" , which doesn't work. And then I can't do anything, really, and I have to restart. (I don't know how to do this from the command line, so I just hard reset. That command would be helpful). Does anybody have any idea how I can get Ubuntu working right again? FTP is less pleasant in Explorer than it is in Nautilus or w/e it is now.
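    On the side question about restarting from the command line: a clean reboot from that recovery shell is simply

      sudo reboot          # or equivalently: sudo shutdown -r now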

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. I don't mean to undermine real sysadmins, as I know their value, but what I am trying to get at is that someone like me can set up and maintain a cluster of servers (front-end web servers, with some DB servers) using tools like EC2 and Capistrano with the help of Google. This isn't something I would do as a long-term thing, but as a startup, one-man operation, I think I can pull this off until business takes off and I can hire this important role out. With EC2, I get my firewall, so I basically open up port 80 on my public-facing server, which will run haproxy and route requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you think about my perception. My application is a web application that will be running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.
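    As a small sketch of the front-end piece described above (an haproxy config routing port 80 to the Rails nodes; names and private addresses are placeholders):

      # /etc/haproxy/haproxy.cfg (excerpt)
      frontend www
          bind *:80
          default_backend rails_app

      backend rails_app
          balance roundrobin
          server web1 10.0.0.11:80 check
          server web2 10.0.0.12:80 check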

  • How do Unity 12.04/Compiz bindings really work?

    - by Daniel
    There is a bewildering array of places to set bindings, all inconsistent with one another. E.g. in Unity's System Settings, having the Ctrl key highlight the mouse position is an on/off choice. I like the feature, but not on such a prominent key, where I keep activating it accidentally. The keyboard shortcuts allow only one binding per command, where I might like a convenient one on the external keyboard and an emergency alternative for when I'm on the road. Keyboard custom shortcuts has a nice interface, but allows only key bindings; besides, it doesn't seem to work for me. So I activated CCSM Commands. There I have the choice of key, mouse and/or edge bindings, whereas some places in CCSM offer only one or two of these binding possibilities, seemingly at the whim of the programmer. I have not found a way to differentiate a mouse drag from a click. E.g. I want <Super>Mouse1-drag anywhere on a window to move it, while if I don't drag, it should be raise-lower. On the title bar I want the same without needing the <Super> key. Now I find raise-lower only in System Settings, where I can't assign a mouse binding. If therefore in CCSM I fall back to only lower and put move on the same binding, the window already gets lowered on mouse down, and I can then invisibly move it. Very useful! I have <Alt>asciicircum getting in the way of an Emacs binding, with a (to me) useless popup overlay. I can find it nowhere, so I can't turn it off. So how can I go without these frontends until they have matured, and instruct Compiz directly, for example in the way Emacs or Sawfish have keymaps, and separate ones for each context, with inheritance?

  • How do I make the Windows low memory warning less sensitive?

    - by Stephen
    I keep getting this annoying low memory warning/prompt to close games I play. It happens very often and I still have ~6 gigs of ram free. I disabled virtual memory because it was putting stuff on the pagefile when I had 10 gigs free ram so that spiked my disk usage. Is there any way to disable this warning? I have 16GB ram so it shouldn't be an issue. I would prefer to keep pagefiles off because my HD is very loud so it's nice to keep it spun down as much as possible. I don't want to disable it completely. Ideally, I would like it to go off when I have ~2GB left rather than 6, but if this isn't viable, I may just disable it completely.

  • Why does plymouth start so late?

    - by Marky
    It appears that starting with 11.04, Plymouth starts very late in the boot process. Sometimes I only have a split second to see it before it transitions to the login screen. This is the same for 11.10. By comparison, in 10.04 and 10.10 Plymouth starts only a couple of seconds or so after Grub and is very visible throughout the boot process. Is there something that can be done to have Plymouth run earlier? I have experienced this on 3 different machines, and on 2 of these machines I've been running Ubuntu since 10.04, so it's not just my notebook's hardware that is causing this. *On a side note, the boot process is one of the ugliest parts of modern Linux, and Ubuntu is not excluded. After almost a decade (I forget, but was bootsplash the first?), this has still only been partly solved. For a couple of seconds, ugly text is still seen when shutting down. On several occasions, the same ugly text is seen when logging out of a session. It's never as smooth as you want it to be. Splash themes are great, don't get me wrong; it's just the transitions that are way off, and you get glimpses of what's underneath. I'm used to this, but for those new to Ubuntu and coming from Windows, it is a turn-off.* Pardon the rant. :)
