Search Results

Search found 10166 results on 407 pages for 'fcs release'.

  • Shuttle FB51 mobo does not boot with external USB drive attached [closed]

    - by user127236
    I am repurposing an old Alienware desktop as a home media server. The PC is based on the Shuttle FB51 motherboard. The BIOS is a Phoenix Version 6.00 PG, release date 12/16/2002. I have loaded Ubuntu 12.04 LTS on the internal hard drive. I am using a Western Digital WD Elements 1.5 TB USB 2.0 Desktop External Hard Drive for media storage. When the external drive is plugged in and the PC is powered on, it freezes very early in the BIOS self-test, even before it begins the memory test. If I unplug the drive, the self-test proceeds without further problems. I can plug the USB drive back in when the self-test is complete, and Ubuntu will boot and find the external drive normally. I've tried several changes to the BIOS setup without finding a cure for the boot issue. Any assistance gratefully accepted. JGB

    Read the article

  • SSH only works after intentionally failed password

    - by pyraz
    So, I'm having a rather weird problem. I have a server, that when I try to SSH into, immediately closes the connection if I type in the correct password on the first attempt. However, if I purposefully enter a wrong password on the first attempt, and then enter a correct password at the second or third prompt, it successfully logs me into the computer. Similarly, when I try to use public key authentication, I get an immediate closed connection. If, however, I enter a wrong password for my key file, followed by another wrong password once it reverts to password authentication, I can successfully log in as long as I provide the correct password at the second or third prompt. The machine is running Red Hat Enterprise Linux Server release 6.2 (Santiago), and is using LDAP and PAM for authentication. Any ideas on where to start debugging this one? Let me know what config files I need to provide and I'll be happy to do so.
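
    One minimal way to start narrowing this down, assuming root access on the server, is to run a second sshd in debug mode on a spare port and connect to it verbosely while watching the auth log; the port 2222 and user@server below are placeholders, not values from this setup:

        # On the server: run a one-off sshd in debug mode on an unused port
        /usr/sbin/sshd -d -p 2222

        # From a client: connect to that instance with maximum verbosity
        ssh -vvv -p 2222 user@server

        # On the server, in another terminal: watch PAM/sshd messages during the failing first attempt
        tail -f /var/log/secure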

    Read the article

  • Oracle VirtualBox doesn't work on a Kubuntu host since Lucid Lynx 10.04

    - by 13east
    I have a ThinkPad Edge 14 with a Core i3 at 2.4 GHz and 4 GB RAM. I have tried Kubuntu 10.04, 10.10, 11.10, 12.04 and 12.10 (all x64). Both Oracle and OSE VirtualBox only work properly for installing XP and Windows 7 guest systems on Kubuntu 10.04. On every Kubuntu release since, the guest installation gets as far as formatting the virtual drive, freezes at that step, and never even reaches copying files to the hard drive to begin installation. VirtualBox itself keeps responding to commands, though: I can kill the one window with the stuck installation ("Machine" - "Close" - "Power off the machine") and start over without having to force-kill the VirtualBox application. If anyone knows how I can go about addressing this problem, any help you can provide would be very much appreciated. Thank you.

    Read the article

  • Establish direct cable connection between Windows 8 PCs in home network

    - by Marie. P.
    I'm running two PCs, a desktop and a laptop, with Windows 8 Release Preview ("Build 8400"). They are connected to the same router in infrastructure mode and get their Internet access over wireless. Because I synchronize files between the machines often, I want to establish a cable connection that allows direct file transfer without going over the wireless. When I plug in the cable (a normal one, not a crossover), I see "Ethernet - Unidentified network" in "Control Panel\Network and Internet\Network Connections" on both PCs, but transferring a file between them still goes over WiFi via the router. I noticed that when I turn off WiFi on one PC, I can set up a shared Internet connection that works over the Ethernet cable, but since sometimes only one PC is running and sometimes the other, I don't want one machine's Internet access to depend on the other being switched on. I don't have a crossover cable, but since I already connected the PCs successfully (just without both being on the Internet), I'm sure this should also work with a normal Ethernet cable.
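
    A minimal sketch of one way to do this is to give the wired adapters static addresses on their own private subnet, so traffic to that subnet uses the cable while Internet traffic keeps using WiFi; the adapter name "Ethernet" and the 192.168.100.x addresses below are assumptions, not values from this setup:

        :: On the desktop (run in an elevated command prompt; adapter name and subnet are assumptions)
        netsh interface ip set address name="Ethernet" static 192.168.100.1 255.255.255.0

        :: On the laptop
        netsh interface ip set address name="Ethernet" static 192.168.100.2 255.255.255.0

        :: File transfers can then target the cable-side address explicitly, e.g.
        :: \\192.168.100.1\SomeShare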

    Read the article

  • Any way to get back Chrome's Dialog box for cache clearing instead of the new tab?

    - by Stuart P.
    As of today's Chrome release (Tuesday, March 8, 2011) on both Mac and PC, the settings now open in a tab (chrome://settings/advanced). Needless to say, when you're clearing your cache very frequently (Cmd-Shift-Delete on Mac, Ctrl+Shift+Delete on PC), it's quite tedious going back and forth between tabs. The Click&Clean Chrome extension doesn't have a Mac counterpart (plus I like the keyboard much more than the mouse). I've searched and have yet to find a way to get a dialog box instead of the new tab.

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing outside the host. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. Since I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables rule for masquerading, but I added it anyway. Same with IP forwarding. I compiled LXC by hand from the following source: https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz

    Host Operating System Version

        #> cat /etc/redhat-release
        CentOS release 6.5 (Final)
        #> uname -a
        Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    Container Config

        #> cat /usr/local/var/lib/lxc/cn-01/config
        # Template used to create this container: /usr/local/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.container.conf(5)

        # Distribution configuration
        lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64

        # Container specific configuration
        lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs
        lxc.utsname = cn-01

        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0

    LXC default.conf

        #> cat /usr/local/etc/lxc/default.conf
        lxc.network.type = veth
        lxc.network.link = br0
        lxc.network.flags = up

        #> lxc-checkconfig
        Kernel configuration not found at /proc/config.gz; searching...
        Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments
        enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected
        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig

    Network Config (HOST)

        #> cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=dhcp
        ONBOOT=yes

        #> cat /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        TYPE=Ethernet
        IPV6INIT=no
        USERCTL=no
        BRIDGE=br0

        #> cat /etc/networks
        default 0.0.0.0
        loopback 127.0.0.0
        link-local 169.254.0.0

        #> ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link
               valid_lft forever preferred_lft forever
        3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff
        4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
            inet 10.60.70.121/24 brd 10.60.70.255 scope global br0
            inet6 fe80::20c:29ff:fe12:30f2/64 scope link
               valid_lft forever preferred_lft forever
        12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fca1:69ff:feaf:5017/64 scope link
               valid_lft forever preferred_lft forever

        #> brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.000c291230f2       no              eth0
                                                                vethT6BGL2
        pan0            8000.000000000000       no

        #> cat /proc/sys/net/ipv4/ip_forward
        1

        # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014
        *nat
        :PREROUTING ACCEPT [34:6287]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Jul 11 15:11:36 2014

    Network Config (Container)

        #> cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        #> ip a s
        11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::69:fbff:fe42:eed7/64 scope link
               valid_lft forever preferred_lft forever
        13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
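
    A minimal debugging sketch, assuming the bridge and config paths above and LXC 1.0.x config syntax; the 10.60.70.150/24 address and 10.60.70.1 gateway are illustrative guesses, not values from this setup:

        # On the host: check whether the container's DHCP requests actually reach the bridge
        tcpdump -ni br0 port 67 or port 68

        # Fallback: pin a static address on the host subnet by appending to
        # /usr/local/var/lib/lxc/cn-01/config (LXC 1.0.x keys)
        lxc.network.ipv4 = 10.60.70.150/24
        lxc.network.ipv4.gateway = 10.60.70.1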

    Read the article

  • PCs with Wired Connection Keep Losing Internet Connectivity

    - by user717452
    I have AT&T U-verse DSL Internet at our church building. Recently, any computer that is plugged into the network via Ethernet keeps losing its Internet connectivity. The setup runs from the modem to a Netgear wireless router, and from the router to 3 different computers. None of the laptops that use WiFi ever lose Internet, just the wired machines, and I end up having to run ipconfig /release and /renew every day to get it back. One computer runs Windows 7 and one runs Windows XP. Any ideas as to what is going on with our network?
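
    A small sketch of what could be captured on an affected wired PC the next time it drops, before releasing the lease, to tell a DHCP lease problem from a gateway or IP-conflict problem (the gateway address below is only an example):

        :: Note Lease Obtained / Lease Expires and which DHCP server answered
        ipconfig /all
        :: Substitute the default gateway address that ipconfig reports
        ping 192.168.1.254
        :: Look for two different MAC addresses claiming the same IP (a sign of a conflict)
        arp -a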

    Read the article

  • rsync windows to linux permission denied

    - by user64908
    Using the command

        rsync -avzP --delete --omit-dir-times ../../ [email protected]:/var/www/mysite/

    I'm getting

        rsync: mkstemp "/var/www/mysite/.." failed: Permission denied (13)

    If ext is in the www-data group, should I still set all the files to be owned by user www-data? I am trying to publish the files with rsync and then set the permissions using

        sudo chown -R www-data doc
        sudo chgrp -R www-data doc

    but I can't even rsync because of the permission denied. SSH works fine, and so does rsync, except when it tries to overwrite or update some of the files in /var/www.

    Client:
    * Windows 7
    * Cygwin 1.7.16 (GNU bash, version 4.1.10(4)-release (i686-pc-cygwin))
    * rsync version 3.0.9, protocol version 30

    Server:
    * Ubuntu 12.04
    * Apache2
    * Root accounts: [ubuntu, ext]
    * Groups: [www-data]
    * sudo vigr shows www-data:x:33:ubuntu,ext

    I have already configured this: http://stackoverflow.com/questions/2124169/cwrsync-ignores-nontsec-on-windows-7
    This article has also managed to confuse me: http://unix.stackexchange.com/questions/41687/how-should-i-rsync-files-in-var-www-if-i-want-them-to-be-owned-by-www-data
    What is the right procedure?
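
    For what it's worth, one common way to line up the server-side permissions before rsyncing is sketched below; the group-writable scheme is an assumption, while the path and user names come from the setup above:

        # On the Ubuntu server: make www-data the owning group, allow group writes,
        # and make sure the rsync login user (ext) is in that group
        sudo chown -R www-data:www-data /var/www/mysite
        sudo chmod -R g+w /var/www/mysite
        sudo usermod -aG www-data ext

        # Optional: keep the www-data group on newly created directories
        sudo find /var/www/mysite -type d -exec chmod g+s {} +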

    Read the article

  • Windows server response time very high

    - by Nagaraju Bandla
    Server specs:
    * Windows Server 2008 R2, 64-bit
    * Provider: Fasthosts
    * .NET Framework: 4.0
    * 6 GB RAM (4.6 GB in use)

    I have a website with thousands of pages structured like folderone/1/one to 500.aspx, folderone/2/one to 500.aspx, ... folderone/500/one to 500.aspx. Loading these pages for the first time after a release takes about 20 to 30 minutes per folder; once one page in a folder has loaded, the rest of its pages load fine. This happens for every folder, and it repeats every time I restart the server, add anything to App_Code, or change the web.config. My site gets most of its traffic through Google, and because of this problem it is returning errors. Any help will be highly appreciated. I'm happy to buy you a beer if it gets resolved. Thanks in advance...
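
    The symptom (each folder is slow only on the first request after a release or restart) matches ASP.NET batch compilation, so one hedged option is to precompile the site as part of the release instead of letting IIS compile each folder on first hit; the paths and the site name below are placeholders, not values from this server:

        :: Precompile the IIS application in place ("/MySite" is a placeholder virtual path)
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v /MySite

        :: Or precompile to a separate target folder and deploy that output
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -p C:\sites\MySite -v / C:\sites\MySite_precompiled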

    Read the article

  • Windows 8 taking 4+ mins to shut down

    - by arnab321
    I did a fresh installation of Windows 8 64-bit, build 9200 (released on August 16th). I installed the drivers and some basic software like NetBeans, MinGW, IIS and PHP. For the first few times it restarted normally, but then at shutdown it would show the shutdown screen for a few seconds and then turn black for about 4 minutes (similar to what happens during hibernation). I disabled the "fast startup" option in Power Options, but the problem persisted. Windows 7 and Ubuntu shut down normally. Specs: 4 GB RAM, 750 GB SATA HDD. Solved by installing the Windows updates released during October; it was a serious bug in the OS, AFAIK. Now even hibernate takes at most 30 seconds. Still, Windows 8 was too buggy for release.

    Read the article

  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just setup a new VM with Debian Squeeze (latest stable release, 6.0.4). I am going for a webserver, so I installed the usual... apache, php5, mysql, phpmyadmin, etc. Everything went well, everything is working. My question is about upgrading packages. I noticed the phpmyadmin version is 3.3.7... the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not work? Thanks!
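
    The usual Debian route, assuming the newer phpMyAdmin was packaged for squeeze-backports (worth verifying before relying on it), looks roughly like this:

        # Add the squeeze-backports repository
        echo "deb http://backports.debian.org/debian-backports squeeze-backports main" | \
            sudo tee /etc/apt/sources.list.d/squeeze-backports.list
        sudo apt-get update

        # Backports are never installed automatically; request them explicitly
        sudo apt-get -t squeeze-backports install phpmyadmin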

    Read the article

  • What is the best way to duplicate DVDs in bulk?

    - by Axxmasterr
    I have some instructional videos I am getting ready to release on DVD and I want to know what is the quickest and most cost effective way to produce these in bulk? I am open to both customized PC based software/hardware solutions as well as dedicated hardware appliances which perform the same function. All options considered seriously. I don't have a problem building a system for this purpose. If I build something I would prefer it have the ability to make multiple copies at once. I figure I will need to make about 300 copies initially.

    Read the article

  • Managing MS Exchange server-side email rules on Mac OS X?

    - by Doug Harris
    Has anybody found an easy way to manage server-side rules from Mac OS X? Here's a brief list of what I know doesn't work:
    * Entourage 2008 - it supports client rules, but not server rules. No good; there are certain actions that should happen before I open my laptop or check my email on my iPhone.
    * Apple Mail - same as Entourage, but at least I don't get as frustrated since, unlike Entourage, it isn't a Microsoft product.
    * Webmail (aka Outlook Web Access) - perhaps you can manage rules in the fancy version that Exchange serves to IE, but not with the browsers available on a Mac.
    I manage this now by launching a VMware virtual machine running Windows XP and Outlook. I don't count that as an easy way.
    Update, after the release of Office 2011: does MS Outlook 2011 have the ability to manage server-side rules?
    Update, after installing Office 2011: no, Outlook 2011 doesn't have this ability. I've already removed my account from Outlook and switched back to Apple Mail and iCal.

    Read the article

  • Getting some perl script errors on execution of svnnotify

    - by user2474633
    I installed svnnotify 2.84, with Perl 5.10, for Subversion 1.7.11 on Red Hat release 6, and I am using this command in the post-commit hook to get notified:

        svnnotify --repos-path "$1" --revision "$2" --from [email protected] \
        --to-regex-map [email protected]="branches/Test_branch12" \
        --smtp xxxxxx.com HTML::ColorDiff >> /tmp/notify.txt 2>&1

    Once the commit is successful, I can see the error below in the output file:

        Use of uninitialized value $_[0] in exec at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Can't exec "": No such file or directory at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Use of uninitialized value $_[0] in concatenation (.) or string at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Cannot exec : No such file or directory
        Child process exited: 512

    Can anyone help with this?
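
    The trailing bare HTML::ColorDiff looks like it was meant to be the value of svnnotify's --handler option; a hedged rewrite of the hook line, with --with-diff added so the colorized handler has a diff to colorize, would be:

        # post-commit hook sketch: pass HTML::ColorDiff via --handler instead of as a stray argument
        svnnotify --repos-path "$1" --revision "$2" \
          --from [email protected] \
          --to-regex-map [email protected]="branches/Test_branch12" \
          --smtp xxxxxx.com \
          --handler HTML::ColorDiff --with-diff >> /tmp/notify.txt 2>&1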

    Read the article

  • How do I run multiple commands on one line in Powershell?

    - by David
    In cmd prompt, you can run two commands on one line like so: ipconfig /release & ipconfig /renew When I run this command in PowerShell, I get: Ampersand not allowed. The & operator is reserved for future use Does PowerShell have an operator that allows me to quickly produce the equivalent of & in cmd prompt? Any method of running two commands in one line will do. I know that I can make a script, but I'm looking for something a little more off the cuff.
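
    For reference, PowerShell's statement separator is the semicolon, so the cmd one-liner maps over directly; the && and || pipeline-chain operators were only added later, in PowerShell 7:

        # Run both commands in sequence regardless of the first one's result
        ipconfig /release; ipconfig /renew

        # PowerShell 7+ only: run the second command only if the first succeeds
        ipconfig /release && ipconfig /renew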

    Read the article

  • Can I manually add a thumbnail web page to the Chrome homepage?

    - by andygrunt
    Is there a way to manually add a web page to the 8 thumbnails that appear when you open Chrome? I want to add Google Maps for easy access. The Google front page already shows up so I presume the Google Maps page never will as it's a subset of Google (i.e. I get to it via the main Google front page). I know I can add a shortcut to the toolbar but would rather add it as a thumbnail. And while I'm here, is it possible to increase the number of thumbnails that appear? I'm running the release/stable version of Chrome in Windows XP.

    Read the article

  • Duplicating an instance into a new VPC from a Snapshot

    - by Remmus
    We have a group of instances in an Amazon VPC that we use for our live environment. We have a big release coming up and want to test that the deployment will run smoothly. I created a second VPC, created instances of the same size on the same private IPs, then removed their original volumes and attached new volumes created from snapshots of the live environment. Unfortunately, none of the instances will let me connect. They start running fine, but no system log ever appears and I can't connect. The only thing I can think of is that the new instances were created from a new AMI, since the old one is deprecated due to new security fixes. Is this the problem? If so, can I fix it in any way? And if this isn't the problem, does anyone have any ideas how I can fix it?
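
    A small sketch of how the missing console output could be pulled with the AWS CLI (the instance ID and region are placeholders), since whatever the kernel prints during the failed boot usually points at an AMI/volume mismatch:

        # Fetch whatever the instance wrote to its console during boot
        aws ec2 get-console-output --instance-id i-0123456789abcdef0 --region us-east-1 --output text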

    Read the article

  • Thunderbird: filters don't match links

    - by Gregory MOUSSAT
    I use filters to remove some undesirable messages (in addition to the integrated spam filter). This is great for avoiding the tons of boring people who want to sell me tons of boring stuff. My problem is that for years (so with every Thunderbird release I've ever had, including the current, up-to-date one) it has been unable to filter on links. For example, I want to delete every message containing a link to http://xxxxx.emv3.com/xxxxxx, but I have never managed to remove those emails. I use a filter on the body, checking whether it contains emv3, but it never matches. Those emails are in HTML format, and the links are displayed as text like "Visit our website" or so. If I write an HTML email with a link myself, my filter works; when it's spam, it never does. When I save such an email to a text file and open it in Notepad, I see several http://xxxxx.emv3.com/xxxx links. Any idea why this doesn't work, and what I can do about it?

    Read the article

  • When should I upgrade to Ubuntu 10.04 (Lucid Lynx)? [closed]

    - by Emyr
    I'm a web developer for a small non-IT firm. When 9.10 came out, I was using it with no adverse effects from about a month before release (iirc, first beta), initially as an upgrade but as a clean install later to ensure my system would be consistent with most other 9.10 systems. The last alpha of 10.04 came out last week, with another 2 weeks before beta. I'm quite eager to do it today, but obviously the usual "not for production systems" notice is still in place. When should I upgrade? Do I need to worry about software installed from source? (./configure, make, make install etc) Is the attraction of a non-brown theme really this tempting for you?

    Read the article

  • Looking for a 'WinHlp32.exe compatible' replacement for free redistribution under Vista and Windows 7

    - by richardboon
    Our software installs a package of legacy software for the client; some of it includes old .hlp files from third-party vendors that require WinHlp32.exe (note: we have no legal right to modify the .hlp files). Those clients may only have CD/DVD and might not have Internet access, etc. So I need a free, WinHlp32.exe-compatible replacement that we can redistribute, for use under Vista and Windows 7.
    Background of the problem:
    - Microsoft stopped including the 32-bit Help file viewer in Windows releases beginning with Windows Vista and Windows Server 2008.
    - Starting with the release of Windows Vista and Windows Server 2008, third-party software developers are no longer authorized to redistribute WinHlp32.exe with their programs.
    http://support.microsoft.com/kb/917607

    Read the article

  • Can DVI look different?

    - by queueoverflow
    I have a desktop with an nVidia GT 9500, which has two DVI ports. I have a 24" TN-panel TFT on it, and it looks pretty good. Then I hooked the monitor up to the DisplayPort of my ThinkPad (Intel HD 3000 graphics) with a DP-to-DVI adapter, and I think the fonts look even clearer. Both computers run Kubuntu 12.04, although the ThinkPad is freshly set up while the desktop was migrated from the previous release. Is this a hardware thing, or is it just some setting in the anti-aliasing or sub-pixel hinting?

    Read the article

  • What is the best Linux distro for a php web server? [on hold]

    - by benjisail
    We are planning to upgrade our hardware and, at the same time, to reinstall our web server from a fresh OS. Currently our web server runs CentOS 4.7 on a dedicated server, with Apache, MySQL, PHP, SVN, FTP and all the other tools a web server needs, managed through SSH. We plan to use a cloud server for the new web server. I don't know which Linux distro to pick for it. Should I stay with CentOS and just take the latest release, 5.4, or should I switch to something else like a Debian-based distro (Ubuntu Server)? The thing I didn't like about CentOS was that the latest versions of PHP and Apache were not available through yum, which makes it harder to keep our web server updated with the latest technologies... Thanks for your help!

    Read the article

  • Features and components used in Google Chrome taken from Firefox

    - by tobylane
    20 Things I Learned About Browsers and the Web says, in the following passage, that Chrome has taken things from Firefox: Open-source software plays a big role in many parts of the web, including today’s web browsers. The release of the open-source browser Mozilla Firefox paved the way for many exciting new browser innovations. Google Chrome was built with some components from Mozilla Firefox and with the open-source rendering engine WebKit, among others. In the same spirit, the code for Chrome was made open source so that the global web community could use Chrome’s innovations in their own products, or even improve on the original Chrome source code. Does anyone know what those components are?

    Read the article

  • Updating solr on a ColdFusion 9 install?

    - by Jordan Reiter
    I'm thinking about upgrading the solr install included with ColdFusion 9 to the latest Apache release. This raises a few questions: Is there a compelling reason not to upgrade to 3.6 (is it slower than, more cumbersome than, or backwards-incompatible with 1.4) altogether? The solr install included with CF9 is customized. Is there a way to customize it myself, or to at least fool CF into treating it like its predecessor? Will all of my existing indexes work as-is (are?) with the new version? Has anyone out there on ServerFault done the upgrade? I'm especially interested in hearing about unforeseen or unexpected effects from the upgrade.

    Read the article

  • How do I find out when and by whom a particular user was deleted in linux?

    - by executor21
    I recently ran into a very odd occurrence on one system I'm using. For no apparent reason, my user account was deleted, although the home directory is still there. I have root access, so I can restore the account, but first I want to know how this happened, and exactly when. Inspecting root's .bash_history file and the output of the "last" command gave nothing, and I am (well, was) the only sudoer on the system. How can I find out when this deletion happened? The distro is CentOS release 5.4 (Final), if that helps.
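
    A sketch of where such a deletion usually leaves traces on CentOS 5, assuming userdel was used and the relevant logs have not rotated away:

        # userdel logs through syslog; on CentOS this ends up in /var/log/secure
        grep -i userdel /var/log/secure*

        # If auditd is running, the audit trail records which account issued the deletion
        ausearch -m DEL_USER -i

        # Shell histories of other accounts, in case the command was run interactively
        grep -l userdel /root/.bash_history /home/*/.bash_history 2>/dev/null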

    Read the article
