Search Results

Search found 2486 results on 100 pages for 'canary channel'.

Page 36 of 100

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core-i7 860 system, i.e., a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366 boards, where 6 banks / triple-channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also, the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the eshop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time: Visual Studio or Eclipse (both should be fine with ~2GB RAM; after that I feel their performance is I/O bound), Firefox (it can never have enough RAM, can it? :)), Office (~500MB RAM should be enough), and then some smaller apps like Skype, other browsers, some background services, etc.

  • Sshfs is not working

    - by Devrim
    Hi, when I run sshpass -p 'mypass' sshfs 'root'@'68.19.40.16':/ '/dir' -o StrictHostKeyChecking=no,debug it successfully mounts, but it runs in the foreground. When I run it without the 'debug' parameter, it doesn't mount at all. The server is Ubuntu 8.04. Any ideas why? UPDATE: When I run the command as ROOT it does mount. It doesn't work with other users. Here is the output of an unsuccessful mount: $ sshpass -p 'pass' sshfs 'root'@'68.1.1.1':/ '/s6' -o StrictHostKeyChecking=no,sshfs_debug,loglevel=debug debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 68.1.1.1 [68.1.1.1] port 22. debug1: Connection established. debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa type -1 debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY Warning: Permanently added '68.1.1.1' (RSA) to the list of known hosts. debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa debug1: Next authentication method: password debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_GB.UTF-8 debug1: Sending subsystem: sftp Server version: 3 debug1: channel 0: free: client-session, nchannels 1 debug1: fd 0 clearing O_NONBLOCK debug1: Killed by signal 1.
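
    Since the mount succeeds as root but fails for ordinary users, one thing worth ruling out before blaming SSH is FUSE permissions: on Ubuntu 8.04 a user normally has to belong to the fuse group to run fusermount. A minimal check along these lines (group name and paths are typical defaults, not taken from the question):

      # Sketch: verify the non-root user may actually use FUSE.
      id | grep -qw fuse || echo "this user is not in the fuse group"
      ls -l /dev/fuse "$(which fusermount)"       # both must be usable by group 'fuse'
      sudo adduser "$USER" fuse                   # takes effect after logging out and back in
      grep user_allow_other /etc/fuse.conf        # only needed if mounting with -o allow_other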

  • How to configure a Router (TL-WR1043ND) to work in WDS mode?

    - by LanceBaynes
    I have a WRT160NL router (192.168.1.0/24 - OpenWrt 10.04) as the AP. It's: - WAN port: connected to the ISP - WLAN: working as an AP, using 64-bit WEP, SSID "MYWORKINGSSID", channel 5, password "MYPASSWORDHERE" - Its IP address is 192.168.1.1 Ok! It's working great! But: I have a TL-WR1043ND router that I want to configure as a "WDS". (My purpose is to extend the wireless range of the original WRT160NL.) Here is how I configure the TL-WR1043ND: 1) I enable WDS bridging. 2) In the "Survey" I select my already working network. 3) I set up the encryption (exactly the same as on the already working one) 4) I choose channel 5 5) I type the SSID 6) I disable the DHCP server on it. After I reboot the router and connect to this router (TL-WR1043ND) over wireless, I try to ping google.com. From the ping I see that I can reach this router, which is ok, but it seems that this router can't connect to the original one, the WRT160NL (so I don't get a ping reply from Google). The encryption settings/password are correct; I checked them many, many times. What could be the problem? I'm thinking it could be a routing problem, but what should I add to the "Static Routing" menu? I tried changing the IP address of the TL-WR1043ND to 192.168.1.2, so if this is a routing issue then I should add a static routing rule that says: if destination is any, forward the packet to 192.168.1.1. p.s.: I updated the firmware to the latest version. It's still the same. p.s.2: The HW version of the TL-WR1043ND is 1.8. p.s.3: Could the problem be that I'm using different routers? (If I bought another TL-WR1043ND and used it instead of the WRT160NL, with the stock firmware rather than OpenWrt, would it work? Is "WDS" implemented differently on different routers?) p.s.4: I will try to check the router logs at night and paste them here! :\
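
    A WDS bridge keeps both routers and all clients in the same 192.168.1.0/24 subnet and joins them at layer 2, so no entries in the "Static Routing" menu should be needed. A quick way to see where the path breaks, run from a wireless client associated to the TL-WR1043ND (a sketch; it assumes the repeater is at 192.168.1.2 and the client has a static address with gateway 192.168.1.1, since DHCP is disabled on the repeater):

      ping -c 3 192.168.1.2      # the TL-WR1043ND itself
      ping -c 3 192.168.1.1      # the WRT160NL across the WDS link
      ping -c 3 8.8.8.8          # the internet by IP address (rules out DNS)
      ping -c 3 google.com       # finally, name resolution

    If the second ping already fails, the WDS link itself is the problem (encryption type, channel, or WDS incompatibility between vendors), not routing.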

  • HP Storageworks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set a backup to tape of a machine using flexbackup. However any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error" The machine seems to recognise the drive (HP Storageworks Ultrium 448) and that there's a tape in it and "mt status" seems to work... "mt -f /dev/st0 rewind" or "erase" throw no errors... root@stor001:/# mt status SCSI 2 tape drive: File number=0, block number=0, partition=0. Tape block size 0 bytes. Density code 0x42 (LTO-2). Soft error count since last status=0 General status bits on (41010000): BOT ONLINE IM_REP_EN root@stor001:/# cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02 Type: CD-ROM ANSI SCSI revision: 05 Host: scsi2 Channel: 00 Id: 03 Lun: 00 Vendor: HP Model: Ultrium 2-SCSI Rev: S65D Type: Sequential-Access ANSI SCSI revision: 03 "tell" does however root@stor001:/# mt -f /dev/st0 tell /dev/st0: Input/output error Based on a forum post I found, I tried: root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
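
    Since a raw dd with a fixed block size writes fine while tar/flexbackup fail, a block-size mismatch between the drive and the archiver is one plausible culprit. A sketch of forcing a fixed block size end to end (64 KB is an assumption that commonly works with LTO-2; verify the values for this drive before relying on them):

      mt -f /dev/nst0 rewind
      mt -f /dev/nst0 setblk 65536       # or 'setblk 0' to return to variable-block mode
      tar -b 128 -cvf /dev/nst0 /etc     # 128 x 512-byte records = 64 KB per write
      mt -f /dev/nst0 rewind
      tar -b 128 -tvf /dev/nst0          # read the listing back as a sanity check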

  • How to kill unkillable Python-processes running as root

    - by Andrei
    I am experiencing an annoying problem with sshuttle running on 10.7.3 (MBA with the latest firmware update) -- after I stop it (ctrl+c twice), or lose the connection, or close the lid, I cannot restore it until I restart the system. Restarting takes notably more time than it normally would. I have tried flushing the ipfw rules - it doesn't help. Could you advise me how to restore the sshuttle connection (without restarting the OS)? The following processes remain running as root, which I do not know how to kill (tried sudo kill -9 <pid> with no luck): root 14464 python ./main.py python -v -v --firewall 12296 12296 root 14396 python ./main.py python -v -v --firewall 12297 12297 root 14306 python ./main.py python -v -v --firewall 12298 12298 root 3678 python ./main.py python -v -v --firewall 12299 12299 root 2263 python ./main.py python -v -v --firewall 12300 12300 The command I use to run the proxy: ./sshuttle --dns -r [email protected] 10.0.0.0/8 -vv The last message I get trying to restore the connection: ... firewall manager: starting transproxy. s: Ready: 1 r=[4] w=[] x=[] s: < channel=0 cmd=PING len=7 s: > channel=0 cmd=PONG len=7 (fullness=554) s: mux wrote: 15/15 s: Waiting: 1 r=[4] w=[] x=[] (fullness=561/0) >> ipfw -q add 12300 check-state ip from any to any >> ipfw -q add 12300 skipto 12301 tcp from any to 127.0.0.0/8 >> ipfw -q add 12300 fwd 127.0.0.1,12300 tcp from any to 10.0.0.0/8 not ipttl 42 keep-state setup >> ipfw -q add 12300 divert 12300 udp from any to 10.0.1.1/32 53 not ipttl 42 >> ipfw -q add 12300 divert 12300 udp from any 12300 to any not ipttl 42 Update: $ ps -ax|grep python 1611 ?? 0:06.49 python ./main.py python -v -v --firewall 12300 12300 48844 ?? 0:00.05 python ./main.py python -v -v --firewall 12299 12299 49538 ttys000 0:00.00 grep python
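
    When kill -9 has no effect, the firewall helpers are usually stuck waiting on their divert sockets or in an uninterruptible state, and the leftover ipfw rules then block the next sshuttle instance. A cleanup sketch (the rule numbers 12296-12300 are taken from the ps output above; adjust them to whatever your instance used):

      ps -axo pid,stat,command | grep '[m]ain.py'    # a 'U' in STAT means an uninterruptible wait
      sudo ipfw show | egrep '1229[6-9]|12300'       # which sshuttle rule sets are still installed
      for n in 12296 12297 12298 12299 12300; do sudo ipfw -q delete $n; done
      sudo ipfw -q -f flush                          # last resort: clears every rule, not just sshuttle's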

  • Installing SATA dvd burner on machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard: Tyan Thunder K8WE S2895A2NRF - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio. This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA: integrated SATA II Generation 1 controllers (from the nForce Professional 2200); two integrated dual-port SATA II controllers; four SATA connectors supporting up to four drives; 3 Gb/s per direction per channel; NvRAID v2.0 support; supports RAID 0, 1, 0+1 and JBOD. I just purchased a SATA DVD burner. Here is the page for the product: http://www.amazon.ca/gp/product/B002QGDWLK/ The problem I am facing is that I already have 4 SATA drives installed. I don't want to remove any of them. However, I want the DVD burner above installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options, if any? Even if the statements in the last paragraph are true, I suppose I could use one of the motherboard ports that is currently being used by a hard drive for the DVD drive, and connect that hard drive to the add-on card instead. Not all four hard drives need to be bootable. BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am fuzzy on the differences between connectors, cables and ports. Thanks in advance.

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (this is an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where to date the best I have ever gotten is 1Mb/s. It is literally quicker to copy the file to a USB stick and walk it to the other computer. The infrastructure is this: an AP broadcasting 802.11n only, at both 2.4GHz and 5GHz; a Mac with an 802.11a/b/g/n card connected to the AP via 5GHz; a Linux box with an 802.11a/b/g/n card connected to the AP via 2.4GHz. I have conducted the following tests (results at the end of the post): internet-based speed test, wired and wireless; LAN file copy, wired and wireless. I have read: http://nutsaboutnets.com/troubleshooting-wi-fi-problems/ http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html "Slow file transfer on network between two 802.11n laptops (connected directly together via access point)", "Wireless Network Performance Issues", "Slower than expected 802.11n wireless network speeds". I have made the following optimizations: the AP broadcasts only 802.11n on both the 2.4GHz and 5GHz frequencies; 2.4GHz is on the channel with the least interference (I live in an apartment with lots of APs), which did make a 10Mb/sec improvement; our AP is the only one transmitting on the 5GHz frequency; security is WPA Personal with WPA2 AES encryption; bandwidth is 20MHz / 40MHz (I assume this to be channel bonding). I have tried the following with no improvement: dropped the Fragment Threshold to 512; dropped the Request To Send (RTS) Threshold to 512 and then 1. I even thought of buying a frequency spectrum analyzer, until I saw the cost of them! Speed test results: Linux wired: DOWNLOAD 128.40Mb/s, UPLOAD 10.62Mb/s (www.speedtest.net/my-result/2948381853); Mac wired: DOWNLOAD 118.02Mb/s, UPLOAD 10.56Mb/s (www.speedtest.net/my-result/2948384406); Linux wireless: DOWNLOAD 23.99Mb/s, UPLOAD 10.31Mb/s (www.speedtest.net/my-result/2948394990); Mac wireless: DOWNLOAD 22.55Mb/s, UPLOAD 10.36Mb/s (www.speedtest.net/my-result/2948396489). LAN NFS copy of a 53,345,087-byte (51MB) file: Linux-Mac NFS wired: 65.6959 Mb/sec; Linux-Mac NFS wireless: .9443 Mb/sec. All help is appreciated; even testing methods will be accepted.
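
    One useful next step is to measure raw TCP throughput over the WLAN with iperf, which takes both NFS and the internet link out of the picture (a sketch; iperf must be installed on both hosts, and 10.0.1.2 stands in for the Linux box's address):

      # On the Linux machine:
      iperf -s
      # On the Mac:
      iperf -c 10.0.1.2 -t 30 -i 5        # 30-second test, report every 5 seconds
      iperf -c 10.0.1.2 -t 30 -i 5 -P 4   # repeat with 4 parallel streams

    If iperf also tops out around 1Mb/s, the problem is in the radio link or drivers rather than in NFS; if iperf is fast, look at the NFS mount options instead.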

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it...and it seems to run it pretty well. Here are the specs for the two machines: Dev Machine: AMD Athlon64 3000+ 2.38 GHz (overclocked from 1.8GHz [@ 280x8.5] - it is stable-ish) Memory (RAM): 1x1GB OCZ PC3200 (Dual-Channel) 300GB HD OS: Windows XP Pro (32bit) SuperPi 1M digit test: 40 seconds Dell PowerEdge 2600 Server: Intel Xeon CPU 2.8GHz 2.8GHz Memory (RAM): 512MBx2 (PC2700, not dual channel) 68GB HD (RAID 5) OS: Windows Server 2000 (32bit) SuperPi 1M digit test: 56 seconds [using 1 processor] (Themes and Aero-Flass UI turned off, of course) I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the Server machine...though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive on the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful way for me to take advantage of this server (with the goal of faster computing)?

  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers, I am getting warnings in the eLOM about "correctable ECC errors detected", eg: # ssh regress11 ipmitool sel elist 1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted 2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, although with far more frequency than the eLOM is recording ECC events: EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic) MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac MC0: CE - no information available: k8_edac Error Overflow set EDAC k8 MC0: extended error code: ECC chipkill x4 error EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic) MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac MC0: CE - no information available: k8_edac Error Overflow set EDAC k8 MC0: extended error code: ECC chipkill x4 error Now if the server is detecting Uncorrectable ECC, the system resets, so clearly that's bad and removing/replacing the identified stick or pair corrects the issue. But I am thinking that if the error is Correctable, then there's no immediate issue -- I can treat this as a warning and be prepared to pull the stick/pair if an uncorrectable error starts occurring?
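
    Correctable errors are, by definition, being fixed by the ECC logic, so the usual practice is to treat them as a trend to watch rather than an emergency: a steady or rising rate on one DIMM is the signal to swap that stick or pair at the next convenient window, before it graduates to uncorrectable errors. The kernel's EDAC counters can be read per csrow/channel to see which module is accumulating them (a sketch; the exact sysfs layout varies a little between kernel versions):

      grep . /sys/devices/system/edac/mc/mc*/csrow*/ce_count
      grep . /sys/devices/system/edac/mc/mc*/csrow*/ch?_ce_count
      # Zero the counters, then compare again after a day to get a rate instead of a total:
      echo 1 | sudo tee /sys/devices/system/edac/mc/mc0/reset_counters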

  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates in a development environment using IIS 6 on W2k3. I have a directory called login with a single page, login.asp, which I would like to be viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects, etc., and all is good. However, Basic Authentication is only Base64 encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it and added it to IIS. IIS is happy that it is trusted. I have set the login directory and its child files to "Require SSL channel". When I refresh my browser on login/login.asp I get a "404: Page Not Found" in IE 8. So there are 2 issues here: (1) the page is now unviewable when using HTTPS, and users must manually type the HTTPS (a minor inconvenience for now); (2) if I turn off "Require SSL Channel" in IIS, it works again. What part of the process am I missing? I have followed several tutorials on installing SSL certificates, but still come across this barrier.

  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer...but that is not what I am looking for.

  • How to calculate RAM value on performance per dollar spent

    - by Stucko
    Hi, I'm trying to make decisions on buying a new PC. I have most specifications (processor/graphics card/hard disk) pinned down except for RAM. I am wondering what the best RAM configuration is for the amount of money I'm spending. As the question of "best" is subjective, I'd like to know how I would calculate the value of the RAM sticks sold. 1. (sample) The value of the amount of memory: 1) CORSAIR PC1333 D3 2GB = costs $80 2) CORSAIR PC1333 D3 4GB = costs $190 Would it be better to buy 2 of item 1) instead of 1 of item 2)? Although I would normally choose 1 of item 2), as the difference is only (190-(80*2)) = $30 and I would save 1 DIMM slot, what I need is the value per amount: 1) 80 / 2 = $40 per 1GB 2) 190 / 4 = $47.5 per 1GB 2. The value of frequency: 1) CORSAIR PC1333 4GB = costs 190 2) CORSAIR PC1600C7 4GB = costs 325 I'm not even sure of the denominator ... $ per 1 GHz of speed? 3. The value of latency: 1) CORSAIR CMP1600C8 8-8-8-24 2GBx3 (triple channel) = costs 589 2) CORSAIR CMP1600C7D 7-7-7-20 2GBx3 (triple channel) = costs 880 I'm not even sure of the denominator here either ... $ per 1 GHz of speed? Just for your information, I'd like to get the best out of the money I'm going to spend to put into a 6-DIMM-slot Core i7 motherboard.
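
    For point 1 the arithmetic is just price divided by capacity, and a tiny shell helper makes it easy to compare any kit on the same basis (a sketch; the prices are the ones quoted above):

      price_per_gb() { echo "scale=2; $1 / $2" | bc; }   # arguments: price, capacity in GB
      price_per_gb 80 2     # CORSAIR PC1333 D3 2GB  -> 40.00
      price_per_gb 190 4    # CORSAIR PC1333 D3 4GB  -> 47.50

    The same helper works for whatever denominator you settle on for points 2 and 3 (for example dollars per MHz, or dollars per MHz divided by CAS latency); the hard part is choosing a denominator that reflects real performance, not doing the division.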

  • Hard drive recognized by BIOS but not by Windows

    - by tehgeekmeister
    I'm adding a new hard drive (A seagate ST31000340NS; I had links in here but I don't have enough reputation to post them. Interestingly, the bios recognizes it as a ST31000340AS, but it was bought as the other number...) to a friend's hp pavilion d4650e (mobo specs; google the model if you want the rest of the info, can't do more than one link.). Have had a hell of a time with it. Finally figured out that the hard drive needed a jumper set to limit the speed to 1.5gbps so the mobo would recognize it, and the bios DOES recognize it now. But not windows (using windows 7), using add new hardware or diskmgmt.msc. According to my friend, who was at the computer when it first booted after adding the jumper, a new hardware found dealio popped up saying something about raid, but I can't provide more info then that since I didn't see it. Ubuntu livecd recognized the drive before we changed the jumper. Haven't checked since then. XP didn't recognize it, that's the OS we started with. Upgraded to 7 hoping it might fix the problem. The only other info I can think of that might be immediately relevant is that the drive is plugged into the fifth sata channel, and the first channel is empty. Is this a problem? I assume not, because the two other drives (in a raid 0) and the cd and dvd drives are also on channels past the first one, and are recognized. Ask questions and I'll update with info!

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works: ssh -X Z xclock ssh -X Z ssh -X Z xclock ssh -X Z ssh -X A xclock ssh -X N xclock ssh -X N ssh -X N xclock But this does not: ssh -X N ssh -X M xclock Error: Can't open display: The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share same NFS-homedir. N and M share the same NFS-homedir. N's sshd runs on a non standard port. $ grep X11 <(ssh Z cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes $ grep X11 <(ssh N cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work: terminal1$ ssh -L 8888:M:22 N terminal2$ ssh -X -p 8888 localhost xclock Error: Can't open display: A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say: debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null debug1: Requesting X11 forwarding with authentication spoofing. debug2: channel 0: request x11-req confirm 0 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
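
    Given that xauth never runs and DISPLAY stays unset only on M, the first things worth checking are M's own sshd configuration and login environment (a sketch; the option names are standard OpenSSH, but verify them against the sshd that actually answers on M):

      # From N, see what M's side reports:
      ssh -X M 'echo "DISPLAY=$DISPLAY"; which xauth; xauth list'
      # On M itself:
      sudo grep -Ei 'x11forwarding|x11uselocalhost|xauthlocation' /etc/ssh/sshd_config

    X11Forwarding must be set to yes on M (it defaults to no), and if XAuthLocation is set it must point at a real xauth binary; it is also worth making sure nothing in M's shell startup files unsets DISPLAY or exits early for non-interactive shells.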

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In Windows 7 Control Panel - Sound - Sound Properties window there's an slider for setting CD Audio volume: And it's pretty strange that I can't find corresponding one in generic Linux mixers: alsamixer or amixer. I connected a CD drive to try to set CD audio volume with cdcd (CD Player): $ cdcd setvol 0 Invalid volume It isn't actually an invalid volume, it is because ioctl() call fails. I found that out after searching and changing a bit the source code of this utility (in the libcdaudio): --- cdaudio.c.orig 2004-09-09 06:26:20.000000000 +0600 +++ cdaudio.c 2012-05-30 21:34:34.167915521 +0600 @@ -578,8 +578,10 @@ cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME; #endif - if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) - return -1; + if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) { + printf("*** cd_set_volume: ioctl() returned error\n"); + return -1; + } return 0; } By the way cdcd's get volume command yields rather weird output: Left Right Front 1281734864 32767 Back 0 0 Also I tried aumix: $ aumix -c 0 But all with no success. I read from this manual — http://tldp.org/HOWTO/Alsa-sound-6.html (section 6.2 The mixer) that CD channel can present in amixer output. Maybe some drivers for sound card are missing in my Ubuntu 12.04 LTS installation. Though I don't think it's the case: $ lsmod | grep snd snd_mixer_oss 22602 0 snd_hda_codec_hdmi 32474 1 snd_hda_codec_realtek 223867 1 snd_hda_intel 33773 4 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep ,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm All I need is just mute or set to 0 volume level of CD Audio channel, like I did in Windows 7, to get rid of sibilant noise in the speakers.
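
    Whether a separate "CD" control exists at all depends on the codec and driver; on most HDA codecs there is no analog CD passthrough and the audio arrives as ordinary digital playback instead. A quick way to check what ALSA actually exposes, and to mute the control if it is there (a sketch; the control name "CD" is an assumption):

      amixer scontrols | grep -i cd      # is there a CD simple control at all?
      amixer sset CD mute                # if so, mute it (or: amixer sset CD 0%)
      alsamixer                          # F3/F4/F5 switch between playback/capture/all views

    If no CD control shows up, the noise is probably not coming from an analog CD channel, and muting would have to happen wherever the player or codec routes its output.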

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer...but that is not what I am looking for. EDIT: So, after some research, the following installed options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open source that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

  • Motherboard running rather hot while gaming

    - by I take Drukqs
    Case: Antec 1200 Mobo: Gigabyte GA-X58A-UD3R CPU: Intel i7 950 (stock cooler) GPU: EVGA GeForce 570 GTX RAM: 2x 2 GB (4 GB total) DDR3 dual-channel Corsair OS: Windows 7 Home Premium 64-bit This is my first build and it's brand new. I had no problems putting it all together in a few hours one evening and I consider myself to be pretty good with computers. Not to brag or anything like that! Just saying I've been fiddling with them since I was in diapers and I have a good amount of experience under my belt, just not with certain things yet. Recently while playing many of the latest games maxed out without a hitch my motherboard has been running hot and like anyone who's ever built a computer it scares the life out of me. I checked HWMonitor and saw that my motherboard sometimes reached temperatures of around 52 - 78c (the number 78 obviously being what's scaring me). I was wondering if such a temperature is normal and if not what the problem could be. Air flow in my case is phenomenal and besides having to ship back a faulty GPU and reseat my CPU my first build has been a very large success which I am enjoying tremendously. There is literally almost no dust in my case due to it being very new as previously mentioned and my RAM sticks are in the correct slots for dual-channel mode. My cable management is pretty great in my opinion with only cables from my PSU lingering in the bottom of the case. At any given opportunity I ran my cables behind my mobo. Air flow should definitely not be a problem because my CPU only goes up to about 60c and my GPU only goes up to about 80c. Thank you very much in advance.

  • Will a 2.4GHz WAP interfere with a 5.0GHz WAP if placed directly next to each other?

    - by Dan
    This is mostly a curiosity question to people who know more about radio and wi-fi than I. The 2.4Ghz band is massively overpopulated near my house to the point of sometimes getting 1000ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSID's within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary as the others use Auto Channel and hop around. Now, the router I have is capable of 5.0Ghz, as is the laptop I type this on. Switching to 5.0Ghz gives superb results: I can download at ~90Mbps and get consistent 1ms pings. The problem is that only this laptop supports 5.0Ghz! My question: Would I still get decent 5.0Ghz performance if I place a 2.4Ghz access point directly next to my router? And, indeed, will 2.4Ghz continue working as 'normal'? Testing would be an obvious step, but I threw all my superfluous equipment out in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I do believe there will be some impact by the virtue of them being directly next to each other. (Cabling is not an option due to it being a rented house)

  • Cannot Access Local Network Shares (Strange Schannel and lsass.exe issues)

    - by Fake
    When I browse to my own computer's shares by going to \\MYCOMPUTERNAME\, I cannot access any of the shares on my LOCAL machine (nor can I access them remotely), and it generates about 40 of the following errors in my system event log: The following fatal alert was generated: 10. The internal error state is 1203. Details: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Schannel" Guid="{1F678132-5938-4686-9FDC-C8FF68F15C85}" /> <EventID>36888</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2011-04-05T13:52:09.144278900Z" /> <EventRecordID>79628</EventRecordID> <Correlation /> <Execution ProcessID="552" ThreadID="672" /> <Channel>System</Channel> <Computer>DEVELOP4.CONTOSO.COM</Computer> <Security UserID="S-1-5-18" /> </System> <EventData> <Data Name="AlertDesc">10</Data> <Data Name="ErrorState">1203</Data> </EventData> </Event> Additional information: the process that is generating the error is lsass.exe; OS: Windows 7 Professional x64; joined to a domain: yes. I was able to access the shares locally in the past, and I am having the same issue on 3 other computers that have similar configurations. Any help would be greatly appreciated, because I have no idea what's wrong. Thanks!

  • Cisco Access switch is dropping large amount of end points

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping off lots of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch which could in and of itself be a problem but I'm looking to see what else you all would look for in trying to find a root cause. Switch is connected via ports 1/1 and 1/2 in an etherchannel to a VSS core 1/1/42 and 2/1/42. Both sides are up and working however the CPU on the switch will spike up to 99% and that's when CRC errors start to hit the VSS core on one of those interfaces and end points start dropping off. We tried new transceivers and SFP's on each side of the link, same result. When we tried swapping the fiber patch cables on the access switch the CRC errors did not follow the fiber cables they stayed with port 1/2 on the access switch. So port 1/2 on the supervisor module looks like the culprit. We actually tried to create a new member of the ethernet channel by taking a fiber media converter to cat5 and make that a member of the port-channel but when we plugged it in you couldn't even reach the switch. I'm guessing that's unrelated and a problem with the media converter. As of right now we have left it in a state of only one fiber cable running to one side of the VSS core (1/1 Access Switch -- 2/1/42). I've sent some info into TAC and they are looking into the situation but does anyone else have any commands I could run or some troubleshooting I could look into in the meantime?

  • Netgear router keeps disconnecting iPhone

    - by DisgruntledGoat
    My old router (Voyager 2091) packed up so I just got a new router - a Netgear N150 model DGN1000. My laptop connects OK wirelessly, but my iPhone 4S is constantly getting "disconnected" - it has perfect wifi signal and is seemingly connected to the router, but no pages load (it says "server cannot be found"). If I disconnect manually ("forget this network") then reconnect, it works fine again for a random amount of time (usually 10-30 minutes) then I get the same problem again. I've done some searching and this appears to be a known problem - there are dozens of forum posts out there lamenting similar connection problems. The only advice I have seen is to set a specific channel under Wireless Settings on the router CP, although every forum post recommends a different channel! 1, 3, 5, 6, 11... I have tried them all for hours at a time and get the exact same problem. The firmware is up to date. Is there an actual solution for this, or do I need to get a different router just to be able to use my iPhone?

  • Auto-rotate rotated images with mogrify

    - by Frank Presencia Fandos
    Some of my images were taken rotated, but they keep the orientation data. The problem is that, when using mogrify to convert them from JPG to png, that data seems to disappear. To show the problem, I think it's best to show the script and a screenshot. Here is the script: put it in a text file, give it execution permission, double-click or run it (from a terminal if you wish) and wait a while. All the JPGs in that folder will be converted to png. #! /bin/bash echo "Converting JPG to png. Please don't close this window." mogrify -alpha on -format png *.JPG mogrify -alpha on -format png *.jpg It works great and adds an alpha channel. This is personally useful when I edit them later, so I don't have to add the channel individually. Now the screenshot that illustrates the problem: as you can see, the original ones' (JPGs) preview is right, the modified preview is wrong, the Shotwell rendering is right, and the GIMP edit is wrong and didn't even say the image was rotated, as it usually does with other images. How can I edit my script to preserve the orientation?
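
    ImageMagick can apply the EXIF orientation itself during the conversion with the -auto-orient option, which bakes the rotation into the pixels so that viewers that ignore EXIF (and the png output, which carries no such tag) still show the image upright. A variant of the script above (assuming an ImageMagick build recent enough to support -auto-orient):

      #! /bin/bash
      # Convert JPGs to png, rotating the pixels according to the EXIF tag first.
      echo "Converting JPG to png. Please don't close this window."
      mogrify -auto-orient -alpha on -format png *.JPG
      mogrify -auto-orient -alpha on -format png *.jpg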

  • ffmpeg: video file played OK on Ubuntu, but no sound on XP

    - by Andy Le
    I created a video clip using ffmpeg (vcodec: mpeg2video, acodec: AC3 5.1). The file can be played normally on Ubuntu, but when I play it on an XP machine, there is no sound. I can play AC3 files and other movies with AC3 sound. I already tried many codec packs and many players. When I compare the MediaInfo tab of the Properties window of the file with another playable movie, I see that the Audio Identifier of the audio stream in my file is 0x80 while it is 0x02 in the other movie. So I guess that's why players on XP can't recognize the audio codec. When I use an MKV container instead of MPEG (still mpeg2video codec), then the result is OK on both Ubuntu and XP (with the correct Audio ID). I really need MPEG though. Any idea? This is the command I used: ~/ffmpeg/ffmpeg/ffmpeg -loop_input \ -t 97 -r 30000/1001 -i v%4d.tga -i final.ac3 \ -vcodec mpeg2video -qscale 1 -s 400x400 -r 30000/1001 \ -acodec copy -y out6.mpeg 2 This is the output of mediainfo (on Ubuntu): General Complete name : out6.mpeg Format : MPEG-PS File size : 6.86 MiB Duration : 1mn 37s Overall bit rate : 593 Kbps Video ID : 224 (0xE0) Format : MPEG Video Format version : Version 2 Format profile : Main@Main Format settings, BVOP : No Format settings, Matrix : Default Format_Settings_GOP : M=1, N=12 Duration : 1mn 37s Bit rate mode : Variable Bit rate : 122 Kbps Width : 400 pixels Height : 400 pixels Display aspect ratio : 1.000 Frame rate : 29.970 fps Resolution : 8 bits Colorimetry : 4:2:0 Scan type : Progressive Bits/(Pixel*Frame) : 0.025 Stream size : 1.41 MiB (21%) Audio ID : 128 (0x80) Format : AC-3 Format/Info : Audio Coding 3 Duration : 1mn 36s Bit rate mode : Constant Bit rate : 448 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 44.1 KHz Stream size : 5.18 MiB (75%)
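
    The symptom is consistent with the AC-3 track being muxed as a private stream (ID 0x80) in a generic MPEG program stream, which the stock XP DirectShow MPEG splitter may simply not hand to an AC-3 decoder. Two things worth trying (a sketch only, reusing the options from the command above and not tested against this exact ffmpeg build): re-encode the audio to MP2, which becomes a standard MPEG audio stream, or remux into a DVD-style VOB where AC-3 as a private stream is the norm.

      # 1) MP2 audio (stereo downmix, since MP2 cannot carry 5.1):
      ~/ffmpeg/ffmpeg/ffmpeg -loop_input -t 97 -r 30000/1001 -i v%4d.tga -i final.ac3 \
          -vcodec mpeg2video -qscale 1 -s 400x400 -r 30000/1001 \
          -acodec mp2 -ab 384k -ac 2 -y out_mp2.mpeg
      # 2) Keep the AC-3 but remux the existing file as a DVD-style program stream:
      ~/ffmpeg/ffmpeg/ffmpeg -i out6.mpeg -vcodec copy -acodec copy -f vob -y out_vob.mpg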

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another one. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7, with a 10 Gbps ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives that I found are not really suitable: rcp: no authentication, forget it. FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a method to force any FTP daemon to encrypt the control channel (for the authentication) and not encrypt the data channel (for data transfers). SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher for the encryption, but it still uses too much CPU power for my needs. rsync over ssh: same problems as with SCP/SFTP. Plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go. Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it would support rsync-style copying/synchronization (e.g. copy only the differences).)
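
    One option that matches "strong authentication, no payload encryption" is the HPN-SSH patch set: key exchange and authentication stay encrypted as usual, and only the bulk data switches to the NONE cipher. A sketch of what the client side looks like (the NoneEnabled/NoneSwitch options come from HPN-SSH, not stock OpenSSH, the server's sshd_config also needs NoneEnabled yes, and host and path names are placeholders):

      scp -oNoneEnabled=yes -oNoneSwitch=yes backup.tar backupserver:/backups/
      rsync -a -e "ssh -oNoneEnabled=yes -oNoneSwitch=yes" /data/ backupserver:/backups/data/

    The rsync form also gives the differences-only synchronization mentioned at the end of the question.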

  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, on both OS X and Ubuntu. I have four or five WebApps installed, and they appear in the "New Tab" page just fine. The problem is that, before, when the feature first became available in the Dev Channel, the actual tabs hosting the webapps received special treatment: they would have a 3D Dock-like look, and (more importantly) the tab bar would be hidden while using that tab. Sometime in the last few weeks, however, it seems that the special treatment just disappeared with one of the daily updates. The webapps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that webapp, and they use the right icons; but they've basically become indistinguishable from a regular pinned tab. The two special features mentioned above have disappeared, on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?
