Search Results

Search found 905 results on 37 pages for 'signals slots'.

  • 5.1 surround sound

    - by rocker9455
    OK, so I've always had trouble with enabling 5.1 in Ubuntu. Running 'alsamixer' I have: Master, Headphones, PCM, Front, Front Mi, Front Mi, Surround, Center; all are at 100%. Card: HDA Intel, Chip: Realtek ALC888 (this is my onboard sound; it's a Dell Studio with 7.1 integrated sound). Running "speaker-test -c6 -twav" I only get the front two speakers (right/left) making any noise; the others make no noise at all. I have no other sound card to use, as all my PCI slots are used up. My daemon.conf:
        ; daemonize = no
        ; fail = yes
        ; allow-module-loading = yes
        ; allow-exit = yes
        ; use-pid-file = yes
        ; system-instance = no
        ; enable-shm = yes
        ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
        ; lock-memory = no
        ; cpu-limit = no
        ; high-priority = yes
        ; nice-level = -11
        ; realtime-scheduling = yes
        ; realtime-priority = 5
        ; exit-idle-time = 20
        ; scache-idle-time = 20
        ; dl-search-path = (depends on architecture)
        ; load-default-script-file = yes
        ; default-script-file =
        ; log-target = auto
        ; log-level = notice
        ; log-meta = no
        ; log-time = no
        ; log-backtrace = 0
        resample-method = speex-float-1
        ; enable-remixing = yes
        ; enable-lfe-remixing = no
        flat-volumes = no
        ; rlimit-fsize = -1
        ; rlimit-data = -1
        ; rlimit-stack = -1
        ; rlimit-core = -1
        ; rlimit-as = -1
        ; rlimit-rss = -1
        ; rlimit-nproc = -1
        ; rlimit-nofile = 256
        ; rlimit-memlock = -1
        ; rlimit-locks = -1
        ; rlimit-sigpending = -1
        ; rlimit-msgqueue = -1
        ; rlimit-nice = 31
        ; rlimit-rtprio = 9
        ; rlimit-rttime = 1000000
        ; default-sample-format = s16le
        ; default-sample-rate = 44100
        ; default-sample-channels = 6
        ; default-channel-map = front-left,front-right
        default-fragments = 8
        default-fragment-size-msec = 10
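
    For reference, a minimal sketch of forcing PulseAudio to a 6-channel default and retesting. The per-user config path and the channel-map names are assumptions, not taken from the post, and may need adjusting for the ALC888 layout:

        # Sketch only: set a 5.1 default in the per-user PulseAudio config.
        mkdir -p ~/.pulse
        printf '%s\n' 'default-sample-channels = 6' \
          'default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe' \
          >> ~/.pulse/daemon.conf
        pulseaudio -k                     # stop the running daemon
        pulseaudio --start                # restart it so the new defaults are read
        speaker-test -c6 -twav -D pulse   # retest all six channels through PulseAudio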

    Read the article

  • Anticipating JavaOne 2012 – Number 17!

    - by Janice J. Heiss
    As I write this, JavaOne 2012 (September 30-October 4 in San Francisco, CA) is just over a week away -- the seventeenth JavaOne! I'll resist the impulse to travel in memory back to the early days of JavaOne. But I will say that JavaOne is a little like your birthday or New Year's in that it invites reflection, evaluation, and comparison. It's a time when we take the temperature of Java and assess the world of information technology generally. At JavaOne, insight and information flow amongst Java developers like at no other time of the year. This year, the status of Java seems more secure in the eyes of most Java developers, who agree that Oracle is doing an acceptable job of stewarding the platform, and while the story is still in progress, few doubt that Oracle is engaging strongly with the Java community and wants to see Java thrive. From my perspective, the biggest news about Java is the growth of some 250 alternative languages for the JVM -- from Groovy to Jython to JRuby to Scala to Clojure and on and on -- offering both new opportunities and challenges. The JVM has proven itself to be unusually flexible, resulting in an embarrassment of riches in which, more and more, developers are challenged to find ways to optimally mix several different languages on projects. To the matter at hand -- I can say with confidence that Oracle is working hard to make each JavaOne better than the last: more interesting, more stimulating, more networking, and more fun! A great deal of thought and attention is being devoted to the task. To free up time for the 475 technical session/Birds-of-a-Feather/Hands-on-Lab slots, the Java Strategy, Partner, and Technical keynotes will be held on Sunday, September 30, beginning at 4:00 p.m. Let's not forget Java Embedded @ JavaOne, which is being held Wednesday, Oct. 3rd and Thursday, Oct. 4th at the Hotel Nikko. It will provide business decision makers, technical leaders, and ecosystem partners important information about Java Embedded technologies and new business opportunities. This year's JavaOne theme is "Make the Future Java". So come to JavaOne and make your future better by:
    -- Choosing from 475 sessions given by the experts to improve your working knowledge and coding expertise
    -- Networking with fellow developers in both casual and formal settings
    -- Enjoying world-class entertainment
    -- Delighting in one of the world's great cities (my home town)
    Hope to see you there! Originally published on blogs.oracle.com/javaone.

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun:
    IoT Hack Fest: Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer. Training and equipment will be provided. Devoxx will also host an Internet of Things shop on the exhibition floor where attendees can purchase Arduino, Raspberry Pi and robot starter kits. Bring your IoT wish list!
    Video Interviews: Yolande Poirier and I will be interviewing members of the Java community in the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or a cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)
    Oracle Keynote: On Wednesday morning Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect, will provide an update on Java 8 and beyond.
    Oracle Booth: Drop by the Oracle booth to see old and new friends. We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pis each day, so be sure to get your badge scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.
    See you at Devoxx!

    Read the article

  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: (1) an async approach, basically based on signals, or just an async approach as it is called by many papers and languages like the new C# 5.0 for example, with a "companion thread" that manages the policy of your pipeline; and (2) a concurrent or multi-threading approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive, slow, and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang; it's responsive and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it is in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that the newer CPUs have more than 2 cores is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or the job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • Intel 1.83 Mac Mini upgraded to 1.5 gig for Snow Leopard

    - by Paula
    Even though I've upgraded countless video cards, RAM sticks, hard drives, and motherboards in PCs, this will be my first Mac mini RAM upgrade. I've watched the classic "putty knife" video. (Absurd method... but I guess it's what I'm stuck with.) I have a 1.83 Intel-based Mac mini from 2007-2008 with 1 GB of RAM (two 512 MB sticks). Can I install 1 GB + 512 MB? (Or do I have to throw away my existing sticks and buy two 1 GB sticks?) This old machine is rarely used, so I want to spend the absolute minimum on this RAM upgrade. We ONLY use it to run Xcode, nothing more, but I wanted to increase the RAM so we can install Snow Leopard. I have no idea how many pins the memory has. I printed out over FORTY pages of specs about this machine from "About this Mac" but didn't find what I needed. Does this sound right:
    DDR2 SDRAM (but no mention of SO-DIMM)
    667MHz (but don't know if I can also use faster)
    Pin count: Unknown
    Computer model number: Unknown (but I "think" it's an MB138/A)
    PC2 RAM (unknown... not mentioned)
    5300 (unknown... not mentioned)
    Mfg date: (unknown... not mentioned)
    Number of slots: (unknown... not mentioned)
    Laptop or desktop RAM: (unknown... not mentioned)

    Read the article

  • Error in CentOS 6 while compiling Java classes in Tomcat 6

    - by AJIT RANA
    I am a newbie to Linux and CentOS 6. I just bought a server and want to deploy my web app on it. I am getting an error while compiling my servlet classes. It shows me "bash: javac: command not found" when I try to compile my classes, but when I checked in '/usr/lib/jvm/java-1.6.0/bin' I found javac there. Then I ran it with the command ./javac and got this error:
        [root:ip_address.com]# ./javac
        There is insufficient memory for the Java Runtime Environment to continue.
        pthread_getattr_np
        Error occurred during initialization of VM
        java.lang.OutOfMemoryError: unable to create new native thread
    I followed the steps shown in "Java outofmemoryerror when creating <100 threads", which gives the command to get the limits:
        [root:ipaddress.com]# ulimit -a
        core file size (blocks, -c) unlimited
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 0
        file size (blocks, -f) unlimited
        pending signals (-i) 278528
        max locked memory (kbytes, -l) 32
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 10240
        cpu time (seconds, -t) unlimited
        max user processes (-u) 1024
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited
        [root:ipaddress.com]# top
        bash: top: command not found
    Link: http://stackoverflow.com/q/12913857/1746764
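
    For reference, a hedged sketch of the first things to check here. The JDK path is taken from the post, but the container diagnostics assume an OpenVZ-style VPS, which the post does not confirm:

        # Put the JDK on the PATH so 'javac' resolves without cd'ing into the JVM
        # directory (path taken from the post; persist it in ~/.bash_profile if it helps).
        export JAVA_HOME=/usr/lib/jvm/java-1.6.0
        export PATH=$JAVA_HOME/bin:$PATH
        javac -version

        # "unable to create new native thread" on a cheap VPS is often a container
        # resource cap rather than real RAM exhaustion; this file exists only on
        # OpenVZ/Virtuozzo kernels (assumption), and non-zero failcnt values point
        # at the limiting counter.
        grep -E 'numproc|privvmpages|kmemsize' /proc/user_beancounters

        # Raising the per-user process/thread limit for the current shell is also
        # worth a try (1024 shown above; 4096 is an arbitrary example).
        ulimit -u 4096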

    Read the article

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e., a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366, where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also, the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the e-shop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB, but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time:
    Visual Studio or Eclipse (both should be fine with ~2GB RAM; after that I feel their performance is I/O bound)
    Firefox (it can never have enough RAM, can it? :)
    Office (~500MB RAM should be enough)
    ... and then some smaller apps like Skype, other browsers, some background services etc.

    Read the article

  • eMachine W5243 won't POST; fans run but optical drive will not open.

    - by NicciAdonai
    Symptoms are what is described in the title. The machine reacts to the power button being hit by spinning up the two fans: CPU and PSU. The hard drive (SATA) spins up as well. No other reaction. This one symptom is particularly weird, though: the optical drive will not open with the IDE cable attached, but if I unplug it from the mobo it will. I can turn the PC on with it attached, won't open; then unplug IDE while it is still on, WILL open; then plug IDE back in with the PC STILL ON, WON'T open. I have disconnected every peripheral unnecessary to POST. These include: mouse/keyboard, PCI modem, the IDE optical drive (power and data), and the SATA HDD (power and data). Video is onboard. The only two things connected are DB15 video and power cable. There were 2 512 MB DDR2 sticks of RAM in it. I have tried running it with just one of them, then switched the other in. Currently seated is a completely different 1 GB stick that I keep around for troubleshooting purposes, and I have tried it in both slots. I have replaced the CMOS battery with a used one I had lying around, and which worked in the computer it came out of. I have tested the PSU with a tester to confirm it was good, then tried connecting another PSU just in case--same symptoms. I have even tried a suggestion I found elsewhere on this site wherein one disconnects power from the PSU and then presses the PC's power button twice, thereby "resetting" the PSU. Currently I am trying yet another suggestion: turn it on and wait an inordinate amount of time for POST. Any help would be appreciated.

    Read the article

  • What is the best keyboard for typing speed (not layouts)

    - by Gapton
    So I am a programmer, and I like playing typing speed games. My typing speed is, for common English words, 85 to 90 wpm, max 95. I type on various devices - my laptop, desktop, office PC - and they all have slightly different keyboards. Being a curious programmer, I wonder what type of keyboard is used for the highest possible typing speed. Or let me phrase it another way: what type of keyboard do people use in typing speed contests? Here is something I know that I feel like I can share: It must be a wired keyboard; I can feel the lag as I am typing this on my wireless keyboard, even though it is a slightly more expensive model which claims to have zero lag. I know people prefer a mechanical keyboard for the haptic feedback, however I have not tried one. It lasts longer and is noisy, and it also does not have the problem of normal keyboards where, if you press many keys at a time, the signals get jammed and the computer only receives one or two keys. I personally prefer those "thin profile" keyboards. I type a lot, and 95 wpm puts me in the top 5%; this is of course just on a gaming site. However, when I type on the fat keyboards, my fingers have to travel a much longer distance before the keys actually click. This is where I find myself typing much faster with those thin-profile keyboards found on my laptop. Because my fingers only hover on the keys and I only need to press a short distance, each stroke takes less force, and light rapid strokes are what make me type fast. When I type on a fat keyboard, I am forced to use heavy strokes, and this slows me down. There must be some people out there who are keyboard scientists, who actually do experiments and user tests with different setups. It would be interesting to understand more about the things we use every day for not just work but a majority of our communications. P.S. This is about hardware and not about switching keyboard layouts to Dvorak.

    Read the article

  • What DBus signal is sent on system suspend?

    - by Paul Robinson
    Hello, I need to detect when a machine is going to sleep in Ubuntu 9.10 and Fedora 13. Both use UPower, so I've been looking on the "org.freedesktop.UPower" DBus bus for such signals. I've been listening for the "Sleeping" signal on the UPower bus with the following command:
        dbus-monitor --system "type='signal',interface='org.freedesktop.UPower',member='Sleeping'"
    When I sleep the machine (either by closing the lid, selecting "shutdown - suspend", or sending a DBus message) I don't see a "Sleeping" event. I notice that the "Sleeping" signal is sent when the "org.freedesktop.UPower.AboutToSleep" method is invoked. I can do this manually by calling:
        dbus-send --print-reply --system --dest=org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower.AboutToSleep
    and then the "Sleeping" signal is fired. My understanding is that anything that sleeps the PC must call "AboutToSleep" beforehand; it doesn't seem like this is happening. I've tried these steps on both Fedora 13 and Ubuntu 9.10 and I see the same results. Can anyone explain what's happening or provide me with an alternative DBus signal to listen for? Many thanks, Paul
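
    For reference, a hedged way to narrow this down is to capture every signal the power daemon emits around a suspend instead of filtering on one member; if nothing shows up at all, the desktop's power manager is probably suspending through a different interface (DeviceKit-Power or HAL on releases of that era), which the broader captures below would reveal:

        # Capture everything the UPower service emits while suspending (no member
        # filter), so a differently named signal is not missed.
        dbus-monitor --system "type='signal',sender='org.freedesktop.UPower'" | tee upower-signals.log

        # For comparison, watch the session bus as well; on distributions of this era
        # the desktop power manager may announce the suspend there instead (assumption).
        dbus-monitor --session | grep -i -E 'sleep|suspend|hibernate'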

    Read the article

  • Troubleshooting Mid 2007 iMac RAM upgrade

    - by MDT
    I am trying to install new RAM in my friend's iMac, something I have done several times before. We unplugged the computer before performing the upgrade, used anti-static wrist bands, and yes, the memory is compatible and inserted correctly. The stock RAM was Hynix 1 GB PC2-5300S-555-12 and the memory we are replacing it with is 2x2 GB Centon CMP800SO2048.01. Now, I know this model number suggests that the RAM is 800MHz and the iMac is only 667MHz, but it clearly states on the box that this RAM is PC2-5300 667MHz compatible. The problem is that when I install the new RAM I get little response from the computer. I hear the hard drive and disk drive start to initialize, but then they just stop and the screen remains black. I have tried every variation of the new RAM and the old RAM in both slots and even tried the same RAM from my old iMac, and I just can't get it to boot. Has anyone ever had a problem like this? Thank you in advance for any and all input on this issue!

    Read the article

  • 613 thread limit on Debian

    - by Joel
    When running this program, thread-limit.c, on my dedicated Debian server, the output says that my system can't create more than around 600 threads. I need to create more threads, and to fix my system misconfiguration. Here is some information about my dedicated server:
        de801:/# uname -a
        Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux
        de801:/# java -version
        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
        de801:/# ldd $(which java)
        linux-vdso.so.1 => (0x00007fffbc3fd000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00002af013225000)
        libjli.so => /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/../lib/amd64/jli/libjli.so (0x00002af013441000)
        libdl.so.2 => /lib/libdl.so.2 (0x00002af01354b000)
        libc.so.6 => /lib/libc.so.6 (0x00002af013750000)
        /lib64/ld-linux-x86-64.so.2 (0x00002af013008000)
        de801:/# cat /proc/sys/kernel/threads-max
        1589248
        de801:/# ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 0
        file size (blocks, -f) unlimited
        pending signals (-i) 794624
        max locked memory (kbytes, -l) 32
        max memory size (kbytes, -m) unlimited
        open files (-n) 10240
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 128
        cpu time (seconds, -t) unlimited
        max user processes (-u) unlimited
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited
    Here is the output of the C program:
        de801:/test# ./thread-limit
        Creating threads ...
        Address of c = 1061520 KB
        Address of c = 1081300 KB
        Address of c = 1080904 KB
        Address of c = 1081168 KB
        Address of c = 1080508 KB
        Address of c = 1080640 KB
        Address of c = 1081432 KB
        Address of c = 1081036 KB
        Address of c = 1080772 KB
        100 threads so far ...
        200 threads so far ...
        300 threads so far ...
        400 threads so far ...
        500 threads so far ...
        600 threads so far ...
        Failed with return code 12 creating thread 637.
    Any ideas how to fix this, please?
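
    For reference, a hedged diagnostic sketch: the "-028stab" kernel string usually indicates an OpenVZ/Virtuozzo container, where thread creation is capped by per-container bean counters rather than by the ulimits shown above; whether this server is actually such a container is an assumption:

        # Return code 12 from pthread_create() is ENOMEM. On an OpenVZ container the
        # practical ceiling is usually a bean counter (numproc, privvmpages, kmemsize)
        # rather than a ulimit; /proc/user_beancounters exists only inside such containers.
        cat /proc/user_beancounters

        # Run thread-limit again, then re-check: a non-zero value in the last (failcnt)
        # column identifies the counter the hosting provider would need to raise.
        grep -E 'numproc|privvmpages|kmemsize' /proc/user_beancounters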

    Read the article

  • Automatically install driver on headless WHSv1 system

    - by Dan Neely
    I have one of the HP MediaSmart Windows Home Server v1 boxes. Its network port appears to have died a few days ago, but the system is not giving any other sign of failure: no activity lights come on on either side of the cable when connected to my gigabit switch; when connected to one of my router's 100-megabit ports the lights turn on, but it remains unreachable over the network and my router never lists it among its DHCP clients. I bought a USB-Ethernet adapter to temporarily get it back online, but the adapter needs a driver to work, which I can't install because the system is headless by design (no video out, no PCI/PCIe slots), with admin access only available via the WHS client or Remote Desktop. Both of those options require network connectivity and are consequently unavailable. I tried copying the drivers to a flash drive, but Windows either didn't look there or none of the drivers provided were suitable (Win8, Win7, or combined XP and Vista). I've been told that a USB WiFi adapter would have the same driver problem.

    Read the article

  • Apple video adapters: Mini-DVI to DVI + DVI to video?

    - by Ken
    I have: a MacBook (which has Mini-DVI output), an Apple Mini-DVI to DVI adapter (so I can hook up my MacBook to my DVI LCD), and a Mac mini (which has DVI output). I see that I can get a DVI-to-video (S-Video and composite) adapter from Apple that will let me hook up my Mac mini to my TV set (which has only component, S-Video, and composite -- nothing digital). So far so good. Question: Will that same adapter also let me hook my MacBook to my TV? That is, can I chain MacBook -> Mini-DVI to DVI -> DVI to video -> TV set, and see a picture? I know there are digital/analog/integrated variants of DVI, and it's not at all clear to me what pins these things have and what signals they're sending. One website I found suggested that it would work, and another suggested that it wouldn't even physically connect. So I'm looking for, ideally, someone who's actually tried it, or has these adapters sitting around to try. I know I can buy 2 adapters (Mini-DVI to Video, and DVI to Video) to do this, but at $20 a pop I'll avoid that if at all possible.

    Read the article

  • Are HDMI to VGA Adapters Really Device-Specific?

    - by allquixotic
    There are a lot of devices on the market right now (especially mobile devices) with a Micro-HDMI or Mini-HDMI port and no VGA or D-Sub output. Most manufacturers of said devices sell a cable that looks something like this: I have yet to find a cable like this that claims to work on a wide array of devices. In general, these cables claim to work with one specific device only. The way these cables work, I think, is that analog VGA signals are sent from the HDMI port on the device. This should work for devices that have special hardware on the motherboard/GPU capable of driving this. Is it the case that these cables have to be custom designed for each device? Or, rather, can any device which possesses this special "signaling of analog VGA over the HDMI port" be made to work with a cable that is physically compatible (i.e. the HDMI end plugs into the device and the VGA end accepts a VGA monitor cable)? Note that I am not looking for a product recommendation, just a conceptual clarification of what exactly these devices are doing. Also, a few remarks:
    The cables like the one depicted here are not digital-to-analog converters. I know about these: they are expensive, and they are the ONLY solution if your device only outputs a digital signal and is incapable of driving analog VGA over the HDMI port.
    The cables like the one depicted here are not straight crossover cables from VGA to HDMI, either. The crossover cables are designed to send a digital HDMI signal over the VGA port's wires; that is, the wire protocol is HDMI (digital) but the physical pinout is the same as VGA, even though nothing analog is happening. Once again, this is not the behavior that, I believe, the devices I'm talking about in this question exhibit.
    The cabling and devices that this question is about transmit the analog VGA data over the HDMI port (the HDMI port is on the device outputting the data, and the VGA side is the monitor/projector).

    Read the article

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date, depending on how I move forward.
    Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
    Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz
    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, and Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on it. The issue is, the discussions I read do not really state clearly whether they are speaking about a virtual server running inside an OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running a VM inside an OS.

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to under-provision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x 480GB SSDs in RAID 1 and another 2x 1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.

    Read the article

  • IPMI sdr entity 8 (memory module) only showing 3 records?

    - by thinice
    I've got two Dell PE R710s: A has a single socket and 3 DIMMs in one bank; B has both sockets and 6 DIMMs (2 banks @ 3 DIMMs) filled. The output from "ipmitool sdr entity 8" confuses me - according to the OpenIPMI documentation these records are supposed to represent DIMM slots.
    Output from A (1 CPU, 3 DIMMs, 1 bank):
        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 27 degrees C
        Temp | 0Bh | ns  | 8.1 | Disabled
        Temp | 0Ch | ucr | 8.1 | 52 degrees C
    Output from B (2 CPUs, 3 DIMMs in both banks, 6 total):
        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 26 degrees C
        Temp | 0Bh | ok  | 8.1 | 25 degrees C
        Temp | 0Ch | ucr | 8.1 | 51 degrees C
    Now, I'm starting to think this output isn't the DIMMs themselves, but maybe a sensor for each bank and something else? (Otherwise, shouldn't I see 6 readings for the one with both banks active?) The CPUs aren't near 50 deg C, so I doubt the significantly higher reading is due to proximity. Is anyone able to explain what I'm seeing? Does the output from my "ipmitool sdr entity 8 -v" here on pastebin seem to hint at different sensors? The sensor naming conventions are poor - seems like a Dell thing. Here is output from racadm racdump.
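
    For reference, a couple of hedged ipmitool invocations that can help tell per-DIMM sensors from per-bank or ambient ones; sensor names and entity instances vary by vendor, so the exact output is not guaranteed:

        # Extended listing: the entity id.instance column shows which physical entity
        # each sensor is attached to, separating per-bank sensors from per-DIMM ones.
        ipmitool sdr elist full

        # All temperature sensors with their full names, for cross-referencing the
        # three anonymous "Temp" rows above.
        ipmitool sdr type Temperature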

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:
        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)
    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:
        $ ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        file size (blocks, -f) unlimited
        max locked memory (kbytes, -l) unlimited
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 1
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) 266
        virtual memory (kbytes, -v) unlimited
        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and setting the stack to 64k and 128k (and each of these combinations) with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?
    EDIT: Here is some environment information on my Ubuntu box:
        $ ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 20
        file size (blocks, -f) unlimited
        pending signals (-i) 16382
        max locked memory (kbytes, -l) 64
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) unlimited
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited
        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
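
    For reference, the most visible difference between the two ulimit dumps is "max user processes" (266 on the Mac, unlimited on Ubuntu). A hedged sketch of experimenting with that limit follows; the specific values and the test command are assumptions rather than recommendations:

        # Raise the per-user process/thread cap for the current shell only, then rerun
        # the suite (2048 is an arbitrary example value; 'mvn test' is a placeholder
        # for however the JUnit suite is normally launched).
        ulimit -u 2048
        mvn test

        # The shell limit cannot exceed the kernel caps, which are worth inspecting;
        # these sysctl names are valid on OS X of this era (treat as an assumption
        # on other releases).
        sysctl kern.maxproc kern.maxprocperuid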

    Read the article

  • How can I get my SATA DVDs working again?

    - by user269051
    My hard drive crashed (WinXPpro), so I took a C drive from a broken PC. The new C drive is Win7pro. Motherboard is MSI K8N Neo4 Platinum, with 4 hard drives installed on SATA 1-4 (nForce4 Ultra); the two DVD drives are loaded on SATA 7-8 (Silicon Image SATARAID5). I've tweaked BIOS settings every which way. The closest thing to success was when each DVD had both a CD and a DVD icon, and blinked green. No CD or DVD could be read in either drive. I assume that the problem resulted from the fact that my new C drive does not have the RAID drivers? I've tried loading from the floppy (doesn't work). I can't boot off the DVD/CD, and switching the DVD's SATA cable to the SATA 3 slot (and pulling one of the hard discs) didn't work. I'd like to be able to use the other two available SATA slots for a mirrored RAID drive, and get my DVDs working again. Any suggestions?

    Read the article

  • Why does just splitting an Ethernet cable not work?

    - by Sin Jeong-hun
    I thought Ethernet was logically a one-line communication bus (for argument's sake, I am excluding hubs). All machines attached to the bus hear the same signals, and the machines themselves try to avoid collisions by randomly backing off. http://computer.howstuffworks.com/ethernet6.htm If so, why would splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch to it?
    What the Internet said would not work:
        [4 port home router] ------ [one Ethernet cable] ----- [simple splitter] ====== [two computers]
    What the Internet said I should do:
        [4 port home router] ------ [one Ethernet cable] ----- [switch] ====== [two computers]
    Is this because of signal degradation (reduced electric current)? Thank you for all the answers! The reason why I did not just use two ports of my home router is this: the 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since a wired network is far more reliable and secure, I bought a long Ethernet cable and connected the computer to the router. Now I am thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share it between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and a switch work; I just have some bits of knowledge I heard in my college class.

    Read the article

  • Hardware and network infrastructure for running gaming servers on VirtualGL

    - by archer
    Found a nice project, VirtualGL (http://www.virtualgl.org/). I tried running 3D games (EVE Online, Prototype) on a server and displaying the output on a thin client over a 100Mbps network. Server: Gentoo Linux on an AMD Phenom II X6 3.4GHz, 8GB RAM, 2x NVIDIA 9800 GTX, in a single session with a display resolution of 1024x768 on the client. Performance is very promising. I am going to increase the network speed to 1Gbps (using either Ethernet or fiber) and run 5-6 clients simultaneously. My questions are:
    a) What would be better for the network: 1Gbps Ethernet or fiber (clients are distributed within about 20m of the server)? Is a managed switch a must for better network performance?
    b) Should I increase the number of video cards in SLI on the server (I am going to use a Gigabyte GA-890FXA-UD7, which has 6 PCI Express slots [2 x4, 2 x8 and 2 x16])? Will it impact performance significantly? If I need to increase the number of video cards, what would be better: 2 banks of video cards with 3 per bank using SLI, or 3 banks with 2 per bank? Would Linux recognize that and properly use all banks of video cards?
    c) Any suggestions on good thin clients supporting 1920x1080 HDMI video and a 1Gbps network?
    I understand that my questions can't be answered clearly (unless someone has already managed to use this kind of stuff ;)), although any suggestions would be very helpful.
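
    For reference, a minimal sketch of the VirtualGL invocation a setup like this relies on; the host name and the demo application are placeholders, and the exact binary names depend on the VirtualGL version installed:

        # On the thin client: open a VGL-aware SSH session to the render server
        # (hostname is a placeholder).
        vglconnect user@render-server

        # Inside that session: run the 3D application through vglrun so OpenGL
        # rendering happens on the server GPU and only compressed frames cross the
        # network; jpeg is VirtualGL's usual transport for remote X displays.
        vglrun -c jpeg glxspheres64   # glxspheres64 is the benchmark demo shipped with VirtualGL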

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to back up. I would like to spend less than GBP 300 on the solution. I don't see the need to back up the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites etc). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time two of the drives will be being backed up to in the storage unit. The third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit. This approach should cover scenarios such as virus attack and fire or theft at one location. Thoughts and comments on the sanity of this approach, please...

    Read the article

  • Force spin-down of external hard drive on Linux (Raspberry Pi)

    - by user258346
    I'm currently setting up a home server using a Raspberry Pi with an external hard disk connected via USB. However, my hard drive never spins down when idle. I have already tried the hints provided at raspberrypi.org, without any success.
    1.) sudo hdparm -S5 /dev/sda returns:
        /dev/sda:
        setting standby to 5 (25 seconds)
        SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    2.) sudo hdparm -y /dev/sda returns:
        /dev/sda:
        issuing standby command
        SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    3.) sudo sdparm --flexible --command=stop /dev/sda returns "/dev/sda: HDD 1234", without spin-down of the drive.
    I use the following hardware: an Inateck FDU3C-2 dual-port USB 3.0 HDD docking station and a Western Digital WD10EZRX Green 1TB. Is it possible that the spin-down commands being sent are somewhere overwritten, lost, or ignored?
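
    For reference, a hedged sketch of alternatives when the ATA standby commands are rejected by the USB-SATA bridge; the package names (sg3-utils, udisks2, hd-idle) are assumptions about what is installable on Raspbian:

        # udisks2 can power the whole USB device off cleanly once its filesystems are
        # unmounted; this goes through the bridge rather than around it.
        sudo udisksctl power-off -b /dev/sda

        # sg_start issues the same SCSI START STOP UNIT as 'sdparm --command=stop',
        # but -v prints the command and sense data, showing whether the bridge rejects it.
        sudo sg_start --stop -v /dev/sda

        # If a manual stop does work, the third-party hd-idle daemon can spin disks
        # down automatically after (here) ten minutes of inactivity.
        sudo hd-idle -i 600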

    Read the article
