Search Results

Search found 4396 results on 176 pages for 'low poly'.

Page 7/176

  • Ultra Low Latency Linux Distribution or Kernel

    - by Zanlor
    I'd like to know if there are any Linux distributions that are focused on low-latency networking. The area I'm working in is algorithmic trading, and extremely low-latency comms between machines is a must. The current h/w we're using is 10G Ethernet, and we're looking into things like InfiniBand RDMA and Voltaire VMA. I've googled around and have only been able to find tidbits of kernel patches, command-line options and hardware suggestions. I'm looking for a complete solution - a specially built kernel, kernel-bypass features, essentially all the goodies rolled up into one package. Does such a thing even exist? I ask because a lot of this stuff seems to be a black art; people keep what they know works to themselves.
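
    Not a distribution recommendation, but to illustrate the per-socket side of the "goodies" mentioned above, here is a hedged C sketch of two options commonly used for low-latency TCP on Linux. SO_BUSY_POLL needs a reasonably recent kernel, and the 50 microsecond figure is a placeholder rather than a tuned value:

    /* Sketch: per-socket options often used for low-latency TCP on Linux. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef SO_BUSY_POLL
    #define SO_BUSY_POLL 46   /* asm-generic value, for older headers that lack it */
    #endif

    int make_low_latency_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        /* Disable Nagle so small messages go out immediately. */
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        /* Busy-poll the NIC queue for up to 50 us instead of sleeping. */
        int busy_poll_us = 50;
        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_poll_us, sizeof(busy_poll_us)) < 0)
            perror("SO_BUSY_POLL (kernel may be too old)");

        return fd;
    }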

    Read the article

  • Dock displays low-resolution icons

    - by squircle
    Recently I've noticed that the Dock has started to display low-resolution icons in place of the former high-resolution ones for common apps like Stickies, Word, iTunes and Preview. Looking at the .icns file within each program, all resolutions of the icon are present in the file (high and low), but the Dock refuses to display them, leaving some programs looking like this: Restarting doesn't stop this behaviour, nor does a killall Dock, nor does removing the icon and re-adding it to the Dock. In Finder, the icons display normally. Does anybody know what may be causing this issue? Thanks!

    Read the article

  • APC and "apc.gc_ttl" :: How Low is Too Low?

    - by nojak
    I currently have my apc.gc_ttl set to 600 to help keep fragmentation down. Since apc.gc_ttl just sets how long stale entries are kept around for garbage collection, I don't see any harm in keeping it this low. However, I'm new to APC and have seen many configurations online that use a 3600 TTL, which seems quite long to me for a garbage-collection cache... Is 600 too low? Is 3600 too high? As I'm sure mileage varies with this setting, is there a good rule of thumb to follow?

    Read the article

  • Link between low level drivers and tty drivers

    - by agent.smith
    I was writing a console driver for Linux and came across the tty interface that I need to set up for this driver. I got confused as to how tty drivers are bound to low-level drivers. Often the root file system already contains a lot of tty device nodes, so I am wondering how low-level devices can bind to one of those existing tty nodes. For example, /dev/tty7 is a node on the root file system: how does a low-level device driver connect with this node? Or should that low-level device define a completely new tty device?
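
    For reference, the usual pattern for a new serial-style device is to register its own tty_driver (which gives it its own /dev nodes) rather than attach to an existing VT node like /dev/tty7. A rough, untested sketch against the 2.6-era API follows; the names, line count and dynamically allocated major are made up for illustration:

    /* Rough sketch: a low-level driver registering its own tty devices. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/tty.h>
    #include <linux/tty_driver.h>

    #define MYTTY_LINES 4

    static struct tty_driver *mytty_driver;

    static int mytty_open(struct tty_struct *tty, struct file *filp)
    {
        return 0;                       /* wake up the hardware here */
    }

    static void mytty_close(struct tty_struct *tty, struct file *filp)
    {
    }

    static int mytty_write(struct tty_struct *tty, const unsigned char *buf, int count)
    {
        return count;                   /* hand 'buf' to the low-level hardware here */
    }

    static const struct tty_operations mytty_ops = {
        .open  = mytty_open,
        .close = mytty_close,
        .write = mytty_write,
    };

    static int __init mytty_init(void)
    {
        mytty_driver = alloc_tty_driver(MYTTY_LINES);
        if (!mytty_driver)
            return -ENOMEM;

        mytty_driver->driver_name  = "mytty";
        mytty_driver->name         = "ttyMY";   /* nodes appear as /dev/ttyMY0.. */
        mytty_driver->major        = 0;         /* 0 = let the kernel pick a major */
        mytty_driver->type         = TTY_DRIVER_TYPE_SERIAL;
        mytty_driver->subtype      = SERIAL_TYPE_NORMAL;
        mytty_driver->init_termios = tty_std_termios;
        mytty_driver->flags        = TTY_DRIVER_REAL_RAW;
        tty_set_operations(mytty_driver, &mytty_ops);

        return tty_register_driver(mytty_driver);
    }

    static void __exit mytty_exit(void)
    {
        tty_unregister_driver(mytty_driver);
        put_tty_driver(mytty_driver);
    }

    module_init(mytty_init);
    module_exit(mytty_exit);
    MODULE_LICENSE("GPL");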

    Read the article

  • Replacement for C low level programming?

    - by Sauron
    So C obviously has a pretty dominant stronghold in low-level programming... but is anything coming out that challenges or wants to replace it? Python/C#/etc. all seem to be aimed at the very high level, but when it comes down to nitty-gritty low-level stuff, C seems to be king, and I haven't seen much "try" to replace that. Is there anything out there, or is learning C for low-level stuff still the standard?

    Read the article

  • One bidirectional TCP socket or two unidirectional ones? (linux, high volume, low latency)

    - by osgx
    Hello. I need to exchange a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2G+) and the OS is Linux. Would it be faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets? The test for this task is much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater, and both processes will be sending and receiving, like ping-pong). The possible benefit of 2 unidirectional connections comes from this comment in the Linux kernel (http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847): "TCP receive function for the ESTABLISHED state. It is split into a fast path and a slow path. The fast path is disabled when: ... Data is sent in both directions. Fast path only supports pure senders or pure receivers (this means either the sequence number or the ack value must stay constant) ... When these conditions are not satisfied it drops into a standard receive procedure patterned after RFC793 to handle all cases. The first three cases are guaranteed by proper pred_flags setting, the rest is checked inline. Fast processing is turned on in tcp_data_queue when everything is OK." All the other conditions for disabling the fast path are false in my case, so only a non-unidirectional socket stops the kernel from taking the fast path on receive.
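
    For reference, a minimal sketch of the ping-pong measurement described above over a single bidirectional TCP connection might look like the following; the peer address, port and message size are placeholders, and the echoing peer is omitted:

    /* Ping-pong round-trip latency over one bidirectional TCP socket.
     * Assumes a peer at 192.168.1.2:5000 echoing MSG_SIZE-byte messages. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define MSG_SIZE   1024     /* one of the 2^1 .. 2^13 sizes */
    #define ITERATIONS 1000

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)); /* no Nagle delays */

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                       /* placeholder port */
        inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr); /* placeholder peer */
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0)
            return 1;

        char buf[MSG_SIZE];
        memset(buf, 'x', sizeof(buf));

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERATIONS; i++) {
            ssize_t done = 0, n;
            while (done < MSG_SIZE) {                      /* send one message */
                if ((n = send(fd, buf + done, MSG_SIZE - done, 0)) <= 0) return 1;
                done += n;
            }
            done = 0;
            while (done < MSG_SIZE) {                      /* wait for the echo */
                if ((n = recv(fd, buf + done, MSG_SIZE - done, 0)) <= 0) return 1;
                done += n;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("avg round trip for %d-byte messages: %.1f us\n",
               MSG_SIZE, secs / ITERATIONS * 1e6);
        close(fd);
        return 0;
    }

    The two-unidirectional-socket variant would use one fd for the sends and another for the recvs, leaving the timing loop otherwise unchanged.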

    Read the article

  • What next-generation low-level language is the best bet for migrating the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning a migration to new technologies so you don't end up like the COBOL companies of 15 years ago. For now, C/C++ runs more than fine and there are plenty of devs on the market for it. But you want to start thinking about it now, because given the huge running code base and the data sensitivity, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to be quite popular. What would be your choice and why?

    Read the article

  • Dual Xeon Server voltages are low

    - by Mindflux
    I've got a whitebox server running CentOS 5.7. It's a dual Xeon 5620 with 24GB of RAM. The mainboard is a SuperMicro X8DT6-F and the chassis is a SC825TQ-R720LPB with dual 720W power supplies. We had a big power outage a couple of weeks back that took down everything. I don't have any pre-outage figures for this server, and the only reason I noticed these readings is that I was checking the servers with more scrutiny than usual while bringing them back up. http://i.imgur.com/rSjiw.png (Image of voltage readings) As you can see, CPU1 DIMM is low, +3.3V is high, 3.3VSB is high, +5V is high, +12V is REALLY low (outside the normal ±5%)... and VBAT is off the charts. With my whitebox VAR we've tried the following: swapping the PSU with one from another server that uses the same PSUs, a different power cord, updating the BMC/IPMI firmware in case the readings were wrong (they aren't), updating the BIOS, a different PDU, a different outlet and/or circuit, and replacing the voltage regulator unit. At this point the only thing we haven't done, seemingly, is replace the mainboard, which is what the next step will be unless something else sheds some light on the situation. I should mention the system is rock solid otherwise, which is a surprise given the +12V rail is that far off.

    Read the article

  • HyperV - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less nooby questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each with 4 CPUs and 4 GB of RAM, and on those VMs we are running some CPU-intensive tasks. Each machine has nearly 100% CPU usage. After I noticed slow performance I went to the host machine and started playing with Process Explorer. It turned out that CPU usage there is very low. I/O is also very low, and of course memory consumption is high, which is expected. I don't expect those 4 virtual cores dedicated to a VM to work as fast as 4 real hardware cores, but I still expected higher utilization of the real hardware. Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal), and all those interrupts are handled by only one core (which is strange). Are there any out-of-the-box optimizations I could perform to finally use all that processing power under the hood? My knowledge of virtualization technology is close to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.

    Read the article

  • wireless router - configuring for low-latency, high traffic environment

    - by Mark C
    Hey all, I have a few questions about configuring a router to achieve low latency and high throughput on a local area network that is not connected to the internet. I've read up on some stuff, but thought I would solicit some opinions here on what I've found and what I want to know... Turn off SSID broadcast - it produces extraneous packets that all clients receive and reply (?) to. Not a huge deal, but it may help a bit. Mixed mode off - I should try to have all devices use the same standard (e.g. 802.11n) and turn mixed mode off. Any thoughts on security? Does WEP or any of the WPA variants actually increase latency? Nothing super secure is going over this LAN, so if turning security off made things better, that'd be cool. Any other thoughts or things to focus on to create the low-latency environment I'm going for would be great. Links to webpages and papers are also cool; I'm open to going through a bunch of stuff. Thanks in advance!

    Read the article

  • Checking out systems programming, what should I learn, using what resources?

    - by Anto
    I have done some hobby application development, but now I'm interested in checking out systems programming (mainly operating systems, the Linux kernel, etc.). I know low-level languages like C, and I know a minimal amount of x86 assembly (should I improve on it?). What resources/books/websites/projects etc. do you recommend for getting started with systems programming, and which topics are important? Note that I know close to nothing about the subject, so whatever resources you suggest should be introductory. I know what the subject is and what it includes, but I have not done systems programming before (just some application development, as noted, and I'm familiar with a bunch of programming languages as well as software engineering in general, algorithms, data structures, etc.).

    Read the article

  • How do you limit root partition disk access to allow the drive to go into standby mode?

    - by Casey
    When there are no users on my system, I would like the hard disk to spin down to a low-power state. I realize that this might not be 100% achievable for a straight 24 hours, but it seems reasonable that the system could remain idle for a few hours at a time when it is not in use. My system is headless and runs a limited number of services; the primary ones are exim4, mythtv-backend, nfs, samba, cups and apt-cacher-ng. Assume that the drives are already enabled to go into standby mode. Also, it's not acceptable to increase the write-back timeout, since my system is not on a UPS.

    Read the article

  • Using Hidden Markov Model for designing AI mp3 player

    - by Casper Slynge
    Hey guys. I'm working on an assignment where I want to design an AI for an mp3 player. The AI must be trained and designed with the use of an HMM method. The mp3 player shall have the functionality of adapting to its user by analyzing incoming biological sensor data, and from this data the mp3 player will choose a genre for the next song. Given in the assignment are 14 samples of data; one sample consists of Heart Rate, Respiration, Skin Conductivity, Activity and finally the output genre. Below are the 14 samples, just for you to get an impression of what I'm talking about.

    Sample  HR      RSP     SC      Activity  Genre
    S1      Medium  Low     High    Low       Rock
    S2      High    Low     Medium  High      Rock
    S3      High    High    Medium  Low       Classic
    S4      High    Medium  Low     Medium    Classic
    S5      Medium  Medium  Low     Low       Classic
    S6      Medium  Low     High    High      Rock
    S7      Medium  High    Medium  Low       Classic
    S8      High    Medium  High    Low       Rock
    S9      High    High    Low     Low       Classic
    S10     Medium  Medium  Medium  Low       Classic
    S11     Medium  Medium  High    High      Rock
    S12     Low     Medium  Medium  High      Classic
    S13     Medium  High    Low     Low       Classic
    S14     High    Low     Medium  High      Rock

    My experience with HMMs is quite limited, so my question to you is whether I have the right angle on the assignment. I have three different states for each sensor (Low, Medium, High) and two observations/output symbols (Rock, Classic). In my own opinion I see my start probabilities as the weighted factors for either a Low, Medium or High state in the Heart Rate. So the ideal solution is that the AI will learn these 14 sets of samples, and when a user's sensor input is received, the AI will compare the combination of states for all four sensors with the already memorized samples. If there exists a matching combination, the AI will choose the genre, and if not it will choose a genre according to the weighted transition probabilities, while simultaneously updating the transition probabilities with the new data. Is this the right approach to take, or am I missing something? Is there another way to determine the output probability (I read about maximum likelihood estimation by EM, but don't understand the concept)? Best regards, Casper
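
    One simple way to get the output probabilities asked about at the end is a plain maximum-likelihood count over the 14 samples; EM is only needed when part of the data is hidden, and here every sample is fully labelled. A small sketch using only the Heart Rate column from the table above (the other sensors would be counted the same way):

    /* Estimate P(genre | heart rate) from the 14 training samples by counting
     * (maximum-likelihood estimate). Only the HR column is used, for brevity. */
    #include <stdio.h>

    enum { LOW, MEDIUM, HIGH };     /* sensor levels */
    enum { ROCK, CLASSIC };         /* genres */

    int main(void)
    {
        /* HR level and genre for samples S1..S14, copied from the table above. */
        int hr[14]    = { MEDIUM, HIGH, HIGH, HIGH, MEDIUM, MEDIUM, MEDIUM,
                          HIGH, HIGH, MEDIUM, MEDIUM, LOW, MEDIUM, HIGH };
        int genre[14] = { ROCK, ROCK, CLASSIC, CLASSIC, CLASSIC, ROCK, CLASSIC,
                          ROCK, CLASSIC, CLASSIC, ROCK, CLASSIC, CLASSIC, ROCK };

        int total[3] = {0}, rock[3] = {0};
        for (int i = 0; i < 14; i++) {
            total[hr[i]]++;
            if (genre[i] == ROCK)
                rock[hr[i]]++;
        }

        const char *names[3] = { "Low", "Medium", "High" };
        for (int level = 0; level < 3; level++)
            printf("P(Rock | HR=%s) = %d/%d = %.2f\n", names[level],
                   rock[level], total[level], (double)rock[level] / total[level]);
        return 0;
    }

    On the data above this gives P(Rock | HR=High) = 3/6, P(Rock | HR=Medium) = 3/7 and P(Rock | HR=Low) = 0/1.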

    Read the article

  • OCR for low quality images

    - by dassouki
    Unfortunately, this is not CSI. We collected 20,000 images of license plates, and we were wondering if there is a semi-reliable way to read those license plates using OCR. The images, especially the night-time ones, are of extremely low quality.

    Read the article

  • Mobile CPU vs. Ultra-low CPU: performance

    - by Mike
    I'm choosing a new laptop and one of the questions is the type of CPU - mobile or ultra-low voltage. To be more precise, I'm torn between two models of Intel Core i5 - the i5-2410M and the i5-3317U. Here is a comparison table. According to the official specs, the first one has a 2.3 GHz clock speed, while the second one has only 1.7 GHz - that's about a 25% difference. Is this really an important parameter, and which CPU is preferable for a laptop used for development, media and internet purposes?
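
    Spelling out the clock-speed arithmetic mentioned above: (2.3 - 1.7) / 2.3 ≈ 0.26, so the i5-3317U's base clock is roughly 26% lower than the i5-2410M's (equivalently, the 2410M is about 35% faster: 0.6 / 1.7 ≈ 0.35). How much this matters in practice also depends on Turbo Boost behaviour and on how CPU-bound the workload actually is.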

    Read the article

  • Low CPU usage but High fan speed

    - by dhasu
    The problem I am facing is that the CPU fan rotates at high speed even though the CPU usage is very low, less than 10%. I have cleaned the dust off the fan, replaced the thermal material on the processor and installed an extra fan. Nothing seems to work. Please help.

    Read the article

  • Why is network utilization so low

    - by dean20007
    My network utilization in Windows never seems to get above 1%. This seems absolutely tiny; does anyone know why it is so low and whether there is any way to increase it (or if it even needs increasing)? FYI: I use a D-Link USB wireless adaptor.

    Read the article

  • Installed ASUS HD4670, now unable to install ANY Debian due to low memory corruption

    - by Alfabravo
    I have a desktop PC which initially had the Intel D946gzis mobo, with its chipset as the video controller, some RAM and so on. There I installed Debian without a problem alongside Windows XP. I've bought an ASUS HD 4670 video card and installed it in the PC, and now the installed Debian does not work, while the Ubuntu live CD refuses to run no matter whether I set acpi and apic on or off... it throws some low memory corruption at a position just like shown here. With the normal configuration, Debian gives a kernel panic (keyboard lights blinking). Has anyone faced this before? Ideas? Thanks!! (Meanwhile, Debian hides in a VirtualBox :'( ) Edited: tried Ubuntu 9.10 x64 (since I've got a Core 2 Duo at 2GHz) and it throws a kernel panic at me (flashing Caps and Num LEDs). On screen, various lines can be read with things like: ... [ 1.957161] [] rb_erase+0xd6/0x160 [ 1.957266] [] page_fault+0x25/0x30 Could it be something about this new video card having DDR3?

    Read the article

  • Good low-cost SSL certificate providers

    - by phenry
    We need an SSL certificate to facilitate remote access and administration by a small number of employees. I don't want to have to train a bunch of non-technical users to install a self-signed cert on their home computers, so I'd prefer to purchase one from a well-trusted provider. We won't be using it for any kind of e-commerce or the like, so it seems hard to justify paying the prices demanded by some of the big-name providers. Who are some good low-cost providers to consider? What are the important differences between the offerings available at different price points? (And is the certificate business really as much of a racket as it seems?)

    Read the article

  • After low-level formatting, can Microsoft track previously pirated Windows installed on a PC?

    - by Neelabh
    I am getting calls from Microsoft and they are pushing me to purchase a lot of licensed software, but my budget is not that big, so they are asking for an on-site audit (SAM review)... So I did a low-level format of all my PCs and installed Ubuntu. Can they track that I previously installed pirated Windows XP on these systems, or would I need to change the hardware? After formatting, on what could Microsoft track the earlier piracy: 1) a hard disk ID, 2) a motherboard ID, 3) an IP address? Please help me, otherwise I will have to borrow a lot of money for licensing fees. Thanks in advance.

    Read the article
