Search Results

Search found 6172 results on 247 pages for 'limit choices to'.

Page 46/247

  • Where is the bottleneck?

    - by jsymon
    There is a limit on connections somewhere along the line here... On a Windows Server 2008 machine, each request to a URL running on localhost takes ~3 seconds to complete. This is fine and normal for the URL. However, if I open the same localhost URL in about 10 tabs and set them all to reload at the same time, they finish sequentially, 3 seconds after each other, meaning the last tab takes 30 seconds to load (3s x 10). What is especially odd is that Firebug reports each page as taking 3 seconds to load. Another point to add is that the status bar just sits at 'Done' for the last tab until 3 seconds before it completes, when it changes to 'Waiting for localhost'. I am hoping there is some connection limit somewhere, otherwise this would be a disaster if more than one user ever visited the site at a time! Could it be a limit where one PC can't make more than two simultaneous requests to a URL at a given time?

    Read the article

  • How to legitimately work around ISP rate limits

    - by Derek Ting
    Many ISPs rate-limit the number of e-mails sent from a particular IP address. What is the proper way to get around that rate limit? Our company has an iPhone application that sends many e-mails because of our large user base, and many of those e-mails go to different ISPs that rate-limit the number of messages coming from a specific IP. We do not send spam and we are a legitimate business. Is there a better way to resolve this limitation than just getting a ton of IP addresses? Ideally, I wouldn't want to rely on a third-party service to send mail; however, if it's the only possible solution, we would consider it.

    Read the article

  • 421 Concurrent Connections - Ratelimit from helpdesk to rackspace server

    - by g18c
    We have the Kayako helpdesk running on our WHM Linux server. When e-mails come in from customers, Kayako sends notifications to a number of staff whose mailboxes are hosted on Rackspace mail servers. I noticed a large queue in the Exim queued message viewer of WHM, and looking in the Exim logs I can see many lines like:

    2012-10-13 20:06:56 1TN72s-0007Cw-1l SMTP error from remote mail server after initial connection: host mx2.emailsrvr.com [173.203.2.32]: 421 Too many concurrent connections from this client.

    One client e-mail results in about 5 e-mails to Rackspace servers, perhaps 60 e-mails per hour on average - not a huge amount, but enough to cause messages to be rejected when sent in short bursts. Ideally, if we can limit the connections made to the Rackspace servers we can comply with their limit; for our requirements, sending one e-mail every 10 seconds or so would be fine. Messages to all other servers should go through at normal rates; only mx1.emailsrvr.com and mx2.emailsrvr.com should have this connection limit policy applied. Is this possible?
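    One approach worth sketching (untested against this exact WHM setup, so treat the host list and values as assumptions to adapt): Exim's smtp transport supports serialize_hosts, which allows only one simultaneous connection to any host matching the list, and connection_max_messages, which caps how many messages are sent down a single connection. Added to the remote_smtp transport in the Exim configuration (WHM exposes this via its advanced Exim configuration editor), that might look like:

        remote_smtp:
          driver = smtp
          # Allow only one connection at a time to the Rackspace MX hosts
          serialize_hosts = mx1.emailsrvr.com : mx2.emailsrvr.com
          # Re-use each connection for at most 10 messages before reconnecting
          connection_max_messages = 10

    Hosts not matching serialize_hosts are unaffected, which keeps delivery to all other servers at normal rates.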

    Read the article

  • Mail server hammering

    - by Rodrigo
    I've noticed a quick increase in SMTP connections coming to my server. Investigating further, I figured out that there's a botnet hammering my SMTP server. I've tried to stop it by adding rules to iptables:

    -N SMTP-BLOCK
    -A SMTP-BLOCK -m limit --limit 1/m --limit-burst 3 -j LOG --log-level notice --log-prefix "iptables SMTP-BLOCK "
    -A SMTP-BLOCK -m recent --name SMTPBLOCK --set -j DROP
    -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTPBLOCK --rcheck --seconds 360 -j SMTP-BLOCK
    -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTP --set
    -A INPUT -p tcp --dport 25 -m state --state NEW -m recent --name SMTP --rcheck --seconds 60 --hitcount 3 -j SMTP-BLOCK
    -A INPUT -p tcp --dport 25 -m state --state NEW -j ACCEPT

    That keeps them from hammering "too fast", but the problem remains: there are about 5 attempts per second, and I had to increase the maximum number of children for sendmail/dovecot. There are too many IPs to filter out manually, and simply moving SMTP to another port is not practical since I have many other clients on that server. I'm using sendmail with dovecot - any ideas for filtering this out more efficiently?
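    For comparison, a per-source-IP rate limit can also be expressed with the hashlimit match, which keeps its own per-IP state and avoids the two-stage recent bookkeeping above. A sketch only - the 3-per-minute threshold is an assumption to tune, not a recommendation:

        # Drop NEW connections to port 25 from any single source IP that exceeds
        # 3 connections per minute (burst of 3), tracked per source address.
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
          -m hashlimit --hashlimit-name smtp-per-ip --hashlimit-mode srcip \
          --hashlimit-above 3/minute --hashlimit-burst 3 -j DROP
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW -j ACCEPT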

    Read the article

  • Why isn't "(ulimit -d 1000; firefox) &" working?

    - by MaxB
    I'm trying to limit the memory usage of Firefox to prevent it from stalling the whole system on problematic web sites. I tried, in bash: (ulimit -d 1000; firefox) & This should limit the memory usage to 1000 kB. Then I opened YouTube and noticed, in top, that Firefox is using 2.6% of the memory, or about 200 MB, and not crashing. Clearly the limit is being ignored. Why is that, and how can I enforce it correctly?
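    One possible explanation to check (stated as an assumption, not a verified answer for this Firefox build): ulimit -d sets the data-segment limit (RLIMIT_DATA), and allocations that glibc satisfies with mmap() have historically not counted against it, so the process can sail past the cap. The address-space limit set by ulimit -v (RLIMIT_AS) does cover mmap'd memory, so a sketch of the same experiment with it would be:

        # Cap total virtual address space at ~1 GB (ulimit -v takes kilobytes) and launch Firefox
        (ulimit -v 1048576; firefox) &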

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: "On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns." Has that amount of transparency happened? Is there a TechNet article somewhere? Say my score is limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help: "If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9." You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from three Windows blog entries and an article:

    Memory subscore
    Score    Conditions
    =======  ================================
    1.0      < 256 MB
    2.0      < 500 MB
    2.9      <= 512 MB
    3.5      < 704 MB
    3.9      < 944 MB
    4.5      <= 1.5 GB
    5.9      < 4.0GB-64MB on a 64-bit OS; Windows Vista highest score
    7.9      Windows 7 highest score

    Graphics subscore
    Score    Conditions
    =======  ======================
    1.0      doesn't support DX9
    1.9      doesn't support WDDM
    4.9      does not support Pixel Shader 3.0
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    7.9      Windows 7 highest score

    Gaming graphics subscore
    Score    Result
    =======  =============================
    1.0      doesn't support D3D
    2.0      supports D3D9, DX9 and WDDM
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9  even higher framerates at even higher resolutions
    7.9      Windows 7 highest score

    Processor subscore
    Score    Conditions
    =======  ==========================================================================
    5.9      Windows Vista highest score
    6.0-6.9  many quad core processors will be able to score in the high 6 low 7 ranges
    7.0+     many quad core processors will be able to score in the high 6 low 7 ranges
    7.9      8-core systems will be able to approach 8.9; Windows 7 highest score

    Primary hard disk subscore (note)
    Score    Conditions
    =======  ========================================
    1.9      Limit for pathological drives that stop responding when pending writes
    2.0      Limit for pathological drives that stop responding when pending writes
    2.9      Limit for pathological drives that stop responding when pending writes
    3.0      Limit for pathological drives that stop responding when pending writes
    5.9      highest you're likely to see without SSD; Windows Vista highest score
    7.9      Windows 7 highest score

    Bonus chatter: you can find your detailed WEI test results in C:\Windows\Performance\WinSAT\DataStore, e.g. 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

    <WinSAT>
      <WinSPR>
        <DiskScore>5.9</DiskScore>
      </WinSPR>
      <Metrics>
        <DiskMetrics>
          <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
          <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
          <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
        </DiskMetrics>
      </Metrics>
    </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine - how do I increase my hard drive's random I/O throughput? Update - amount of memory limits the rating: some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

    <WinSPR>
      <LimitsApplied>
        <MemoryScore>
          <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
        </MemoryScore>
      </LimitsApplied>
    </WinSPR>

    References: Windows Vista Team Blog: Windows Experience Index: An In-Depth Look; Understand and improve your computer's performance in Windows Vista; Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
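    As an aside on the DataStore files mentioned above: the assessment is driven by the winsat tool, so re-running the formal assessment from an elevated command prompt should regenerate that set of XML results (a hedged pointer - exact behaviour varies by Windows version):

        rem Re-run the full Windows Experience Index assessment (run elevated)
        winsat formal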

    Read the article

  • update list selector dynamically in webos

    - by Aswan
    Hi, I want to update list selector items dynamically. I set up the list selector widget like this:

    this.ConversionToNumaric = [
        {label: $L('One'), value: "1", secondaryIcon: ''},
        {label: $L('two'), value: "2", secondaryIcon: ''},
        {label: $L('three'), value: "3", secondaryIcon: ''}
    ];
    this.controller.setupWidget('listSelectorConversionToNumaric',
        {labelPlacement: 'left', label: $L('To'), choices: this.ConversionToNumaric, modelProperty: 'currentConversionToNumaric'},
        this.selectorsModel);

    The above code is what I use to set up the widget. Then I try to update the choices:

    this.ConversionToNumaric = [
        {label: $L('four'), value: "4", secondaryIcon: ''},
        {label: $L('five'), value: "5", secondaryIcon: ''}
    ];
    this.currentConversionToPower.choices = this.ConversionToNumaric;
    this.controller.modelChanged(this.currentConversionToNumaric);

    I don't know what mistake I've made here, but the list is not updating. Please help.

    Read the article

  • What are the parental controls within Windows 8 and how do I use them?

    - by KronoS
    I've got some little ones that I want to be able to use my PC, but I don't want them using my account since it's an admin account. I've created a user account for them without admin privileges, and now I'm looking to see if there is a way to do the following:
    - Prevent them from downloading/purchasing Metro apps
    - Limit the amount of time they spend on the computer
    - Limit the time of day they can use it
    - Limit internet browsing based on age
    - Prevent them from installing desktop applications
    - Any other parental controls that I can set
    I'm looking for a good exhaustive overview of the parental controls found within Windows 8 and a brief synopsis of how to use those tools.

    Read the article

  • Apache 2 UserDir for only one VirtualHost

    - by dentarg
    Is it possible to enable the UserDir directive for just one VirtualHost, rather than have it on for all and then disable it (with "UserDir disable") for each VirtualHost you don't want it on? I have tried putting this inside a <VirtualHost> and commenting out everything in the global config (/etc/apache2/conf.d/userdir.conf), but with no luck:

    <IfModule mod_userdir.c>
        UserDir public.www
        UserDir disabled root
        <Directory /home/*/public.www>
            AllowOverride FileInfo AuthConfig Limit Indexes
            Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
            <Limit GET POST OPTIONS>
                Order allow,deny
                Allow from all
            </Limit>
            <LimitExcept GET POST OPTIONS>
                Order deny,allow
                Deny from all
            </LimitExcept>
        </Directory>
    </IfModule>

    Read the article

  • How do ulimit -n and /proc/sys/fs/file-max differ?

    - by bantic
    I notice that on a new CentOS image I just booted up on EC2, the ulimit default is 1024 open files, but /proc/sys/fs/file-max is set to 761,408, and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit on the number of file descriptors, while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user -- does each login get a 1024 limit on open files, or is it a limit of 1024 combined open files across those logins? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files?
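    For poking at the two values side by side, a quick sketch (standard Linux paths and commands, though defaults differ per image; note that the ulimit value is actually applied per process rather than per user):

        # Soft limit on open file descriptors for processes started from this shell
        ulimit -n
        # System-wide ceiling on open file handles
        cat /proc/sys/fs/file-max
        # Currently allocated handles, free handles, and the maximum
        cat /proc/sys/fs/file-nr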

    Read the article

  • using nmap to guess remote OS and probe service details on a single port only

    - by WoJ
    I am looking at scanning a large network with nmap in order to: (1) identify the OS of devices (-O --osscan-limit), and (2) probe for details of a service on a single port (I would have used -sV for all open ports). The problem is that -sV will probe all the ports (which I do not want to do for performance reasons), and I cannot use -p to limit the scan to the one port I am interested in, as that impacts the OS fingerprinting. I could not find anything in the manual to limit the service probing. Thank you for any ideas (including other approaches outside of nmap, though I would prefer to stick to nmap).
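    One workaround worth sketching (the subnet and port below are placeholders, and the two-pass trade-off is an assumption about what is acceptable here): split the scan into two runs, letting OS detection see all the ports it needs while restricting version probing to the single port of interest.

        # Pass 1: OS fingerprinting across the network, no version probes
        nmap -O --osscan-limit -oA osscan 192.168.0.0/24
        # Pass 2: version detection on the single port of interest only
        nmap -sV -p 443 -oA versions 192.168.0.0/24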

    Read the article

  • Creating form object for variable kind of form.

    - by Bunny Rabbit
    I want to create a form for users to submit questions in Django. So far the models I have created are:

    class Question(models.Model):
        statement = models.CharField(max_length=100)

    class Choice(models.Model):
        statement = models.CharField(max_length=100)
        value = models.IntegerField()
        question = models.ForeignKey(Question)

    Now I want to write a Form class for the above, but the problem is that the number of choices is variable - a user can decide how many choices a question must have. How do I do that in Django?
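    A sketch of one common pattern for this (hedged: the field names match the models above, but the extra count, can_delete flag and import paths are assumptions to adapt): an inline formset ties a variable number of Choice forms to a single Question, and the client can post back as many rows as the management form declares.

        # forms.py - a minimal sketch, assuming these models live in the same app
        from django import forms
        from django.forms.models import inlineformset_factory
        from myapp.models import Question, Choice

        class QuestionForm(forms.ModelForm):
            class Meta:
                model = Question
                fields = ['statement']

        # One Question, any number of Choice rows; 'extra' only controls how many
        # empty rows are rendered initially - more can be submitted.
        ChoiceFormSet = inlineformset_factory(Question, Choice,
                                              fields=['statement', 'value'],
                                              extra=3, can_delete=True)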

    Read the article

  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I want to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values?

    EDIT: I tried the concat function - I tried this, and variations of it, but it doesn't work:

    <xf:value ref="concat(instance('param_choices')/choice/root, .)"/>

    Below is a simplified example of what I am trying to achieve. The XForms model:

    <xf:instance id="param_choices" xmlns="">
        <choices>
            <root label="Param Choices">/param</root>
            <choice label="Name">/@AAA</choice>
            <choice label="Value">/@BBB</choice>
        </choices>
    </xf:instance>

    The XForms UI code I currently have:

    <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full">
        <xf:label>Param choices:</xf:label>
        <br/>
        <xf:itemset nodeset="instance('param_choices')/choice">
            <xf:label ref="@label"></xf:label>
            <xf:value ref="."></xf:value>
        </xf:itemset>
    </xf:select>

    If the user selects the "Name" checkbox, the XML output is:

    <criterion>/@BBB</criterion>

    However, I want to combine the root nodeset value with the current choice nodeset value - essentially:

    <xf:value ref="(instance('definition_choices')/choice/root) + ."/>

    to achieve the following XML output:

    <criterion>/param/@BBB</criterion>

    Any suggestions on how to do this? (I am fairly new to XPath and XForms.) P.S. What I am asking made sense to me as I typed it out, but if you have trouble figuring out what I'm asking, just let me know. Thanks!
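    Two hedged observations, not a verified answer for this processor: first, instance('param_choices') already returns the <choices> document element, so the <root> child is addressed as instance('param_choices')/root rather than .../choice/root; second, ref expects a node binding, so a computed expression like concat(...) generally needs a computed value attribute instead, where the processor supports one (this is an XForms 1.1/2.0-era feature whose availability varies). A sketch of the itemset with both changes:

        <xf:itemset nodeset="instance('param_choices')/choice">
            <xf:label ref="@label"></xf:label>
            <!-- value= takes an XPath expression; '.' is the current choice node -->
            <xf:value value="concat(instance('param_choices')/root, .)"/>
        </xf:itemset>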

    Read the article

  • Traffic shaping L2TP/IPsec VPN (via accounts not connection)

    - by Cromulent
    I need to be able to control the amount of bandwidth a specific user account can use on a VPN connection. One account I want to be able to use the VPN with no restrictions and another account I want to limit to a reasonable amount of bandwidth (say 10GB or so a month). I'm aware that you can traffic shape individual connections but that does not quite solve the problem as the limited account can just disconnect and reconnect to get a new connection. I need to be able to limit bandwidth on a login basis for a given period of time (monthly limit). I'm really not that familiar with traffic shaping in general so any advice would be appreciated. Thank you.

    Read the article

  • How to assign XML attribute values to drop down list using XSL

    - by Vijay
    Hi, I have a sample XML like this:

    <?xml version="1.0" encoding="iso-8859-9"?>
    <DropDownControl id="dd1" name="ShowValues" choices="choice1,choice2,choice3,choice4">
    </DropDownControl>

    I need to create a UI representation of this XML using XSL, filling the drop-down list with the values specified in the choices attribute. Does anyone have any idea how to do this? Thanks in advance :)
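    One way to sketch this, assuming an XSLT 2.0 processor is available (tokenize() is an XPath 2.0 function; an XSLT 1.0 solution would need a recursive substring-before template instead):

        <!-- Render the control as an HTML <select>, one <option> per comma-separated choice -->
        <xsl:template match="DropDownControl">
            <select id="{@id}" name="{@name}">
                <xsl:for-each select="tokenize(@choices, ',')">
                    <option value="{.}"><xsl:value-of select="."/></option>
                </xsl:for-each>
            </select>
        </xsl:template>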

    Read the article

  • Django: Template should render 'description' not actual value

    - by Till Backhaus
    Hi, in a model I have a CharField with choices:

    class MyModel(models.Model):
        THE_CHOICES = (
            ('val', _(u'Value Description')),
        )
        ...
        myfield = models.CharField(max_length=3, choices=THE_CHOICES)

    Now in the template I access an instance of MyModel:

    {{ my_instance.myfield }}

    Of course this gives me val instead of Value Description. How do I get the description? Thanks in advance!
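    For reference, Django generates a get_FOO_display() method for every field defined with choices, and zero-argument methods can be called directly from a template, so the usual way to show the description is along these lines:

        {{ my_instance.get_myfield_display }}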

    Read the article

  • Will 5 Terabyte NAS drive be compatible with Windows XP SP3 32 bit?

    - by TrevorBoydSmith
    (NOTE: The operating system, in this case Windows XP SP3 32-bit, is not a choice.) I am trying to set up a short-term storage device. First, I found a large 5-terabyte NAS drive that would, IMO, fulfill my storage requirements. Second, I also found that Windows XP seems to have a hard drive size limit (see 'Is there a limit to the size of a hard drive for Windows XP pre-SP1?'): "XP should handle up to 2 TB per volume after the service packs are applied. You are correct. There was a 137 GB limit on the original pre-service-pack Windows XP. This was addressed/fixed in SP1." My question is: will my Windows XP SP3 32-bit machine see the 5-terabyte NAS and be able to read/write properly to it?

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens because concurrent connections all open tables, and I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website yields mostly posts explaining the connection factor, or indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?
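    For reference, a sketch of the relevant my.cnf knobs (the variable name depends on version - table_cache on older 5.0/5.1 servers, table_open_cache from 5.1.3 onward - and the numbers below are illustrative, not recommendations):

        [mysqld]
        # Upper bound on cached open table handlers; each entry consumes file descriptors
        table_open_cache = 4096      # use table_cache on older versions
        # Let mysqld request more file descriptors from the OS for itself
        open_files_limit = 8192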

    Read the article

  • Twitter User/Search Feature Header Support in LINQ to Twitter

    - by Joe Mayo
    LINQ to Twitter's goal is to support the entire Twitter API. So, if you see a new feature pop up, it will be in-queue for inclusion. The same holds for the new X-Feature… response headers for User/Search requests. However, you don't have to wait for a special property on the TwitterContext to access these headers; you can read them via the TwitterContext.ResponseHeaders collection. The following code demonstrates how to access the new X-Feature… headers with LINQ to Twitter:

    var user =
        (from usr in twitterCtx.User
         where usr.Type == UserType.Search &&
               usr.Query == "Joe Mayo"
         select usr)
        .FirstOrDefault();

    Console.WriteLine(
        "X-FeatureRateLimit-Limit: {0}\n" +
        "X-FeatureRateLimit-Remaining: {1}\n" +
        "X-FeatureRateLimit-Reset: {2}\n" +
        "X-FeatureRateLimit-Class: {3}\n",
        twitterCtx.ResponseHeaders["X-FeatureRateLimit-Limit"],
        twitterCtx.ResponseHeaders["X-FeatureRateLimit-Remaining"],
        twitterCtx.ResponseHeaders["X-FeatureRateLimit-Reset"],
        twitterCtx.ResponseHeaders["X-FeatureRateLimit-Class"]);

    The query above uses the User entity, whose type is Search, allowing you to search for the Twitter user whose name is specified by the Query parameter filter. After materializing the query with FirstOrDefault, twitterCtx will hold all of the headers, including X-Feature…, that Twitter returned. Running the code above will display results similar to the following:

    X-FeatureRateLimit-Limit: 60
    X-FeatureRateLimit-Remaining: 59
    X-FeatureRateLimit-Reset: 1271452177
    X-FeatureRateLimit-Class: namesearch

    Beyond the X-Feature… headers, you might have noticed that the TwitterContext.ResponseHeaders collection contains any HTTP headers that Twitter sends back for a query. Therefore, you'll be able to access new Twitter headers at any time in the future with LINQ to Twitter. @JoeMayo

    Read the article

  • Fix: Azure Disabled over 49 cents? Beware of using a Java Virtual Machine on Microsoft Azure

    - by Ken Cox [MVP]
    I love my MSDN Azure account. I can spin up a demo/dev app or VM in seconds. In fact, it is so easy to create a virtual machine that Azure shut down my whole account! Last night I spun up a Java Virtual Machine to play with some Android stuff. My mistake was that I didn’t read the Virtual Machine pricing warning: “I have a MSDN Azure Benefit subscription. Can I use my monthly Azure credits to purchase Oracle software?” “No, Azure credits in our MSDN offers are not applicable to Oracle software. In order to purchase Oracle software in the MSDN Azure Benefit subscription, customers need to turn off their {0} spending limit and pay at the regular pay-as-you-go rate. Otherwise, Oracle usage will hit the {1} spending limit and the subscription will be immediately disabled.”  Immediately disabled? Yup. Everything connected to the subscription was shut off, deallocated, rendered useless - even the free Web sites and the free Sendgrid email service.  The fix? I had to remove the spending limit from my account so I could pay $0.49 (49 cents) for the JVM usage. I still had $134.10 in credits remaining for regular usage with 6 days left in the billing month.  Now the restoration/clean-up begins… figuring out how to get the web sites and services back online.  To me, the preferable way would be for Azure to warn me when setting up a JVM that I had no way of paying for the service. In the alternative, shut down just the offending services – the ones that can’t be covered by the regular credits. What a mess.

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you're emailing a file or using online storage sites like SkyDrive, there's a limit on the size of files you can use. Here's how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you'd need to split them. This same technique also works for any file-sharing service - even if you were sending files through email. There are two ways to get around the limits: first, by just compressing the files if you're close to the limit; the second and more interesting way is to split the files into smaller chunks. Keep reading for how to do both.

    Read the article

  • deWitters Game loop in libgdx (Android)

    - by jaysingh
    I am a beginner and I want a complete example in libGDX for Android (fixed-time game loop) of how to limit the framerate to 50 or 60, and also how to manage interpolation between game states, with simple example code, e.g. the deWiTTERS game loop:

    @Override
    public void render() {
        float deltaTime = Gdx.graphics.getDeltaTime();
        Update(deltaTime);
        Render(deltaTime);
    }

    libGDX comments: there is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the nightlies. "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle gpu/software vsynching. Play around with it." (Mario) Note that none of these limit the framerate to a specific value; if you really need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.
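    For reference, a minimal sketch of a fixed-timestep loop with interpolation inside libGDX's render() callback (the 60 Hz step, the 0.25 s frame clamp and the updateGame/renderGame method names are assumptions for illustration, not libGDX API):

        private static final float STEP = 1 / 60f; // fixed logic rate: 60 updates per second
        private float accumulator = 0f;

        @Override
        public void render() {
            // Clamp very long frames so the simulation can't spiral after a pause
            float frameTime = Math.min(Gdx.graphics.getDeltaTime(), 0.25f);
            accumulator += frameTime;

            // Run game logic in fixed-size steps
            while (accumulator >= STEP) {
                updateGame(STEP);
                accumulator -= STEP;
            }

            // Render between the last two states using the leftover fraction
            float alpha = accumulator / STEP;
            renderGame(alpha);
        }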

    Read the article

  • Search inside Xournal files (.xoj)

    - by Javad Sadeqzadeh
    I'm a big fan of Evernote and use it regularly. However, it has a 60 MB storage limit (text files are not going to occupy much space, but the concern about the limitation remains). Today I installed Xournal, which has great features like annotating, nice backgrounds, freehand shapes and notes, saving in PDF format, and many more. But the big problem is that, as far as I've noticed, there is no built-in feature for searching inside the notes (created by Xournal with the .xoj suffix). I tried the Catfish File Search application (which builds bash commands for full-text search), but it couldn't help either. Is there any way to search inside a .xoj file at all? If so, Xournal could be a suitable alternative to Evernote if you put your .xoj files on a cloud service (which certainly offers much more storage than 60 MB). If not, is there any other convenient app similar to Evernote, but with a higher storage limit or no limit at all? Somebody suggested the Zim desktop wiki app, which looks great, but I'm not sure I could copy and paste everything there (a mixture of photos, tables and text with various formats and highlights) like I do with Evernote. A very useful tool I use is the Evernote Web Clipper browser extension; having a desktop client like Everpad is a plus, but not an absolute need. PS: I use Pocket, so please don't suggest that (it only preserves links, which might change over time, not the actual text). I also use Google Drive/Docs, but I don't like them for this purpose either - too slow, and no browser extension or desktop client. Thank you so much in advance.
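    One hedged pointer that may make the search part workable: a .xoj file is, as far as I know, just gzip-compressed XML, so the text layer of typed notes can be searched with the gzip-aware grep (handwritten strokes, of course, contain no searchable text, and the ~/notes path below is a placeholder):

        # List the .xoj files under ~/notes whose XML contains the word, case-insensitively
        zgrep -il "searchterm" ~/notes/*.xoj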

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that sprung up today. I had to book-keep and emit diagnostics for the average thread performance in highly threaded code over a period of the last X number of calls and no more. Need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects, and I've been using them frequently - one in particular seemed like a good fit for storing each thread's performance data: ConcurrentQueue. But I wanted to store only the most recent X number of calls, and since ConcurrentQueue currently does not support a size constraint, I had to come up with my own generic version, which attempts to restrict usage to numeric types only. Unfortunately there is no IArithmetic-like interface that constrains to numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others, or write your own.)

    FIFO FixedSizedConcurrentQueue

    using System;
    using System.Collections.Concurrent;
    using System.Linq;

    namespace xxxxx.Data.Infrastructure
    {
        [Serializable]
        public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
        {
            private FixedSizedConcurrentQueue() { }

            public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
            {
                _queue = queue;
            }

            ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

            public int Size { get { return _queue.Count; } }
            public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }

            public int Limit { get; set; }

            public void Enqueue(T obj)
            {
                _queue.Enqueue(obj);
                lock (this)
                {
                    T @out;
                    while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                }
            }
        }
    }

    The usage case is straightforward; in this case I'm using a FIFO queue with a maximum size of 200 to store doubles, to which I simply Enqueue() the calculated rates:

    Usage

    var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!

    Read the article

  • Are jQuery's :first and :eq(0) selectors functionally equivalent?

    - by travis
    I'm not sure whether to use :first or :eq(0) in a selector. I'm pretty sure they'll always return the same object, but is one speedier than the other? I'm sure someone here must have benchmarked these selectors before, and I'm not really sure of the best way to test whether one is faster. Update: here's the bench I ran:

    /* start bench */
    for (var count = 0; count < 5; count++) {
        var i = 0, limit = 10000;
        var start, end;

        start = new Date();
        for (i = 0; i < limit; i++) {
            var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:eq(0)");
        }
        end = new Date();
        alert("div.RadEditor.Telerik:eq(0) : " + (end - start));

        var start = new Date();
        for (i = 0; i < limit; i++) {
            var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:first");
        }
        end = new Date();
        alert("div.RadEditor.Telerik:first : " + (end - start));

        start = new Date();
        for (i = 0; i < limit; i++) {
            var radeditor = $thisFrame.parents("div.RadEditor.Telerik")[0];
        }
        end = new Date();
        alert("(div.RadEditor.Telerik)[0] : " + (end - start));

        start = new Date();
        for (i = 0; i < limit; i++) {
            var $radeditor = $($thisFrame.parents("div.RadEditor.Telerik")[0]);
        }
        end = new Date();
        alert("$((div.RadEditor.Telerik)[0]) : " + (end - start));
    }
    /* end bench */

    I assumed that the 3rd would be the fastest and the 4th would be the slowest, but here are the results I came up with:

    FF3:
             :eq(0)   :first   [0]      $([0])
    trial1   5275     4360     4107     3910
    trial2   5175     5231     3916     4134
    trial3   5317     5589     4670     4350
    trial4   5754     4829     3988     4610
    trial5   4771     6019     4669     4803
    Average  5258.4   5205.6   4270     4361.4

    IE6:
             :eq(0)   :first   [0]      $([0])
    trial1   13796    15733    12202    14014
    trial2   14186    13905    12749    11546
    trial3   12249    14281    13421    12109
    trial4   14984    15015    11718    13421
    trial5   16015    13187    11578    10984
    Average  14246    14424.2  12333.6  12414.8

    I was correct that just returning the first native DOM object ([0]) is the fastest, but I can't believe that wrapping that object in the jQuery function was faster than both :first and :eq(0)! Unless I'm doing it wrong.

    Read the article
