Search Results

Search found 4258 results on 171 pages for 'maximum degree of paralle'.


  • Rate limiting Django admin login with Nginx to prevent dictionary attack

    - by shreddies
    I'm looking into the various methods of rate limiting the Django admin login to prevent dictionary attacks. One solution is explained here: simonwillison.net/2009/Jan/7/ratelimitcache/ However, I would prefer to do the rate limiting on the web server side, using Nginx. Nginx's limit_req module does just that, letting you specify the maximum number of requests per minute and sending a 503 if the user goes over: http://wiki.nginx.org/NginxHttpLimitReqModule Perfect! I thought I'd cracked it, until I realised that the Django admin's login page is not at a consistent URL: e.g. /admin/blah/ renders a login page at that URL rather than redirecting to a standard login page, so I can't match on the URL. Can anyone think of another way to know that the admin login page is being displayed (regexp the response HTML?)
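
    A minimal sketch of how limit_req might be wired up for this case, assuming the Django app is proxied under /admin/; the zone name, rate, and upstream address are illustrative, not taken from the question. The trade-off is that it throttles all admin traffic, not just login attempts:

        # Rate-limit every /admin/ URL rather than one login URL, which
        # sidesteps the fact that the login form can render at any admin path.
        http {
            limit_req_zone $binary_remote_addr zone=adminzone:10m rate=6r/m;

            server {
                location /admin/ {
                    limit_req zone=adminzone burst=5;
                    proxy_pass http://127.0.0.1:8000;   # placeholder upstream
                }
            }
        }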

    Read the article

  • Regex for [a-zA-Z0-9\-] with dashes allowed in between but not at the start or end

    - by orokusaki
    I'm using Python, and I'm not trying to extract the value, just to test that it fits the pattern. Allowed values:

        spam123-spam-eggs-eggs1
        spam123-eggs123
        spam
        123
        eggs123

    I just can't have a dash at the start or the end. There is a question on here that works in the opposite direction by extracting the string value after the fact, but I simply need to test the value so that I can disallow it. Also, it can be a maximum of 25 chars long, but a minimum of 4 chars long. Here's what I've come up with after some experimentation with lookbehind, etc.: # Nothing here
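
    One way to write the test, sketched in Python; the pattern enforces the 4-25 length and the no-dash-at-the-ends rule together, and the sample values are illustrative:

        import re

        # First and last chars must be alphanumeric; the 2-23 middle chars
        # may include dashes, giving a total length of 4 to 25.
        PATTERN = re.compile(r'^[a-zA-Z0-9][a-zA-Z0-9-]{2,23}[a-zA-Z0-9]$')

        for value in ('spam123-spam-eggs-eggs1', 'eggs123', '-bad-start', 'bad-end-', 'abc'):
            print(value, bool(PATTERN.match(value)))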

    Read the article

  • using Excel VBA, given the daily price of 50 stocks, choose 10 stocks such that they have the minumu

    - by correl
    The high-level goal is to choose 10 stocks out of a pool of 50 that have the lowest correlation among one another, so that I can have a well-diversified portfolio. I have managed to write a VBA macro that downloads the past 3 years of daily price data from Yahoo Finance and then computes the 50x50 correlation matrix (using the Correl function), with the daily close as the data. What I have tried so far is just a local-maximum heuristic:

    - For the two stocks that have the highest correlation with each other, remove one of them. Between the two, remove the one that has the higher average correlation with all the other stocks.
    - When I remove a stock from the pool, I just delete the corresponding row and column, giving a smaller matrix.
    - Repeat until just 10 stocks remain (a 10x10 matrix).

    I was wondering if there is some algorithm that already solves such a problem and gives the optimum solution?
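
    The heuristic is easier to discuss pinned down in code; a sketch in Python rather than VBA, assuming corr is the 50x50 correlation matrix as a NumPy array:

        import numpy as np

        def greedy_select(corr, keep=10):
            # Repeatedly find the most-correlated pair and drop whichever of
            # the two has the higher average correlation to the rest.
            idx = list(range(corr.shape[0]))
            while len(idx) > keep:
                sub = corr[np.ix_(idx, idx)].astype(float)
                np.fill_diagonal(sub, -np.inf)      # ignore self-correlation
                i, j = np.unravel_index(np.argmax(sub), sub.shape)
                np.fill_diagonal(sub, 0.0)          # exclude diagonal from the means
                drop = i if sub[i].mean() > sub[j].mean() else j
                idx.pop(drop)
            return idx                              # positions of the kept stocks

    As for optimality: picking the subset with minimal pairwise correlation is a combinatorial subset-selection problem (akin to maximum-diversity selection), so an exact answer generally means integer programming or exhaustive search over the C(50,10) subsets; the greedy pass above is only a local heuristic.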

    Read the article

  • Jira Conventions and Best-Practices.

    - by Amby
    I have been using Jira for six months but haven't been through any documentation on the options available and how to use them for maximum benefit. There must be some conventions that help in better tracking of issues - for instance, logging work, linking issues, creating sub-tasks. It would help if you could share some of the features (and the conventions) you follow while using Jira. It may vary from team to team, but there must be some generic rules that can be followed. Any feedback would be of help. Thanks.

    Read the article

  • jQuery: how to produce a ProgressBar from given markup

    - by Richard Knop
    So I'm using the ProgressBar jQuery plugin (http://t.wits.sg/misc/jQueryProgressBar/demo.php) to create some static progress bars. What I want is to take this markup:

        <span class="progress-bar">10 / 100</span>

    and produce a progress bar with a maximum value of 100 and a current value of 10. I am using the html() method to get the contents of the span and then split() to get the two numbers:

        $(document).ready(function() {
            $(".progress-bar").progressBar($(this).html().split(' / ')[0], {
                max: $(this).html().split(' / ')[1],
                textFormat: 'fraction'
            });
        });

    That doesn't work; any suggestions? I'm pretty sure the problem is with $(this).html().split(' / ')[0] and $(this).html().split(' / ')[1] - is that correct syntax?
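
    Inside $(document).ready, `this` is not the span, which is the likely bug. A sketch of one fix using each(), assuming the (value, options) signature the snippet above already uses:

        $(document).ready(function() {
            $(".progress-bar").each(function() {
                // Here `this` really is the span element.
                var parts = $(this).html().split(' / ');   // ["10", "100"]
                $(this).progressBar(parseInt(parts[0], 10), {
                    max: parseInt(parts[1], 10),
                    textFormat: 'fraction'
                });
            });
        });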

    Read the article

  • Issues with Ext-JS 1.1 date fields and Firefox 3.x / IE 8

    - by Cruachan
    I'm modifying an older website for a client that uses Ext-JS 1.1, and I'm having issues with the display of date fields in IE and particularly Firefox. The site was left in a semi-implemented state previously, so the problem wasn't noticed before. In Chrome and Safari everything looks fine and the date picker drops down and displays correctly. In Firefox, however, the picker is widened to cover the maximum scrollable browser width (very wide indeed), and in IE it's truncated to about two thirds of the width it should be. I'm not certain this is due to our CSS; because Chrome and Safari work fine, I think it might be a problem with Ext-JS itself. I realise this is an old version of Ext-JS, but because everything else works fine I don't want to go to the trouble of upgrading unless that would be very straightforward (but how difficult would that be?). I don't use Ext-JS myself and this is the only website my client has with it, so I'm really looking for the simplest possible solution.

    Read the article

  • Specify which row to return on SQLite Group By

    - by lozzar
    I'm faced with a bit of a difficult problem. I store all versions of all documents in a single table. Each document has a unique id, and the version is stored as an integer that is incremented every time there is a new version. I need a query that will select only the latest version of each document from the database. While GROUP BY works, it appears to break if the versions are not inserted in version order (it takes the maximum ROWID, which is not always the latest version). Note that the latest version of each document will most likely be a different number (i.e. document A is at version 3 and document B is at version 6). I'm at my wits' end - does anybody know how to do this? (Select all the documents, but return only a single record per document_id, and make the returned record the one with the highest version number.)
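
    A portable way to phrase it, assuming a table named documents with columns document_id and version (names are illustrative): join each document against its own maximum version instead of relying on which row GROUP BY happens to keep:

        SELECT d.*
        FROM documents AS d
        JOIN (SELECT document_id, MAX(version) AS max_version
              FROM documents
              GROUP BY document_id) AS latest
          ON d.document_id = latest.document_id
         AND d.version     = latest.max_version;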

    Read the article

  • Distributing points over a surface within boundaries

    - by vise
    I'm interested in a way (an algorithm) of distributing a predefined number of points over a four-sided surface like a square. The main issue is that each point has to keep a minimum and maximum proximity to the others (random between two predefined values). Basically, the distance between any two points should not be closer than, say, 2, or further than 3. My code will be implemented in Ruby (the points are locations, the surface is a map), but any ideas or snippets are definitely welcome, as all my ideas involve a fair amount of brute force.
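
    A brute-force rejection-sampling sketch in Ruby, since that's the target language. It enforces the minimum spacing exactly and reads the maximum as "each new point lands within 3 of its nearest neighbour"; all constants are illustrative:

        MIN_D = 2.0
        MAX_D = 3.0
        SIZE  = 20.0    # side of the square map
        COUNT = 30      # points wanted

        def dist(a, b)
          Math.hypot(a[0] - b[0], a[1] - b[1])
        end

        points   = [[rand * SIZE, rand * SIZE]]
        attempts = 0
        # The attempt cap gives up rather than spin forever on infeasible settings.
        while points.size < COUNT && attempts < 100_000
          attempts += 1
          candidate = [rand * SIZE, rand * SIZE]
          nearest = points.map { |p| dist(candidate, p) }.min
          points << candidate if nearest >= MIN_D && nearest <= MAX_D
        end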

    Read the article

  • Why is my masm32 program crashing whenever I try using interrupts?

    - by incrediman
    Here's the code:

        .386                    ; target for maximum compatibility
        .model small,stdcall    ; model
        .code
        main:
            int 20h
        END main

    Result: http://img705.imageshack.us/img705/3738/resultom.png - "test.exe has stopped working", always right when it reaches the interrupt. This is the interrupt I'm trying to use; it should simply exit the program. Others I've tried include character input/output, etc. Nothing works. I'm on Windows 7, using masm32 with the WinAsm IDE. There are so many cool things it seems I should be able to do with interrupts, yet it crashes whenever I try to use one - always in the same way. This seems related and possibly useful: http://stackoverflow.com/questions/1414260/dos-interrupt-in-masm-x86-assembly-crashing ...but I haven't really been able to figure anything out from it. Any suggestions?
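
    For what it's worth, int 20h is a DOS service, and a 32-bit Windows executable runs in protected mode where DOS interrupts fault in exactly this way. A sketch of the Win32-style exit instead, with include paths assuming a default masm32 install:

        .386
        .model flat, stdcall        ; Win32 programs use the flat model
        option casemap:none
        include \masm32\include\windows.inc
        include \masm32\include\kernel32.inc
        includelib \masm32\lib\kernel32.lib

        .code
        main:
            invoke ExitProcess, 0   ; exit via the API, not a DOS interrupt
        end main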

    Read the article

  • Programmatically detect number of physical processors/cores or if hyper-threading is active on Window

    - by HTASSCPP
    I have a multithreaded C++ application that runs on Windows, Mac and a few Linux flavours. To make a long story short: in order for it to run at maximum efficiency, I have to be able to instantiate a single thread per physical processor/core. Creating more threads than there are physical processors/cores degrades the performance of my program considerably. I can already correctly detect the number of logical processors/cores on all three of these platforms. To detect the number of physical processors/cores correctly, I'll have to detect whether hyper-threading is supported AND active. My question, therefore, is whether there is a way to detect that hyper-threading is supported AND ENABLED? If so, how exactly?
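
    On the Windows side, a sketch using GetLogicalProcessorInformation: counting RelationProcessorCore entries gives the physical cores, and any excess logical processors implies hyper-threading is enabled. Mac and Linux need their own equivalents (sysctl, /proc/cpuinfo), and error handling is trimmed:

        #include <windows.h>
        #include <vector>
        #include <cstdio>

        int main() {
            // First call reports the buffer size needed, second call fills it.
            DWORD len = 0;
            GetLogicalProcessorInformation(NULL, &len);
            std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> buf(
                len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
            GetLogicalProcessorInformation(&buf[0], &len);

            DWORD physical = 0;
            for (size_t i = 0; i < buf.size(); ++i)
                if (buf[i].Relationship == RelationProcessorCore)
                    ++physical;

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            std::printf("physical=%lu logical=%lu hyperthreading=%s\n",
                        physical, si.dwNumberOfProcessors,
                        physical < si.dwNumberOfProcessors ? "yes" : "no");
            return 0;
        }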

    Read the article

  • How to deal with RGB to YUV conversion

    - by maximus
    The formula says:

        Y =  0.299   * R + 0.587   * G + 0.114   * B;
        U = -0.14713 * R - 0.28886 * G + 0.436   * B;
        V =  0.615   * R - 0.51499 * G - 0.10001 * B;

    What if, for example, the U variable becomes negative?

        U = -0.14713 * R - 0.28886 * G + 0.436 * B;

    Assume maximum values for R and G (ones) and B = 0. I am interested in implementing this conversion function in OpenCV, so how should I deal with negative values - by using a float image? Please explain; maybe I don't understand something.
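
    A sketch of the usual byte-format answer: U and V are signed by nature (with R, G, B in [0, 1], U spans roughly ±0.436 and V ±0.615), so 8-bit pipelines either keep floats or store the chroma offset by half the range, which is what YCbCr-style byte formats do. Illustrative NumPy code, with rgb assumed to hold floats in [0, 1]:

        import numpy as np

        def rgb_to_yuv_u8(rgb):
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y = 0.299 * r + 0.587 * g + 0.114 * b
            u = -0.14713 * r - 0.28886 * g + 0.436 * b
            v = 0.615 * r - 0.51499 * g - 0.10001 * b
            # Shift the signed chroma channels to the middle of the range
            # before quantising, so negative values survive the round trip.
            out = np.stack([y, u + 0.5, v + 0.5], axis=-1)
            return np.clip(out * 255.0, 0, 255).astype(np.uint8)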

    Read the article

  • how to preload more than one but not all images of a slideshow with jquery

    - by wtip
    I'd like to create a web-based stop-motion video player - basically a slideshow that shows 2-4 images per second. Each image might be a maximum of 20KB. I don't want to preload all the images in the slideshow as there might be thousands, but I need to preload more than just the next image, because at this playback speed the browser needs to be loading more than one image at a time. I've been looking at using the jQuery Cycle plugin (http://malsup.com/jquery/cycle/) with an addSlide-type function but don't know how to make it work. Would something like this work?

    - Slideshow starts
    - An image is played back
    - The preloader attempts to load up to the next 60 images
    - Playback waits for the next image in line to completely load, but does not wait for the other 59

    The playback/preloading order is important for this application.
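
    One shape this can take, independent of the Cycle plugin: a sliding-window preloader, sketched below with an illustrative URL list and window size:

        var urls   = ['frame0001.jpg', 'frame0002.jpg' /* ... thousands ... */];
        var WINDOW = 60;        // how far ahead to load
        var cache  = {};

        function preload(from) {
            var to = Math.min(from + WINDOW, urls.length);
            for (var i = from; i < to; i++) {
                if (!cache[i]) {
                    cache[i] = new Image();
                    cache[i].src = urls[i];   // fetches in the background
                }
            }
        }

        // Show frame n as soon as *it* is ready, without waiting for the
        // rest of the window to finish loading.
        function show(n, $img) {
            preload(n + 1);
            var img = cache[n] || (cache[n] = new Image());
            if (!img.src) img.src = urls[n];
            if (img.complete) {
                $img.attr('src', urls[n]);
            } else {
                $(img).one('load', function() { $img.attr('src', urls[n]); });
            }
        }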

    Read the article

  • Why does jruby complain about valid java_opts

    - by brad
    I have set my Java min/max heap sizes to be the same, as outlined in the Sun docs for precise heap sizing, using the following:

        -Xms768m -Xmx768m

    This works fine when I start Tomcat, but if I run jruby from the command line it complains:

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    I read in the jruby docs about some -J-X params, but it seems silly that I would need to explicitly override my normal JVM settings. The problem arises when I deploy: I try running jruby -S rake db:migrate on my server and it complains. Is it true that I need to explicitly override my JVM settings when running jruby? It seems as though ANY Xms/Xmx values cause jruby to complain. Update: so it seems that some settings do in fact work. For instance, all of these work:

        Xmx256m Xms256m
        Xmx512m Xms256m
        Xmx512m Xms500m

    But these don't:

        Xmx512m Xms512m
        Xmx512m Xms501m
        Xmx768m Xms512m
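
    For reference, the per-invocation form the jruby docs describe - passing JVM flags through with a -J prefix - would look something like this:

        jruby -J-Xms768m -J-Xmx768m -S rake db:migrate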

    Read the article

  • Declaring data types in SQLite

    - by dan04
    I'm familiar with how type affinity works in SQLite: you can declare column types as anything you want, and all that matters is whether the type name contains "INT", "CHAR", "FLOA", etc. But is there a commonly used convention on what type names to use? For example, if you have an integer column, is it better to distinguish between TINYINT, SMALLINT, MEDIUMINT, and BIGINT, or just declare everything as INTEGER? So far, I've been using the following:

        INTEGER
        REAL
        CHAR(n)     -- for strings with a known fixed width
        VARCHAR(n)  -- for strings with a known maximum width
        TEXT        -- for all other strings
        BLOB
        BOOLEAN
        DATE        -- string in "YYYY-MM-DD" format
        TIME        -- string in "HH:MM:SS" format
        TIMESTAMP   -- string in "YYYY-MM-DD HH:MM:SS" format

    (Note that the last three are contrary to the type affinity.)
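
    As a quick affinity check, a sketch of how some of those names map under SQLite's substring rules (table and columns are illustrative):

        CREATE TABLE example (
            id      INTEGER PRIMARY KEY,  -- contains "INT"  -> INTEGER affinity
            name    VARCHAR(80),          -- contains "CHAR" -> TEXT affinity
            price   REAL,                 -- contains "REAL" -> REAL affinity
            created TIMESTAMP             -- no keyword      -> NUMERIC affinity
        );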

    Read the article

  • Are there any JSF components for implementing breadcrumb navigation?

    - by kazanaki
    As far as I know there are two "kinds" of breadcrumbs.

    The static/hierarchical one:

    - Works like a stack
    - Entries are pushed when the user goes "deeper" into the site
    - Entries are popped when the user goes back "up" the site
    - Is the same for all users (for a given page)
    - Shows location rather than history

    A simple example would be: HOME - BIG CATEGORY - SMALL CATEGORY - ARTICLE

    The dynamic/historical one:

    - Works like a queue
    - Entries are pushed at the end when the user goes to another page
    - Entries are removed from the front when the maximum size is reached
    - Is different for each user, since it is personalized
    - Shows timeline/history instead of location

    A simple example would be: SMALL CATEGORY - HOME - BIG CATEGORY - HOME

    The question is: are there any ready-made JSF components for these types of navigation?

    Read the article

  • Scala regex Named Capturing Groups

    - by Brent
    In scala.util.matching.Regex, trait MatchData, I see there is support for group names (named capturing groups). But since Java does not support group names until version 7, as I understand it, Scala 2.8.0.RC4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6) gives me this exception:

        scala> val pattern = """(?<login>\w+) (?<id>\d+)""".r
        java.util.regex.PatternSyntaxException: Look-behind group does not have an obvious maximum length near index 11
        (?<login>\w+) (?<id>\d+)
                   ^
            at java.util.regex.Pattern.error(Pattern.java:1713)
            at java.util.regex.Pattern.group0(Pattern.java:2488)
            at java.util.regex.Pattern.sequence(Pattern.java:1806)
            at java.util.regex.Pattern.expr(Pattern.java:1752)
            at java.util.regex.Pattern.compile(Pattern.java:1460)

    So the question is: are named capturing groups supported in Scala? If so, are there any examples out there? If not, I might look into the named-regexp lib from clement.denis.
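
    Until the JVM's (?<name>...) syntax is available, Scala's own Regex constructor can attach names to plain positional groups, which may cover the use case; a sketch:

        import scala.util.matching.Regex

        // Names are bound to the positional groups, no (?<name>...) needed.
        val pattern = new Regex("""(\w+) (\d+)""", "login", "id")

        val m = pattern.findFirstMatchIn("jdoe 42").get
        println(m.group("login"))   // jdoe
        println(m.group("id"))      // 42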

    Read the article

  • 32 core (each physical core) 2.2 GHz or 12 core (6 physical cores) 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built in C#) and had the client upgrade from a 12 core 3.0 GHz machine (Intel) to a 32 core 2.2 GHz machine (AMD). The PassMark benchmark results were significantly higher when using multiple cores for integer, floating-point and other calculations, while for single-core calculation it was a bit slower than comparable machines (including ones configured like the 12 core one). It also comes with 64 GB RAM (four times the other machine) and a much faster SSD. So after configuring and running the application on that machine, not only did it not perform as well, it was significantly slower - we're talking 30 seconds to 1 minute slower on an app that usually completes processing within 5-20 seconds. The application uses the TPL's max degree of parallelism, which I've tried setting to the number of cores and also to half of that. I've also tried running single-threaded and without setting any limits on parallel threading. While the hardware may have some issues, I am wondering if the CPU clock speed is the issue. I can overclock to 3.0 GHz, but is that even a good idea?

    Server info - AMD: http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks (it seems that benchmark was wrong to start with - officially)
    Intel: i7 3930k
    OS (same on both): Windows 7 Professional 64-bit
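
    For reference, a sketch of the TPL knob in question; the loop bounds and body are illustrative stand-ins for the app's real work:

        using System;
        using System.Threading.Tasks;

        class Demo
        {
            static void Main()
            {
                var options = new ParallelOptions
                {
                    // The setting the question tried at "number of cores" and half that.
                    MaxDegreeOfParallelism = Environment.ProcessorCount
                };
                Parallel.For(0, 1000, options, i =>
                {
                    // stand-in for per-item processing
                });
            }
        }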

    Read the article

  • Good way to get the key of the highest value of a Dictionary in C#

    - by Arda Xi
    I'm trying to get the key of the maximum value in the Dictionary<double, string> results. This is what I have so far:

        double max = results.Max(kvp => kvp.Value);
        return results.Where(kvp => kvp.Value == max).Select(kvp => kvp.Key).First();

    However, since this seems a little inefficient, I was wondering whether there was a better way to do this.
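
    A common single-pass alternative is Aggregate, sketched here assuming the values are the numeric side of the pair (as the snippet implies) and the dictionary is non-empty:

        using System.Collections.Generic;
        using System.Linq;

        static class DictExtensions
        {
            // One enumeration instead of two: keep whichever pair has the
            // larger value. Throws on an empty dictionary.
            public static TKey KeyOfMaxValue<TKey>(this Dictionary<TKey, double> d)
            {
                return d.Aggregate((l, r) => l.Value >= r.Value ? l : r).Key;
            }
        }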

    Read the article

  • Predict Stock Market Values

    - by mrlinx
    I'm building a web-semantics project that gathers the maximum amount of historic data about a given company and tries to predict its future stock market values. For data I have the historic stock values (not normalized), news (0 to 1 polarity) and subjective content (also 0 to 1 polarity). What is the best AI system to train and use for this kind of objective? Is a simple NN with back-propagation training the best I can hope for? Update: everyone is concerned about the quality of this system. Although I'm pretty sure the system will be no better than a random prediction (or even worse), this is a school project about artificial intelligence and web semantics. Therefore I'm only concerned with picking the best kind of training method for the data I have (NN, RBF, SVM, Bayes, neuro-fuzzy, etc.). It's not about making money.

    Read the article

  • Java App Engine - ranked counter

    - by Richard
    I understand the sharded counter, here: http://code.google.com/appengine/articles/sharding_counters.html The problem is that a simple counter will not work in my application. I am sorting my entities by a particular variable, so what I get back is not so much a count as a rank. My current method is:

        SELECT COUNT(this) FROM Entity.class WHERE value <= ?

    Result + 1 is then the rank of the parameter relative to the value variable across the persistent Entity objects. The limitation is that the highest rank returned is 1001, because count() can give a maximum of 1000. The reason I cannot store the rank on the Entity object is that the ranks are updated very often, and re-setting the rank variable would be much too costly. Any ideas on the best way to accomplish this?

    Read the article

  • Using numeric_limits::max() in constant expressions

    - by FireAphis
    Hello, I would like to define inside a class a constant whose value is the maximum possible int. Something like this:

        class A {
            ...
            static const int ERROR_VALUE = std::numeric_limits<int>::max();
            ...
        };

    This declaration fails to compile with the following message:

        numeric.cpp:8: error: 'std::numeric_limits<int>::max()' cannot appear in a constant-expression
        numeric.cpp:8: error: a function call cannot appear in a constant-expression

    I understand why this doesn't work, but two things look weird to me:

    - It seems to me a natural decision to use the value in constant expressions. Why did the language designers decide to make max() a function, thus not allowing this usage?
    - The spec claims in 18.2.1 that "For all members declared static const in the numeric_limits template, specializations shall define these values in such a way that they are usable as integral constant expressions." Doesn't that mean I should be able to use it in my scenario, and doesn't it contradict the error message?

    Thank you.
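
    Two workarounds that compile under C++03, sketched below; for completeness, C++11 later made max() constexpr, which lets the original declaration through:

        #include <limits>
        #include <climits>

        class A {
        public:
            // Value supplied out of class; note this form can't itself be
            // used where a compile-time constant is required (array bounds etc.).
            static const int ERROR_VALUE;
            // The <climits> macro is a constant expression, so in-class init works.
            static const int ERROR_VALUE_2 = INT_MAX;
        };

        // In exactly one translation unit:
        const int A::ERROR_VALUE = std::numeric_limits<int>::max();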

    Read the article

  • Web Services: more frequent "small" calls, or less frequent "big" calls

    - by Klay
    In general, is it better to have a web application make lots of calls to a web service getting smaller chunks of data back, or to have the web app make fewer calls and get larger chunks of data? In particular, I'm building a Silverlight app that needs to get large amounts of data back from the server in response to a query created by a user. Each query could return anywhere from a few hundred records to a few thousand. Each record has around thirty fields of mostly decimal-type data. I've run into the situation before where the payload size of the response exceeded the maximum allowed by the service. I'm wondering whether it's better (more efficient for the server/client/web service) to cut this payload vertically--getting all values for a single field with each call--or horizontally--getting batches of complete records with each call. Or does it matter?

    Read the article

  • prevent bots from querying my database several times

    - by Alain
    Hi all, I'm building an application that is a kind of registry. Think of a dictionary: you look up a word, and it returns something if the word is found. Now, this registry is going to store valuable information about companies, and some people could be tempted to fetch the complete listing. My application uses EJB 3.0 beans that reply to web service calls. So I was thinking of permitting a maximum of 10 queries per IP address per day, storing the IP address and a counter in a table that would be emptied by a script every night. Is it a good idea/practice to do so? If yes, how can I get the IP address on the EJB side? Is there a better way to prevent someone from getting all the data out of my database? I've also thought about CAPTCHA, but I think it's a pain for the user, and sometimes they are difficult to read even for real humans. Hope it's all clear, since English isn't my first language... Thanks Alain
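
    A sketch of one way to see the caller's IP from a JAX-WS endpoint backed by an EJB: pull the servlet request out of the message context. Class and method names are illustrative, it only works for HTTP transports, and behind a proxy getRemoteAddr() returns the proxy's address:

        import javax.annotation.Resource;
        import javax.jws.WebService;
        import javax.servlet.http.HttpServletRequest;
        import javax.xml.ws.WebServiceContext;
        import javax.xml.ws.handler.MessageContext;

        @WebService
        public class RegistryService {

            @Resource
            private WebServiceContext wsContext;

            public String lookup(String word) {
                HttpServletRequest req = (HttpServletRequest)
                        wsContext.getMessageContext().get(MessageContext.SERVLET_REQUEST);
                String callerIp = req.getRemoteAddr();
                // ... check/increment the per-IP counter table here,
                //     refusing the call past 10 queries for the day ...
                return doLookup(word);
            }

            private String doLookup(String word) {
                return word;  // placeholder
            }
        }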

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I'm trying to configure log4j.xml so that the file rolls over based on file size, and the rolled file's name follows a pattern like "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log". I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz It finally worked for me only when I added:

        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

    to the method:

        public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength)

    which makes it work. The question is whether there is a better way to make it work, since this method is called many times and the sleep slows my program down. Here is the code:

        package com.mypack.rolling;

        import org.apache.log4j.rolling.RollingPolicy;
        import org.apache.log4j.rolling.RolloverDescription;
        import org.apache.log4j.rolling.TimeBasedRollingPolicy;

        /**
         * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as
         * RollingPolicy and NOT as TriggeringPolicy.
         *
         * This allows us to combine this class with a size-based triggering policy
         * (decision to roll based on size, name of rolled files based on time).
         */
        public class CustomTimeBasedRollingPolicy implements RollingPolicy {

            TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy();

            /**
             * Set file name pattern.
             * @param fnp file name pattern.
             */
            public void setFileNamePattern(String fnp) {
                timeBasedRollingPolicy.setFileNamePattern(fnp);
            }

            /*
            public void setActiveFileName(String fnp) {
                timeBasedRollingPolicy.setActiveFileName(fnp);
            }
            */

            /**
             * Get file name pattern.
             * @return file name pattern.
             */
            public String getFileNamePattern() {
                return timeBasedRollingPolicy.getFileNamePattern();
            }

            public RolloverDescription initialize(String file, boolean append) throws SecurityException {
                return timeBasedRollingPolicy.initialize(file, append);
            }

            public RolloverDescription rollover(String activeFile) throws SecurityException {
                return timeBasedRollingPolicy.rollover(activeFile);
            }

            public void activateOptions() {
                timeBasedRollingPolicy.activateOptions();
            }
        }

        package com.mypack.rolling;

        import org.apache.log4j.Appender;
        import org.apache.log4j.helpers.OptionConverter;
        import org.apache.log4j.rolling.TriggeringPolicy;
        import org.apache.log4j.spi.LoggingEvent;
        import org.apache.log4j.spi.OptionHandler;

        /**
         * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept
         * a human-friendly value for maximumFileSize, e.g. "10MB".
         *
         * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that
         * class is final.
         */
        public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler {

            /**
             * Rollover threshold size in bytes; let 10 MB be the default max size.
             */
            private long maximumFileSize = 10 * 1024 * 1024;

            /**
             * Set the maximum size that the output file is allowed to reach before
             * being rolled over to backup files.
             *
             * In configuration files, the MaxFileSize option takes a long integer in
             * the range 0 - 2^63. You can specify the value with the suffixes "KB",
             * "MB" or "GB" so that the integer is interpreted as being expressed in
             * kilobytes, megabytes or gigabytes respectively. For example, the value
             * "10KB" will be interpreted as 10240.
             *
             * @param value the maximum size that the output file is allowed to reach
             */
            public void setMaxFileSize(String value) {
                maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1);
            }

            public long getMaximumFileSize() {
                return maximumFileSize;
            }

            public void setMaximumFileSize(long maximumFileSize) {
                this.maximumFileSize = maximumFileSize;
            }

            public void activateOptions() {
            }

            public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                    String filename, long fileLength) {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                boolean result = (fileLength >= maximumFileSize);
                return result;
            }
        }

    and the log4j.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">

            <appender name="console" class="org.apache.log4j.ConsoleAppender">
                <param name="Target" value="System.out" />
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
                </layout>
            </appender>

            <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
                <param name="file" value="C:/temp/test/test_log4j.log" />
                <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy">
                    <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" />
                </rollingPolicy>
                <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy">
                    <param name="MaxFileSize" value="200KB" />
                </triggeringPolicy>
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
                </layout>
            </appender>

            <logger name="com.mypack.myrun" additivity="false">
                <level value="debug" />
                <appender-ref ref="FILE" />
            </logger>

            <root>
                <priority value="debug" />
                <appender-ref ref="console" />
            </root>

        </log4j:configuration>

    Read the article

  • Consolidate data from many different databases into one with minimum latency

    - by NTDLS
    I have 12 databases totaling roughly 1.0 TB, each on a different physical server running SQL Server 2005 Enterprise, all with the exact same schema. I need to offload this data into a separate, single database so that we can use it for other purposes (reporting, web services, etc.) with a maximum of 1 hour latency. It should also be noted that these servers are all in the same rack, connected by gigabit connections, and that inserts to the databases are minimal (avg. 2500 records/hour). The current method is very flaky: the data is currently replicated (SQL Server transactional replication) from each of the 12 servers to a database on another server (yes, 12 different employee tables from 12 different servers into a single employee table on a different server). Every table has a primary key and the rows are unique across all tables (there is a FacilityID in each table). What are my options? There has to be a simple way to do this.

    Read the article
