Search Results

Search found 3240 results on 130 pages for 'groupwise maximum'.

Page 36/130 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Does Google Maps API v3 allow larger zoom values?

    - by Dr1Ku
    If you use the satellite GMapType with this Google-provided example in v3 of the API, the maximum zoom level has a scale of 2m / 10ft, whereas the v2 version of another Google-provided example (I had to use a different one, since control-simple doesn't have the scale control) tops out at a scale of 20m / 50ft. Is this a new "feature" of v3? I should mention that I tested the examples at the same GLatLng regions, so my guess is that tile detail level doesn't influence it; am I mistaken?
    As mentioned in another question, v3 is to be considered of very Labs-y/beta quality, so use in production should be discouraged for the time being. I've been drawn to the subject since I have to "increase the zoom level of a GMap". The answers there seem to suggest using GTileLayer, and I'm considering GMapCreator, although that will involve some effort.
    What I'm trying to achieve is a larger zoom level; a scale of 2m / 10ft would be perfect. I have a map where the tiles aren't that hi-res and quite a few markers. Because the area doesn't have hi-res tiles, the distance between the markers is really tiny, which creates some problematic overlapping. Or better yet, how can you create a custom map that allows higher zoom levels, as around the Google campus where the 2m / 10ft scale is achieved, without using your own tile server? I've seen an example in a fellow Stack Overflower's GMaps sandbox where the tiles are manually created based on the zoom level.
    I don't think I'm making much more sense, so I'll end this big question here; I've been wandering around trying to find a solution for hours now. I hope someone comes to my aid, though! Thank you in advance!

    Read the article

  • WCF Service Throttling

    - by Mubashar Ahmad
    Dear all, I have a WCF service deployed in a console app with BasicHttpBinding and SSL enabled on the port using the netsh command, and the following attribute is set as well:
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    I have also set the throttling behavior to:
        <serviceThrottling maxConcurrentCalls="2147483647" maxConcurrentSessions="2147483647" maxConcurrentInstances="2147483647" />
    On the other hand, I have created a test client (for load testing) that initiates multiple clients simultaneously (multiple threads) and performs transactions against the server. Everything seems fine and works properly, but on the server the CPU utilization doesn't grow, so I added some logging to view the number of concurrent calls to the server and found that it never went over 6. I have reviewed the performance-counter logging code more than twice and it seems fine to me. So I want to ask: where is the problem in this situation? One more thing: I haven't specified any kind of ContextMode or ConcurrencyMode yet.
    After this post I noticed that whenever I start another instance of the test client, my concurrent server-calls counter increases by 2: if I am running only one instance, the maximum concurrent received calls will be 2; with two instances the same value goes to 4, and so on. Is there a limit on the number of WCF calls from one process? Looking for help, Mubashar.
    Added on 17 March: Today I ran another test with one test client (with 50 concurrent users) on the same machine the server is running on, and this time I got exactly the result I wanted, i.e. maximum concurrent calls received by the server = 50. But I need to get the same from other machines as well. Can anybody help me with this?

    Read the article

  • iPad receiving memory warning with low memory use

    - by Fer
    I have a UIWebView with an HTML document; the HTML has several images and text, but just displaying it gives me a memory warning. So I ran some tests: the same HTML with the images at full size, and then with the same images reduced to 50% of their original size (for the reduced set I went to Preview and scaled every image down by 50%).
    The surprising part is the 50% test: you can see that even with 16 images the memory peak is 4.90MB. That's really surprising. Note that these values are not always the same; they change, but there isn't a huge difference between runs. In the 50% case, with 8 and 16 images, a memory warning still sometimes appears even though memory use is low, but the performance improvement over the full-size images is noticeable.
    Full size (standing still = memory after scrolling the whole article):
        1 image   = [standing still 5MB]     [rotating 5.6MB]
        2 images  = [standing still 6.99MB]  [rotating 7.7MB]
        3 images  = [standing still 9.04MB]  [rotating 10.9MB]
        4 images  = [standing still 10.89MB] [rotating 13.20MB]
        8 images  = [standing still 23.14MB] [rotating 25.20MB] (sometimes crashes)
        16 images = [standing still 27.14MB and the app crashes]
    Reduced to 50%:
        1 image   = [standing still 3.2MB]   [rotating 3.67MB]
        2 images  = [standing still 3.2MB]   [rotating 3.70MB]
        3 images  = [standing still 3.3MB]   [rotating 3.79MB]
        4 images  = [standing still 3.3MB]   [rotating 3.80MB]
        8 images  = [standing still 4.29MB]  [rotating 4.63MB] (sometimes crashes)
        16 images = [standing still 4.79MB]  [rotating 4.90MB] (sometimes crashes)
    My question is: the app sometimes crashed with 16 small images. Why, when the memory use was much lower? What is the limit on memory use? These numbers are helpful only if you also know the maximum, but the maximum seems different with the 50% images: 13.2MB works for large images and 3.8MB for small images, and anything higher sometimes crashes. That makes no sense.

    Read the article

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously.
    I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to be coded to emit queues of bite-sized task items... I want to treat them as black boxes.
    For the moment I can live with these downsides:
        Maximum # of threads tends to be O(1000)
        Cost per thread is O(1MB)
    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.
    Question #1: Is that a good idea at all? One certain downside is it won't work on Linux (the docs say no QThread::setPriority() there).
    Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario?
    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)
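    A rough Python sketch of the priority-juggling idea in Question #1, purely to illustrate the bookkeeping: every tick a different window of ideal_count workers is promoted to Normal priority and the rest are demoted to Idle. The set_priority callback is a placeholder for whatever the real framework provides (e.g. QThread::setPriority); it is not implemented here.

        import itertools
        import time

        def juggle(workers, ideal_count, ticks=100, period=0.05,
                   set_priority=lambda worker, priority: None):
            """Rotate which workers hold 'normal' priority; everyone else gets 'idle'."""
            offsets = itertools.cycle(range(len(workers)))
            for _ in range(ticks):
                start = next(offsets)
                active = {(start + i) % len(workers) for i in range(ideal_count)}
                for idx, worker in enumerate(workers):
                    set_priority(worker, "normal" if idx in active else "idle")
                time.sleep(period)

        if __name__ == "__main__":
            # ten dummy "workers", two allowed at Normal priority at a time
            juggle(workers=list(range(10)), ideal_count=2, ticks=3, period=0.0,
                   set_priority=lambda w, p: print(w, p))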

    Read the article

  • Create ntp time stamp from gettimeofday

    - by krunk
    I need to calculate an NTP timestamp using gettimeofday. Below is how I've done it, with comments on the method. Does it look good to you guys? (minus error checking). Also, here's a codepad link.

        #include <unistd.h>
        #include <stdint.h>
        #include <sys/time.h>

        const unsigned long EPOCH = 2208988800UL;    // delta between epoch time and ntp time
        const double NTP_SCALE_FRAC = 4294967295.0;  // maximum value of the ntp fractional part

        int main()
        {
            struct timeval tv;
            uint64_t ntp_time;
            uint64_t tv_ntp;
            double tv_usecs;

            gettimeofday(&tv, NULL);

            tv_ntp = tv.tv_sec + EPOCH;

            // convert tv_usec to a fraction of a second
            // next, we multiply this fraction times the NTP_SCALE_FRAC, which represents
            // the maximum value of the fraction until it rolls over to one. Thus,
            // .05 seconds is represented in NTP as (.05 * NTP_SCALE_FRAC)
            tv_usecs = (tv.tv_usec * 1e-6) * NTP_SCALE_FRAC;

            // next we take the tv_ntp seconds value and shift it 32 bits to the left. This puts the
            // seconds in the proper location for NTP time stamps. I recognize this method has an
            // overflow hazard if used after around the year 2106.
            // Next we do a bitwise AND with the tv_usecs cast as a uint32_t, dropping the fractional
            // part
            ntp_time = ((tv_ntp << 32) & (uint32_t)tv_usecs);
        }
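    For comparison, here is a minimal Python sketch of the packing the comments describe, assuming the goal is seconds-since-1900 in the high 32 bits and the scaled fraction in the low 32 bits; note that it combines the two halves with a bitwise OR, whereas the snippet above uses AND. The constants mirror the ones in the post.

        import time

        NTP_EPOCH_DELTA = 2208988800      # seconds between 1900-01-01 and 1970-01-01
        NTP_SCALE_FRAC = 4294967295.0     # maximum value of the 32-bit fractional part

        def ntp_timestamp():
            now = time.time()                              # float seconds since the Unix epoch
            secs = int(now) + NTP_EPOCH_DELTA              # NTP seconds (since 1900)
            frac = int((now - int(now)) * NTP_SCALE_FRAC)  # fractional second, scaled to 32 bits
            return (secs << 32) | frac                     # high word = seconds, low word = fraction

        if __name__ == "__main__":
            print(hex(ntp_timestamp()))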

    Read the article

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0.

        namespace Sample.Data.Store.Entities
        {
            public partial class StoreDB
            {
                public override int SaveChanges(System.Data.Objects.SaveOptions options)
                {
                    for (Int32 attempt = 1; ; )
                    {
                        try
                        {
                            return base.SaveChanges(options);
                        }
                        catch (SqlException sqlException)
                        {
                            // Increment tries
                            attempt++;

                            // Maximum number of tries
                            Int32 maxRetryCount = 5;

                            // Throw if we have reached the maximum number of retries
                            if (attempt == maxRetryCount)
                                throw;

                            // Determine if we should retry or abort.
                            if (!RetryLitmus(sqlException))
                                throw;
                            else
                                Thread.Sleep(ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }

                static Int32 ConnectionRetryWaitSeconds(Int32 attempt)
                {
                    Int32 connectionRetryWaitSeconds = 2000;

                    // Backoff throttling
                    connectionRetryWaitSeconds = connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt);

                    return (connectionRetryWaitSeconds);
                }

                /// <summary>
                /// Determine from the exception if the execution
                /// of the connection should be attempted again
                /// </summary>
                /// <param name="sqlException">SqlException</param>
                /// <returns>True if a retry is needed, false if not</returns>
                static Boolean RetryLitmus(SqlException sqlException)
                {
                    switch (sqlException.Number)
                    {
                        // The service has encountered an error
                        // processing your request. Please try again.
                        // Error code %d.
                        case 40197:
                        // The service is currently busy. Retry
                        // the request after 10 seconds. Code: %d.
                        case 40501:
                        // A transport-level error has occurred when
                        // receiving results from the server. (provider:
                        // TCP Provider, error: 0 - An established connection
                        // was aborted by the software in your host machine.)
                        case 10053:
                            return (true);
                    }

                    return (false);
                }
            }
        }

    The problem: how can I get StoreDB.SaveChanges to retry on a new DB context after an error has occurred? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart
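    As a language-agnostic illustration of the "retry on a new context" idea, here is a small Python sketch; make_context, apply_changes, save_changes and dispose are hypothetical hooks standing in for the EF-specific calls. Each attempt builds a fresh context so no failed attempt reuses stale state, and failed attempts back off exponentially before retrying.

        import time

        class TransientDbError(Exception):
            """Hypothetical stand-in for a provider exception carrying an error number."""
            def __init__(self, number):
                super().__init__(number)
                self.number = number

        RETRYABLE = {40197, 40501, 10053}   # same transient error numbers as in the post

        def save_with_retry(make_context, apply_changes, max_attempts=5):
            for attempt in range(1, max_attempts + 1):
                ctx = make_context()            # build a fresh context for every attempt
                try:
                    apply_changes(ctx)          # re-apply the pending changes to it
                    return ctx.save_changes()
                except TransientDbError as err:
                    if attempt == max_attempts or err.number not in RETRYABLE:
                        raise
                    time.sleep(2 ** attempt)    # exponential backoff between attempts
                finally:
                    ctx.dispose()               # hypothetical cleanup hook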

    Read the article

  • Finding min and max values

    - by user01
    I am trying to write a simple program that reads integers from a data file and outputs the minimum and maximum values. The first integer of the input file indicates how many more integers will be read, and then the integers are listed. My program compiles without any problem; however, it returns values that are not part of the set in my test data file. Could anyone help diagnose this issue?

        int main() {
            FILE *fp = fopen("data.txt", "r");
            int count;
            int num;
            int i;
            int min = 0;
            int max = 0;

            fscanf(fp, "%d", &count);

            for (i = 0; i < count; i++)
                fscanf(fp, "%d", &i);
            {
                if (num < min)
                    min = num;
                if (num > max)
                    max = num;
            }

            fclose(fp);
            printf("Of the %d integers, the minimum value is %d and the maximum value is %d \n",
                   count, min, max);
        }
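    For reference, a short Python sketch of the intended algorithm: read the count, then read that many integers and track a running minimum and maximum. Note that the running min and max are seeded from the first value read rather than from 0.

        def min_max(path):
            """Read the leading count, then that many integers; return (count, min, max)."""
            with open(path) as fp:
                tokens = fp.read().split()
            count = int(tokens[0])
            values = [int(t) for t in tokens[1:count + 1]]
            lo = hi = values[0]                 # seed from the first value, not from 0
            for v in values[1:]:
                lo = min(lo, v)
                hi = max(hi, v)
            return count, lo, hi

        if __name__ == "__main__":
            count, lo, hi = min_max("data.txt")
            print("Of the %d integers, the minimum value is %d and the maximum value is %d"
                  % (count, lo, hi))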

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in c#

    - by Keith
    How can I easily/quickly replace floats with doubles (for example) when compiling to two different targets that use these two particular choices of primitive types?
    Discussion: I have a large amount of C# code under development that I need to compile to use float, double or decimal alternately, depending on the use case of the target assembly. Using something like "class MYNumber : Double", so that it is only necessary to change one line of code, does not work, as Double is sealed, and obviously there is no #define in C#. Peppering the code with #if/#else statements is also not an option; there is just too much supporting Math-operator and related code using these particular primitive types. I am at a loss on how to do this apparently simple task, thanks!
    Edit: Just a quick comment in relation to the boxing mentioned in Kyle's reply: unfortunately I need to avoid boxing, mainly since floats are being chosen when maximum speed is required, and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable). Boxing would probably rule out decimals as a valid choice and defeat the purpose somewhat.
    Edit 2: For reference, for those suggesting generics as a possible answer to this question, note that there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations.

    Read the article

  • PyQt4: Hide widget and resize window

    - by masterLoki
    Hi everyone, I'm working with several widgets but the solution just won't come out. What I have is a series of buttons in a series of QHBoxLayouts. Some buttons are hidden by default, but they appear when needed. To solve space issues, all buttons have a minimum and maximum size so they always look well packed. I also have a QTextEdit, visible by default, which sits in a QVBoxLayout together with the QHBoxLayouts that hold the buttons.
    So the problem is this: when I hide the QTextEdit and show the other buttons, the window won't resize. After searching I found that using self.ui.layout().setSizeConstraint(QtGui.QLayout.SetFixedSize) will do the trick, but the problem is that it takes the maximum size from all widgets, so I end up with a huge window. Doing self.ui.layout().setSizeConstraint(QtGui.QLayout.SetMinAndMaxSize) won't resize the window.
    I have already tried self.ui.resize(0,0); when doing a self.ui.layout().update() I got False (which I find odd, http://doc.trolltech.com/4.6/qlayout.html#activate), and I also tried to override sizeHint(), but it keeps using the max size of all the widgets. Is there a way to resize the window while taking care of the min and max sizes of the widgets? Thanks in advance.
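    A minimal PyQt4 sketch of one commonly suggested approach (an assumed layout, not the code from the question): toggle the widgets' visibility first, then defer a call to adjustSize() so the top-level window shrinks to the new size hints once pending layout events have run.

        from PyQt4 import QtCore, QtGui

        class Panel(QtGui.QWidget):
            def __init__(self):
                super(Panel, self).__init__()
                layout = QtGui.QVBoxLayout(self)
                self.text = QtGui.QTextEdit()                      # visible by default
                self.extra = QtGui.QPushButton("Only shown sometimes")
                self.extra.hide()                                  # hidden by default
                layout.addWidget(self.text)
                layout.addWidget(self.extra)

            def toggle(self):
                self.text.setVisible(not self.text.isVisible())
                self.extra.setVisible(not self.extra.isVisible())
                # Defer the resize until pending layout events have run, then
                # shrink (or grow) the window to the new combined size hint.
                QtCore.QTimer.singleShot(0, self.window().adjustSize)

        if __name__ == "__main__":
            import sys
            app = QtGui.QApplication(sys.argv)
            w = Panel()
            w.show()
            QtCore.QTimer.singleShot(1000, w.toggle)   # demo: toggle after one second
            sys.exit(app.exec_())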

    Read the article

  • Need help with an AJAX workflow

    - by Anders
    Sorry I couldn't be more descriptive with the title, I will elaborate fully below: I have a web application that I want to implement some AJAX functionality into. Currently, it is running ASP.NET 3.5 with VB.NET codebehind. My current "problem" is I want to dynamically be able to populate a DIV when a user clicks an item on a list. The list item currently contains a HttpUtility.UrlEncode() (ASP.NET) string of the content that should appear in the DIV. Example: <li onclick="setFAQ('The+maximum+number+of+digits+a+patient+account+number+can+contain+is+ten+(10).');"> What is the maximum number of digits a patient account number can contain?</li> I can decode the string partially with the JavaScript function unescape() but it does not fully decode the string. I would much rather pass the JavaScript function the faq ID then somehow pull the information from the database where it originates. I am 99% sure it is impossible to call an ASP function from within a JavaScript function, so I am kind of stumped. I am kind of new to AJAX/ASP.NET so this is a learning experience for me.

    Read the article

  • Auto-resize custom horizontal scrollbar/slider?

    I'm working on a site that scrolls horizontally. I know very little about jQuery and Prototype, but I got this slider script working as the page scrollbar. When the window is resized smaller, the custom horizontal slider/scrollbar won't resize to the viewport's width. I have the page and JS files at http://keanetix.co.cc/scrollpage/ . Try resizing the browser and you'll notice the custom scrollbar extending off to the right. The custom scrollbar won't fit the viewport unless you refresh the page after it has been resized. Can somebody help me with this? Thanks. This is the code in the HTML page:

        <script type="text/javascript" language="javascript">
        // <![CDATA[

        // horizontal slider control
        var slider2 = new Control.Slider('handle', 'track', {
            onSlide:  function(v) { scrollHorizontal(v, $('scrollable'), slider2); },
            onChange: function(v) { scrollHorizontal(v, $('scrollable'), slider2); }
        });

        // scroll the element horizontally based on its width and the slider maximum value
        function scrollHorizontal(value, element, slider) {
            element.scrollLeft = Math.round(value/slider.maximum*(element.scrollWidth-element.offsetWidth));
        }

        // disable horizontal scrolling if text doesn't overflow the div
        if ($('scrollable').scrollWidth <= $('scrollable').offsetWidth) {
            slider2.setDisabled();
            $('track').hide();
        }

        // ]]>
        </script>

    Read the article

  • Eclipse does not start on Windows 7

    - by van
    Today Eclipse suddenly decided to stop working. The last thing I did was close all perspectives and close Eclipse. When loading Eclipse from the command prompt using "eclipse.exe -clean", the splash screen loads for a split second and then exits. When I run the command "eclipsec -consoleLog -debug" it results in the following output:

        Start VM: -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        -os win32
        -ws win32
        -arch x86_64
        -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp
        -launcher d:\devtools\eclipse\eclipsec.exe
        -name Eclipsec
        --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll
        -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        --launcher.appendVmargs
        -product org.eclipse.epp.package.standard.product
        -consoleLog
        -debug
        -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll
        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Checking Task Manager shows no Java process running, and both CPU and memory usage are very low. I have tried re-installing Eclipse and restarting my machine, but running "eclipsec -consoleLog -debug" from the command prompt still results in the issue:

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Read the article

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'balls and bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):
        Probability that a bin is empty, for large n, is 1/e.
        Expected number of empty bins is n/e.
        Probability that a bin has k collisions is <= 1/k!.
        Probability that a bin has at least k collisions is <= (e/k)**k.
    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log, usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or 2, is this max-load formula right, and how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf  BTW, "with high probability" seems to mean 1 - 1/n.
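    One practical way to check a hash function's max-load is an empirical test like the Python sketch below (not from the question): hash n keys into n bins and compare the largest bin to ln(n)/ln(ln(n)), taking ln as the natural logarithm. For a well-behaved hash the observed maximum should stay within a small constant factor of that bound; to test your own function, substitute it for the built-in hash and feed it representative keys.

        import math
        import random
        from collections import Counter

        def max_load(n, hash_fn):
            """Hash n random 64-bit keys into n bins and return the largest bin count."""
            bins = Counter(hash_fn(random.getrandbits(64)) % n for _ in range(n))
            return max(bins.values())

        if __name__ == "__main__":
            for n in (10**4, 10**5, 10**6):
                bound = math.log(n) / math.log(math.log(n))
                print("n=%8d  max load=%d  ln(n)/ln(ln(n))=%.2f"
                      % (n, max_load(n, hash), bound))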

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions {
            # data source
            type     = mysql
            sql_host = xxxxxx
            sql_user = xxxxxx  #replace with your db username
            sql_pass = xxxxxx  #replace with your db password
            sql_db   = xxxxxx  #replace with your db name
            # these two are optional
            sql_port = xxxxxx
            #sql_sock = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre = SET NAMES utf8

            # main document fetch query
            sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                FROM question AS q \
                WHERE q.deleted=0 \

            # optional - used by command-line search utility to display document information
            sql_query_info = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint = level
        }

        index questions {
            # which document source to index
            source = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo = extern

            # morphology
            morphology = stem_en

            # stopwords file
            #stopwords = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len = 1
            enable_star = 1

            # charset encoding type
            charset_type = utf-8
        }

        # indexer settings
        indexer {
            # memory limit (default is 32M)
            mem_limit = 64M
        }

        # searchd settings
        searchd {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address = 127.0.0.1

            # port on which search daemon will listen
            port = 3312

            # searchd run info is logged here - create or change the folder
            log = ../log/sphinx.log

            # all the search queries are logged here
            query_log = ../log/query.log

            # client read timeout, seconds
            read_timeout = 5

            # maximum amount of children to fork
            max_children = 30

            # a file which will contain searchd process ID
            pid_file = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches = 1000
        }

    And here's the search part from views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems encoding the JMP instruction (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147):

        enc    inst       size in bytes
        EB cb  JMP rel8   2
        E9 cw  JMP rel16  4  (because of the 0x66 16-bit prefix)
        E9 cd  JMP rel32  5

    All are relative jumps, where the size of each encoding (operation + operand) is in the third column. Now my original (and therefore faulty) design reserved the maximum space (5 bytes) for each instruction. The operand is not yet known, because it's a jump to a yet-unknown location. So I've implemented a "rewrite" mechanism that rewrites the operands at the correct location in memory once the jump target is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops.
    Now my problem is with the following situation:

        b: XXX
        c: JMP a
        e: XXX
           ...
           XXX
        d: JMP b
        a: XXX

    (where XXX is any instruction, depending on the to-be-assembled program). The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies to the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a. How do existing assemblers implement this, or how would you implement it?
    This is what I am thinking of, which would solve the problem: first encode all the instructions between the JMP and its target, and if this region contains a variable-sized opcode, use the maximum size, i.e. 5 for JMP. Then, since in some conditions the JMP is oversized (because it may fit in a smaller encoding), another pass will search for oversized JMPs, shrink them, and move all the instructions that come after them, with absolute branching instructions (i.e. external CALLs) fixed up after this pass is completed. I wonder, perhaps this is an over-engineered solution; that's why I ask this question.
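    A common approach assemblers use is iterative relaxation in the opposite direction: start by assuming every jump takes its short form, compute offsets, grow any jump whose displacement no longer fits, and repeat until nothing changes (sizes only ever grow, so it terminates). A small Python sketch with simplified sizes (2 bytes for rel8, 5 for rel32):

        def relax(instructions):
            """instructions: list of ('op', byte_size) or ('jmp', target_index) entries."""
            sizes = [2 if kind == 'jmp' else arg for kind, arg in instructions]
            changed = True
            while changed:
                changed = False
                offsets = [0]
                for s in sizes:
                    offsets.append(offsets[-1] + s)
                for i, (kind, arg) in enumerate(instructions):
                    if kind != 'jmp' or sizes[i] == 5:
                        continue
                    # displacement is measured from the end of the jump instruction
                    disp = offsets[arg] - (offsets[i] + sizes[i])
                    if not -128 <= disp <= 127:
                        sizes[i] = 5          # grow rel8 -> rel32 and try again
                        changed = True
            return sizes

        if __name__ == "__main__":
            # the jump at index 0 skips 200 bytes of code, so it must grow to rel32
            print(relax([('jmp', 2), ('op', 200), ('op', 1)]))   # -> [5, 200, 1]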

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • Large Y-axis tickInterval in high charts does not work

    - by ckovacs
    I have a chart at this JSFiddle to demonstrate a problem where our charts are not respecting the y-axis tick interval for large values: http://jsfiddle.net/z2cDu/1/ var plots = {"usBytePlots":[[1362009600000,143663192997],[1362096000000,110184848742],[1362182400000,97694974247],[1362268800000,90764690805],[1362355200000,112436517747],[1362441600000,113563368701],[1362528000000,139579327454],[1362614400000,118406594506],[1362700800000,125366899935],[1362787200000,134189435596],[1362873600000,132873135854],[1362960000000,121002328604],[1363046400000,123138222001],[1363132800000,115667785553],[1363219200000,103746172138],[1363305600000,108602633473],[1363392000000,89133998142],[1363478400000,92170701458],[1363564800000,86696922873],[1363651200000,80980159054],[1363737600000,97604615694],[1363824000000,108011666339],[1363910400000,124419138381],[1363996800000,121704988344],[1364083200000,124337959109],[1364169600000,137495512348],[1364256000000,136017103319],[1364342400000,60867510427]],"dsBytePlots":[[1362009600000,1734982247336],[1362096000000,1471928923201],[1362182400000,1453869593201],[1362268800000,1411787942581],[1362355200000,1460252447519],[1362441600000,1595590020177],[1362528000000,1658007074783],[1362614400000,1411941908699],[1362700800000,1447659369450],[1362787200000,1643008799861],[1362873600000,1792357973023],[1362960000000,1575173242169],[1363046400000,1565139003978],[1363132800000,1549211975554],[1363219200000,1438411448469],[1363305600000,1380445413578],[1363392000000,1298319283929],[1363478400000,1194578344720],[1363564800000,1211409679299],[1363651200000,1142416351471],[1363737600000,1223822672626],[1363824000000,1267692136487],[1363910400000,1384335759541],[1363996800000,1577205919828],[1364083200000,1675715948928],[1364169600000,1517593781592],[1364256000000,1562183018457],[1364342400000,681007264598]],"aggregatedTotalBytes":43476367948896,"aggregatedUsBytes":3150320403841,"aggregatedDsBytes":40326047545055,"maxTotalBytes":328186292129,"maxTotalBitsPerSecond":30387619.641574074} ; $('#container').highcharts({ yAxis: { tickInterval: 53687091200 // 500 gigabytes. Maximum y-axis value is approx 1.8TB }, series : [ { color: 'rgba(80, 180, 77, 0.7)', type: 'areaspline', name : 'Downstream', data : plots.dsBytePlots, total: plots.aggregatedDsBytes }, { color: 'rgba(33, 143, 197, 0.7)', type: 'areaspline', name : 'Upstream', data : plots.usBytePlots, total: plots.aggregatedUsBytes }] }); In this example we are charting bandwidth utilization in bytes. The chart has a maximum value of about 1.8TB. We set the y-axis tick interval to exactly 500GB but the rendered y-axis ticks don't make any sense for the given interval.

    Read the article

  • Why would I be seeing execution timeouts when setting $_SESSION values?

    - by Kev
    I'm seeing the following errors in my PHP error logs:

        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\index.php on line 3
        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\search.php on line 4

    The lines in question are:

    index.php:
        01 <?php
        02 session_start();
        03 ob_start();
        04 error_reporting(E_All);
        05 $_SESSION['nav'] = "range"; // <-- Error generated here

    search.php:
        01 <?php
        02
        03 session_start();
        04 $_SESSION['nav'] = "range";
        05 $_SESSION['navselected'] = 21; // <-- Error generated here

    Would it really take 60+ seconds to assign a $_SESSION[] value? The platform is:
        Windows 2003 SP2 + IIS6
        FastCGI for Windows 2003 (original RTM build)
        PHP 5.2.6, non-thread-safe build
    There aren't any issues with session data files being cleared up on the server as sessions expire; the oldest sess_XXXXXXXXXXXXXX file I'm seeing is around 2 hours old. There are no disk timeouts evident in the event logs or other disk-health issues that might suggest difficulty creating session data files. The site is also on a server that isn't under heavy load: the site is busy but not being hammered, and is very responsive. It's just that we get these errors, three or four in a row, every three or four hours. I should also add that I'm not the original developer of this code; it belongs to a customer whose developer has long since departed.

    Read the article

  • Simulated Annealing and Yahtzee!

    - by Jasie
    I've picked up Programming Challenges and found a Yahtzee! problem, which I will simplify:
        There are 13 scoring categories.
        There are 13 rolls by a player (comprising a play).
        Each roll must fit in a distinct category.
        The goal is to find the maximum score for a play (the optimal placement of rolls in categories); score(play) returns the score for a play.
    Brute-forcing to find the maximum play score requires 13! (= 6,227,020,800) score() calls. I chose simulated annealing to find something close to the highest score, faster. Though not deterministic, it's good enough. I have a list of 13 rolls of 5 dice, like:

        ((1,2,3,4,5) #1
         (1,2,6,3,4),#2
         ...
         (1,4,3,2,2) #13
        )

    And a play (1,5,6,7,2,3,4,8,9,10,13,12,11) passed into score() returns a score for that play's permutation. How do I choose a good "neighboring state"? For random restart, I can simply choose a random permutation of the numbers 1-13, put them in a vector, and score them. In the traveling salesman problem, here's an example of a good neighboring state: "The neighbours of some particular permutation are the permutations that are produced for example by interchanging a pair of adjacent cities." I have a bad feeling about simply swapping two random vector positions, like so:

        (1,5,6,7, 2,  3,4,8,9,10, 13, 12,11)  # switch 2 and 13
        (1,5,6,7, 13, 3,4,8,9,10, 2,  12,11)  # now score this one

    But I have no evidence and don't know how to select a good neighboring state. Does anyone have any ideas on how to pick good neighboring states?
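    Here is a Python sketch of the transposition neighbourhood described above (swap the categories assigned to two rolls) inside a standard annealing loop; score() is assumed to be supplied by the game, and the temperature schedule here is arbitrary.

        import math
        import random

        def neighbour(play):
            """Swap the categories assigned to two randomly chosen rolls."""
            a, b = random.sample(range(len(play)), 2)
            nxt = list(play)
            nxt[a], nxt[b] = nxt[b], nxt[a]
            return nxt

        def anneal(score, start, t0=10.0, cooling=0.995, steps=20000):
            cur, cur_score, temp = list(start), score(start), t0
            best, best_score = list(cur), cur_score
            for _ in range(steps):
                cand = neighbour(cur)
                cand_score = score(cand)
                delta = cand_score - cur_score
                # always accept improvements; accept worse plays with a probability
                # that shrinks as the temperature drops
                if delta >= 0 or random.random() < math.exp(delta / temp):
                    cur, cur_score = cand, cand_score
                    if cur_score > best_score:
                        best, best_score = list(cur), cur_score
                temp *= cooling
            return best, best_score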

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But calling the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string s;
                    while ((s = Console.ReadLine()) != null)
                    {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when calling the same API from PowerShell instead of C#, many parts of stdin get 'eaten'. Is the PowerShell host reading strings from stdin even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • SSIS - Skip Missing Files

    - by Greg
    I have a SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table, but sometimes one or more of these input files will be missing. How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running? I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child packages themselves. None of that seemed to make any difference. I still get this error when I run it with a file missing:

        SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.

    Read the article

  • Growing user control not updating

    - by user328259
    I am developing in C# and .NET 2.0. I have a user control that draws cells (columnar) depending on the maximum number of cells. There are some drawing routines that generate the necessary cells, and a property NumberOfCells that adjusts the height of this control: CELLHEIGHT_CONSTANT * NumberOfCells. The OnPaint() method is overridden (it contains the code that draws the number of cells).
    There is another user control that contains a panel, which in turn contains the UserControl1 from above. It has a property NumberCells that changes UserControl1's NumberOfCells. UserControl2 is then placed on a Windows form. On that form there is a NumericUpDown control (which only increments from 1). When the user increments by 1, I adjust VerticalScroll.Maximum by 1 as well.
    Everything works well and good, BUT when I increment once, the panel updates fine (it inserts a vertical scrollbar when necessary) but the cells are not being added! I've tried invalidating on UserControl2 AND on the form, but nothing seems to draw the newly added cells. Any assistance is appreciated. Thank you in advance. Lawrence

    Read the article

  • doubt in javascript name validation

    - by raja
    Hi, I am using the validation below for a textbox that accepts only alphabets and a maximum of 50 characters. I am passing the object directly as the parameter. The case below, where the field name "my_text" is given directly, works fine, but if I pass it in a variable it does not work (see the commented-out if statement). Please help me: my requirement is that each time a character is entered, the hard-coded field name should not be used in the validation.

        <html>
        <head>
        <script language=JavaScript>
        function check_length(my_form, fieldName) {
            alert(fieldName);
            // if (my_form.fieldName.value.length >= maxLen) {
            if (my_form.my_text.value.length >= maxLen) {
                var msg = "You have reached your maximum limit of characters allowed";
                alert(msg);
                my_form.my_text.value = my_form.my_text.value.substring(0, maxLen);
            } else {
                var keyCode = window.event.keyCode;
                if ((keyCode < 65 || keyCode > 90) && (keyCode < 97 || keyCode > 123) && keyCode != 32) {
                    window.event.returnValue = false;
                    alert("Enter only Alphabets");
                }
                my_form.text_num.value = maxLen - my_form.my_text.value.length;
            }
        }
        </script>
        </head>
        <body>
        <form name=my_form method=post>
            <input type="text" onKeyPress=check_length(this.form,this.name); name=my_text rows=4 cols=30>
            <br>
            <input size=1 value=50 name=text_num> Characters Left
        </form>
        </body>
        </html>

    Read the article

  • Web Hosting: Any web host that supports files more than 50,000 in number?

    - by Devner
    Hi all, for my PHP & MySQL based application I am trying to buy website hosting from a host that does not limit the number of files I carry in my hosting account. Almost all the hosts have a common limit of 50,000 files (some call it 50,000 nodes); the rest, to the extent of my search, are not even close. I have gone through the various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files, and that's why they call it the LIMIT.
    Now my application is a kind of social networking website where people can upload various files of varying file size. So if 50,000 users were to join the website and upload one file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file-upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following options:
        No maximum file limit (more than 50,000 files in the account).
        No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
        Ability to upload files of various types (zip, flv, jpg, png, etc.).
        Ability to stream audio and video (live audio & video not necessary).
        Access to .htaccess.
        Access to php.ini, my.cnf or my.ini (this would be a plus).
        Supports SSL.
        Provides dedicated hosting (& IP) as well.
        Monthly payments without contracts are a plus.
    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.

    Read the article

  • Which Queue implementation to use in Java?

    - by devoured elysium
    I need to use a FIFO structure in my application. It needs to hold at most 5 elements. I'd like something easy to use (I don't care about concurrency) that implements the Collection interface. I've tried LinkedList, which implements Queue, but it doesn't seem to allow me to set its maximum capacity. It feels as if, when I want at most 5 elements but try to add 20, it will just keep increasing in size to fit them. I'd like something that works the following way:

        XQueue<Integer> queue = new XQueue<Integer>(5); // where 5 is the maximum number of elements I want in my queue
        for (int i = 0; i < 10; ++i) {
            queue.offer(i);
        }
        for (int i = 0; i < 5; ++i) {
            System.out.println(queue.poll());
        }

    That would print:

        5
        6
        7
        8
        9

    Thanks
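    The behaviour in the example (offer ten items, keep only the five newest, poll prints 5 through 9) matches a fixed-capacity evicting FIFO; here is a quick Python sketch of those semantics, using a bounded deque, purely for comparison:

        from collections import deque

        queue = deque(maxlen=5)        # once 5 items are held, the oldest is dropped
        for i in range(10):
            queue.append(i)            # "offer"
        while queue:
            print(queue.popleft())     # "poll" -> prints 5 6 7 8 9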

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >