Search Results

Search found 2899 results on 116 pages for 'rate limiting'.

Page 97/116 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3D objects, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh and I'd like to avoid that. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
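
    One way to sketch that coarse first pass: sample a coarse grid, and only march the fine grid inside coarse cells whose corner samples come near the isolevel. A minimal Python sketch, where field(x, y, z) stands in for the expensive field evaluation and margin guards against surfaces crossing a cell whose corners all land on one side (features thinner than a coarse cell can still be missed, which is the main pitfall):

        import itertools

        def active_coarse_cells(field, origin, cells, step, isolevel, margin):
            # First pass: flag coarse cells whose corner samples come within
            # `margin` of the isolevel.  field(x, y, z) -> float is a stand-in
            # for the real (expensive) field evaluation.
            active = []
            for i, j, k in itertools.product(range(cells[0]), range(cells[1]), range(cells[2])):
                corners = [field(origin[0] + (i + di) * step,
                                 origin[1] + (j + dj) * step,
                                 origin[2] + (k + dk) * step)
                           for di, dj, dk in itertools.product((0, 1), repeat=3)]
                # Skip the cell only if every corner is clearly on one side.
                if max(corners) < isolevel - margin or min(corners) > isolevel + margin:
                    continue
                active.append((i, j, k))
            return active

        # Example with a spherical field: only cells near radius 1.0 survive.
        sphere = lambda x, y, z: (x * x + y * y + z * z) ** 0.5
        print(len(active_coarse_cells(sphere, (-2.0, -2.0, -2.0), (8, 8, 8), 0.5, 1.0, 0.25)))

    Corner values shared between neighbouring cells could also be cached so each coarse grid point is evaluated only once.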

    Read the article

  • Optimizing mathematics on arrays of floats in Ada 95 with GNAT

    - by mat_geek
    Consider the code below. It is supposed to process data at a fixed rate, in one-second batches; it is part of an overall system and can't take up too much time. When running over 100 lots of one second's worth of data, the program takes 35 seconds, i.e. 35%. How do I improve the code to get the processing time down to a minimum? The code will be running on an Intel Pentium-M, which is a P3 with SSE2.

        package FF is new Ada.Numerics.Generic_Elementary_Functions(Float);

        N : constant Integer := 820;
        type A  is array(1 .. N) of Float;
        type A3 is array(1 .. 3) of A;

        procedure F(state : in out A3; result : out A3; l : in A; r : in A) is
           s : Float;
           t : Float;
        begin
           for i in 1 .. N loop
              t := l(i) + r(i);
              t := t / 2.0;
              state(1)(i) := t;
              state(2)(i) := t * 0.25 + state(2)(i) * 0.75;
              state(3)(i) := t * 1.0 / 64.0 + state(2)(i) * 63.0 / 64.0;
              for r in 1 .. 3 loop
                 s := state(r)(i);
                 t := FF."**"(s, 6.0) + 14.0;
                 if t > MAX then
                    t := MAX;
                 elsif t < MIN then
                    t := MIN;
                 end if;
                 result(r)(i) := FF.Log(t, 2.0);
              end loop;
           end loop;
        end;

    Read the article

  • Thread implemented as a Singleton

    - by rocknroll
    Hi all, I have a commercial application made with C, C++/Qt on the Linux platform. The app collects data from different sensors and displays them on a GUI. Each of the protocols for interfacing with the sensors is implemented using the singleton pattern and threads from Qt's QThread class. All the protocols except one work fine. Each protocol's run function has the following structure:

        void <ProtocolClassName>::run()
        {
            while (!mStop)  // check whether screen is closed or not
            {
                mutex.lock();
                while (!waitcondition.wait(&mutex, 5))
                {
                    if (mStop)
                        return;
                }
                // Code for receiving and processing incoming data
                mutex.unlock();
            } // end while
        }

    GUI hierarchy: 1. a login screen; 2. a screen of action. When a user logs in from the login screen, we enter the action screen, where all data is displayed and all the threads for the different sensors start. They wait on the mStop variable when idle, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, on timeout, grab the running instance of a protocol using its <ProtocolName>::instance() function, check the singleton's update variable, and if it is true display the data. When the data display is done they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 sec, which is also the frame rate of the protocol. When I comment out the display function it runs fine, but when the display is activated the application hangs consistently after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions; I hope that here I will get some help. Also, I have read a lot of literature on singletons and multithreading, and found that people always discourage the use of singletons, especially in C++. But in my application I can think of no other design. Thanks in advance, A hapless programmer
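
    For comparison, the same wait-loop pattern sketched with Python's threading module; the with block guarantees the mutex is released on every exit path, including the early return on stop, which the hand-locked version above has to get right by itself (SensorWorker and process_incoming are illustrative names):

        import threading

        class SensorWorker(threading.Thread):
            def __init__(self):
                super().__init__()
                self.cond = threading.Condition()
                self.stop_requested = False    # set by the GUI when the screen closes

            def run(self):
                while True:
                    with self.cond:            # lock is released on EVERY exit path
                        while not self.cond.wait(timeout=0.005):
                            if self.stop_requested:
                                return         # safe: the with block unlocks for us
                        self.process_incoming()

            def process_incoming(self):
                pass                           # placeholder for receive/processing code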

    Read the article

  • OpenGL behaving strangely

    - by Mk12
    OpenGL is acting very strangely for some reason. In my subclass of NSOpenGLView, I have the following code in -prepareOpenGL:

        - (void)prepareOpenGL
        {
            GLfloat lightAmbient[]  = { 0.5f, 0.5f, 0.5f, 1.0f };
            GLfloat lightDiffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
            GLfloat lightPosition[] = { 0.0f, 0.0f, 2.0f };

            quality = 0;
            zCoord = -6;
            [self loadTextures];

            glEnable(GL_LIGHTING);
            glEnable(GL_TEXTURE_2D);
            glShadeModel(GL_SMOOTH);
            glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
            glClearDepth(1.0f);
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);

            glLightfv(GL_LIGHT1, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT1, GL_POSITION, lightPosition);
            glEnable(GL_LIGHT1);

            gameState = kGameStateRunning;

            int i = 0; // HERE ********

            [NSTimer scheduledTimerWithTimeInterval:0.03f
                                             target:self
                                           selector:@selector(processKeys)
                                           userInfo:nil
                                            repeats:YES];

            // Synchronize buffer swaps with vertical refresh rate
            GLint swapInt = 1;
            [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

            // Setup and start displayLink
            [self setupDisplayLink];
        }

    I wanted to assign the timer that processes key input to an ivar so that I could invalidate it when the game is paused (and reinstantiate it on resume), but when I did that (as opposed to leaving it at [NSTimer scheduledTimer…]), OpenGL didn't display the cube I draw. When I take it away, it's fine. So I tried just adding a harmless statement, int i = 0; (marked // HERE ******** above), and that makes the lighting not work! When I take it away, everything is fine, but when I put it back, everything is dark. Can someone come up with a rational explanation for this? Thanks.

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this, and so far the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules, or even many best practices, when it comes to these settings, so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver, so I have put in a measure to stop training when it looks like it isn't going anywhere; this is set to 10,000 training iterations. Currently, when I try to train it with data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000-iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit, I would greatly appreciate it. Thank you!
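
    One setting worth ruling out before any topology changes: raw inputs spanning 1 to 100,000 will saturate sigmoid-style units, which alone can stall training. A minimal min-max scaling sketch, assuming numpy and a rows-by-10-columns input matrix:

        import numpy as np

        def minmax_scale(X):
            # Rescale each input column to [0, 1] so no single large-valued
            # feature (e.g. project cost) dominates the unit activations.
            lo, hi = X.min(axis=0), X.max(axis=0)
            return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

        X = np.array([[3.0, 95000.0], [10.0, 40000.0], [1.0, 120.0]])
        print(minmax_scale(X))   # every column now lies in [0, 1]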

    Read the article

  • Frameworks to manage dates (effective date and expiry dates)

    - by user214626
    Hello, we have an object that has an effective date and an expiry date (e.g. I want to maintain the price of a commodity for a time period). Business rules: the effective date is always a valid date (a datestamp), but the expiry date can be null to indicate that the object is active indefinitely; both effective and expiry dates can also be set to valid dates. Are there any frameworks that manage such objects so that they stay consistent, i.e. there are no overlaps between validity periods? Ex.:

        class XBOX {
            double price;
            Date effectiveDate;
            Date expiryDate;
        }

        XBOX x1 = new XBOX(400$, '2007-01-01', '2008-12-31');
        XBOX x2 = new XBOX(200$, '2009-01-01', null);

    Assume that we get a new rate from '2010-01-01' and a new XBOX object has to be created (and persisted). Is there a framework/pattern that can do the following, so that the XBOX data stays consistent?

        x2.setExpiryDate('2009-12-31');
        XBOX x3 = new XBOX(150$, '2010-01-01', null);

    Thanks in advance.
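
    Even without a framework, the no-overlap invariant is small enough to enforce by hand: whenever a new record becomes effective, close the currently open record the day before. A Python sketch of that rule (the Price class and day arithmetic are illustrative, not from any particular library):

        from datetime import date, timedelta
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Price:
            value: float
            effective: date
            expiry: Optional[date] = None    # None = active indefinitely

        def add_price(history, value, effective):
            # Close the open-ended record so validity periods never overlap.
            for p in history:
                if p.expiry is None and p.effective < effective:
                    p.expiry = effective - timedelta(days=1)
            history.append(Price(value, effective))

        history = [Price(400.0, date(2007, 1, 1), date(2008, 12, 31)),
                   Price(200.0, date(2009, 1, 1))]
        add_price(history, 150.0, date(2010, 1, 1))
        print(history[1].expiry)   # 2009-12-31, matching x2.setExpiryDate above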

    Read the article

  • rs232 communication, general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100Hz over the serial port. I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an input stream. But this next part is confusing me and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the input stream 1 byte at a time. How do I get the while loop timing so that there is always a byte available to be read whenever it reaches the read-byte line? I'm guessing that I can't just put a sleep function inside the while loop to try and match it to the hardware sample rate. Is it just a matter of continually reading the input stream in the while loop, where if the loop is too fast it won't do anything (since there's no new data), and if it's too slow the data will accumulate in the input stream's buffer? Like I said, I'm only trying to understand this conceptually, so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!
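
    That is essentially the right mental model. With a blocking read, the OS serial buffer does the pacing: the loop sleeps inside the read call until the next byte arrives, so it naturally runs at the device's 100 Hz with no explicit sleep. A sketch using the pySerial package (the port name and baud rate are placeholders for your device's settings):

        import serial  # pySerial package

        ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=None)  # timeout=None: block
        samples = []
        while True:
            b = ser.read(1)          # blocks until the next byte arrives (~10 ms apart)
            samples.append(b[0])     # 0-255 raw value representing the voltage
            if len(samples) >= 1000:
                break                # hand off to plotting, etc.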

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows:

        OS: Windows NT family (Server 2008 / 7)
        Primary platform: .NET with C#
        Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information.
        Database SELECT requirements: super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME).
        Database INSERT requirements: the faster the better :)

    If there'll be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • Does Firefox on OS X Lion make use of Full Page Zoom for the touchpad? How to customize behavior?

    - by Steven Lu
    I really like the smooth pinch-zoom in Safari using the touchpad, but two-finger scrolling in Firefox is so much better than the scrolling performance in Safari. So I would really like to use Firefox, but then I miss out on the two-finger double-tap to zoom to paragraph width, and the smooth pinch-gesture zoom. What I'm wondering is whether it is possible to write a Firefox extension to improve the update rate of the full-page zoom that already works via the touchpad pinch gesture. It feels like it is programmed to jump between fixed zoom levels: 100%, 120%, 150% (these are guesses of mine), but I think it would be great if I could get more control there to make it work more like the zoom functionality in Safari. The two-finger double-tap on a paragraph or element to zoom to it would be really awesome as well. https://developer.mozilla.org/en/Full_page_zoom seems to indicate (if "full page zoom" is what I think it is) that an extension can zoom to an arbitrary scale factor, but what remains is to find out whether it is possible to hook the touchpad pinch gesture. Update: I have updated the toolkit.zoomManager.zoomValues option in about:config to include more zoom levels:

        .3,.5,.67,.8,.9,1,1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.1,1.2,1.33,1.5,1.7,2,2.4,3

    Notice how I inserted a bunch of entries between 1 and 1.1. But it isn't switching between them any faster (why would it?), so it's less usable than before because each step now waits on a full re-render. It's clear that re-rendering the page at a different zoom level takes time, and for the zoom to feel dynamic some kind of screen capture and scale effect must be performed (which is what Safari does). I guess such a thing is probably doable, but I don't think I could pull it off. :-/

    Read the article

  • MySQL - How do I inner join while sorting the joined data?

    - by Gary
    I'm trying to write a report which will join a person, their work, and their hourly wage at the time of the work. I cannot seem to figure out the best way to join in the person's cost where the cost date is the latest one before the date of the work. Let's say a person cost $30 per hour at the start of the year, then got a $10 raise on Feb 5 and another on Mar 1:

        01/01/2010    $30.00 (per hour)
        02/05/2010    $40.00
        03/01/2010    $45.00

    The person put in hours on several days which span the raises:

        01/05/2010    10 hours (should be at $30/hr)
        01/27/2010     5 hours (again at $30)
        02/10/2010    10 hours (at $40/hr)
        03/03/2010     5 hours (at $45/hr)

    I'm trying to write one SQL statement which will pull the hours, the cost per hour, and hours * cost. The cost is the hourly rate last entered into the system, so the cost date is less than the work date, ordered by cost date descending, limit 1.

        SELECT person.id, person.name, work.hours, person_costs.value,
               work.hours * person_costs.value AS value
        FROM person
        INNER JOIN work ON (person.id = work.person_id)
        INNER JOIN person_costs ON (person.id = person_costs.person_id
                                AND person_costs.date < work.date)
        WHERE person.id = 1234
        ORDER BY work.date ASC

    The problem I'm having: person_costs isn't ordered by date in descending order, so the join pulls out "any" matching value (naturally sorted by record position) rather than the most recent one. How do I select the first person_cost value that is older than the work date? Thanks!
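
    One standard fix is a correlated subquery that picks the newest matching cost row by itself: order the person_costs rows that precede the work date by date descending and take the first. A runnable sketch against SQLite; the subquery works the same way in MySQL (<= assumes a raise applies on its own day, use < if it should not):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE person_costs (person_id INT, date TEXT, value REAL);
        CREATE TABLE work         (person_id INT, date TEXT, hours REAL);
        INSERT INTO person_costs VALUES (1234,'2010-01-01',30), (1234,'2010-02-05',40), (1234,'2010-03-01',45);
        INSERT INTO work VALUES (1234,'2010-01-05',10), (1234,'2010-01-27',5),
                                (1234,'2010-02-10',10), (1234,'2010-03-03',5);
        """)
        rows = db.execute("""
        SELECT w.date, w.hours,
               (SELECT pc.value FROM person_costs pc
                 WHERE pc.person_id = w.person_id AND pc.date <= w.date
                 ORDER BY pc.date DESC LIMIT 1) AS rate
          FROM work w WHERE w.person_id = 1234 ORDER BY w.date
        """).fetchall()
        for day, hours, rate in rows:
            print(day, hours, rate, hours * rate)   # totals: 300, 150, 400, 225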

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) I am currently building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations: 1) logistic sigmoid for hidden-layer activation, softmax for output activation; 2) softmax for both hidden-layer and output activation. I am using gradient descent to find local minima in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of researching, I couldn't find the answer and decided to compute the derivative myself, obtaining softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returns a square matrix of size n x n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret the output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions:

        o I am fairly certain that my gradient descent is incorrectly done, but I have no idea how to fix this.
        o Perhaps I am not using enough hidden nodes.
        o Perhaps I should increase the number of layers.

    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
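
    The two derivatives are the same result seen at different sizes: the full Jacobian of softmax is diag(s) - s s^T, whose diagonal is exactly the hand-derived s - s², while the off-diagonal entries -s_i s_j capture how raising one logit lowers every other output. For backpropagation, the error vector is multiplied by this whole matrix, not just its diagonal. A numpy check:

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())          # shift for numerical stability
            return e / e.sum()

        x = np.array([1.0, 2.0, 0.5, -1.0, 3.0])
        s = softmax(x)
        J = np.diag(s) - np.outer(s, s)      # the n x n matrix MATLAB returns

        print(np.allclose(np.diag(J), s - s**2))   # True: the hand derivation
        print(np.allclose(J.sum(axis=0), 0.0))     # outputs keep summing to 1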

    Read the article

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I could display the most-viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

        video (~239,371 rows)
            VID (int), UID (int), title (varchar), status (enum), type (varchar),
            is_duplicate (enum), is_adult (enum), channel_id (tinyint)

        signup (~115,440 rows)
            UID (int), username (varchar)

        videos_views (~359,202 rows after 6 days of collecting data, so this table will grow rapidly)
            videos_id (int), views_date (date), num_of_views (int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but takes ~10 s to execute, and I imagine this will only get worse as the videos_views table grows in size:

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username,
               SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    (A screenshot of the EXPLAIN result accompanied the original post.) So, how can I optimize this?

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns, growing at the rate of ~10 million rows (about 5 GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200 GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. These queries therefore always result in clustered index seeks and are very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.:

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional index insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200 ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a more optimal solution, as this, for some reason, doesn't "feel right".

    Read the article

  • Streaming local file from PHP while it's been written to by a CURL process

    - by Fahim
    I am creating a simple proxy server for my website. Why I am not using mod_proxy and mod_cache is a different discussion. Here's the code:

        shell_exec("nohup curl --create-dirs -o {$write_path} {$source_url} > /dev/null 2> /dev/null & echo $!");
        sleep(1);

        $read_speed = 65.5; # 65.5 kb/s download rate
        $handle = fopen($write_path, "rb");

        $content_type = select_meta_item($headers, 'Content-Type');
        $file_size = select_meta_item($headers, 'Content-Length');
        send_headers($content_type, $file_size);
        flush();

        while (!feof($handle)) {
            echo fread($handle, round($read_speed * 1024));
            flush();
            sleep(1);
        }

        fclose($handle);

    Streaming an MP3 doesn't work using this method: it plays in Chrome, but not in Firefox. Initially I'll be using this to stream MP3 files through Long Tail's JW Player. If it all works out, I'll also be using this to send ZIP files.

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and start/end time columns in the first table below) associated with time and production quantity data (the Output and Time columns in the first table below), on which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The reprocessing of the existing table row would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row created at production-event granularity and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns as it does so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
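
    For reference, the expansion logic itself, sketched in Python (a MySQL-only version would typically be an INSERT ... SELECT joined against a helper table of minutes; the function and field names here are illustrative):

        from datetime import datetime, timedelta

        def disaggregate(user, work, machine, start, end, output):
            # Truncate to whole minutes, then spread output evenly over the span.
            start = start.replace(second=0)
            end = end.replace(second=0)
            n = int((end - start).total_seconds() // 60) + 1   # minutes difference + 1
            per_minute = int(output / n + 0.5)
            return [(user, work, machine, start + timedelta(minutes=i), per_minute)
                    for i in range(n)]

        rows = disaggregate("080025", "ABC123", "M01",
                            datetime(2008, 1, 24, 16, 19, 15),
                            datetime(2008, 1, 24, 16, 34, 45), 2120)
        print(len(rows), rows[0][4])   # 16 rows of 133, matching the table above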

    Read the article

  • How to configure RetryAdvice and ExceptionTranslation for Deadlocks using NHibernate and Spring

    - by zoidbeck
    Hi, I am using Spring.NET 1.2 with NHibernate 2.0.1. Within my project I am facing some deadlock issues, and besides the database tweaks to minimize their occurrence, I would like to implement Spring's RetryAdvice to handle them. I can't find any working example of how to configure this. The reference seems to be clear about how to use it, but somehow I can't get it working:

        <!-- Used to translate NHibernate exceptions to Spring.DataAccessExceptions -->
        <object type="Spring.Dao.Attributes.PersistenceExceptionTranslationPostProcessor, Spring.Data"/>

        <!-- ExceptionHandler performing Retry on Deadlocks -->
        <object name="ExceptionHandlingAdvice" type="Spring.Aspects.RetryAdvice, Spring.Aop">
            <property name="retryExpression" value="on exception name DeadLockLoserException retry 3x rate (1*#n + 0.5)"/>
        </object>

    I have added the [Repository] attribute to my DAOs to get exception translation enabled, and tried to add the RetryAdvice to the TransactionProxyFactoryObject I am using, but it won't work. I don't understand where to put this advice. Do I have to declare a pointcut to add it, or how can I get it to work as expected? Thanks in advance; any help appreciated.

    Read the article

  • Getting PCM values of WAV files

    - by user2431088
    I have a .wav mono file (16-bit, 44.1 kHz) and I'm using the code below. If I'm not wrong, this should give me output values between -1 and 1, to which I can apply an FFT (to be converted to a spectrogram later on). However, my output is nowhere near -1 and 1. This is a portion of my output:

        7.01214599609375
        17750.2552337646
        8308.42733764648
        0.000274658203125
        1.00001525878906
        0.67291259765625
        1.3458251953125
        16.0000305175781
        24932
        758.380676269531
        0.0001068115234375

    This is the code, which I got from another post. Edit 1:

        public static Double[] prepare(String wavePath, out int SampleRate)
        {
            Double[] data;
            byte[] wave;
            byte[] sR = new byte[4];
            System.IO.FileStream WaveFile = System.IO.File.OpenRead(wavePath);
            wave = new byte[WaveFile.Length];
            data = new Double[(wave.Length - 44) / 4]; // shifting the headers out of the PCM data
            WaveFile.Read(wave, 0, Convert.ToInt32(WaveFile.Length)); // read the wave file into the wave variable

            /*********** Converting and PCM accounting ***************/
            for (int i = 0; i < data.Length; i += 2)
            {
                data[i] = BitConverter.ToInt16(wave, i) / 32768.0;
            }

            /************** assigning sample rate **********************/
            for (int i = 24; i < 28; i++)
            {
                sR[i - 24] = wave[i];
            }
            SampleRate = BitConverter.ToInt16(sR, 0);

            return data;
        }
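
    For comparison, the same normalisation sketched with Python's standard-library wave module, which skips the 44-byte header and the format parsing for you; the file name is a placeholder:

        import wave, struct

        with wave.open("mono16.wav", "rb") as w:          # placeholder file name
            sample_rate = w.getframerate()                # e.g. 44100
            raw = w.readframes(w.getnframes())            # PCM payload only, no header

        count = len(raw) // 2                             # 16-bit mono: 2 bytes/sample
        samples = struct.unpack("<%dh" % count, raw)      # little-endian signed shorts
        data = [s / 32768.0 for s in samples]             # now in [-1.0, 1.0)
        print(sample_rate, min(data), max(data))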

    Read the article

  • How do I label a group of radio boxes for WCAG / 508 Compliance? Is ASP.NET doing it wrong?

    - by Mark Brittingham
    I am trying to bring an existing web site into greater conformance with the WCAG 2.0 guidelines and am a bit confused by the output emitted by Microsoft (ASP.NET 4.0, although it was the same in 3.5). Suppose you have a question like "How would you rate your health?" and a set of 5 answers created using an ASP.NET RadioButtonList. I place the question in an asp:Label with an AssociatedControlID that matches the ID of the RadioButtonList (e.g. "SelfRatingBox"). Seems pretty easy... only the generated output has an html label whose "for" attribute is equal to the ID of a table that wraps up the radio buttons. I assumed that this would work with page readers, but our 508 compliance guy is saying that the reader isn't associating the label with the radio controls. The WCAG guidelines indicate that you have to use a fieldset around the entire group and a legend to capture the associated text (the question). So what gives? It would be ideal if MS could take my label and the RadioButtonList and generate the appropriate fieldset and legend tags, but it seems pretty clear that to achieve WCAG compliance I'll have to roll my own. Is this correct, or am I missing something?
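
    For reference, the target markup the guidelines describe is roughly this (a hand-rolled sketch, not what RadioButtonList emits; the names and labels are illustrative):

        <fieldset>
          <legend>How would you rate your health?</legend>
          <input type="radio" name="health" id="h1" value="1" /><label for="h1">Poor</label>
          <input type="radio" name="health" id="h2" value="2" /><label for="h2">Fair</label>
          <!-- ...one input/label pair per answer; readers announce the legend with each option -->
        </fieldset>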

    Read the article

  • Help with MySQL Join Statement

    - by JasonS
    Hi, I just built a website and have realised that I need a top 3 highest-rated albums list. I haven't built in anything that keeps track of the ratings; ratings are stored separately. Can someone show me how to put these together, please?

        SELECT id, name FROM albums LIMIT 3

        SELECT rating FROM ratings WHERE url = CONCAT('albums/show/', album.id)

    Let me flesh it out a bit. I need to get back the following. From the albums table: id, name. From the ratings table: the average rating, i.e. ROUND((rating + rating + rating) / total ratings). The ratings: users can rate everything on my website, so I have a generic ratings table. The rating is stored with the URL of the page it applies to; hence, to get album ratings I need 'albums/show/{album_id}'. In hindsight I should have had a type and id field, but it is a bit late now with a launch imminent. Any help is much appreciated.
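
    A runnable sketch of the combined query against SQLite (SQLite spells string concatenation ||, so keep the CONCAT() form above for MySQL; the sample data is made up):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE albums  (id INT, name TEXT);
        CREATE TABLE ratings (url TEXT, rating INT);
        INSERT INTO albums VALUES (1,'First'), (2,'Second'), (3,'Third'), (4,'Fourth');
        INSERT INTO ratings VALUES ('albums/show/1',5), ('albums/show/1',4),
                                   ('albums/show/2',2), ('albums/show/3',5);
        """)
        top3 = db.execute("""
        SELECT a.id, a.name, ROUND(AVG(r.rating), 1) AS avg_rating
          FROM albums a
          JOIN ratings r ON r.url = 'albums/show/' || a.id
         GROUP BY a.id, a.name
         ORDER BY avg_rating DESC
         LIMIT 3
        """).fetchall()
        print(top3)   # [(3, 'Third', 5.0), (1, 'First', 4.5), (2, 'Second', 2.0)]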

    Read the article

  • mySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision and I'm not sure how to proceed. Here's a very oversimplified version of what I'm building: people can rate photos from 1 to 5, and I need to display the average vote on each picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc. The normalization freak in me initially designed this table structure:

        Table pictures
            id* | picture | userID

        Table ratings
            id* | pictureID | userID | rating

    With all the foreign key constraints and everything set as they should be, every time someone rates a picture I just insert a new record into ratings and I'm done. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way also lets me run my fancy statistics: I can easily find who rated a certain picture a 3, and what not. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version seems more efficient, e.g.:

        Table picture
            id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research says I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
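
    A common middle ground keeps the normalized ratings table as the source of truth and caches the running aggregate on the picture row, updated on every vote, so reads never scan ratings at all. The bookkeeping is tiny; a sketch with a dict standing in for the picture row:

        def record_vote(picture, rating):
            # picture carries two cached columns: vote_count and vote_sum.
            picture["vote_count"] += 1
            picture["vote_sum"] += rating
            # The average is derived on read; no AVG() over millions of rows.
            return picture["vote_sum"] / picture["vote_count"]

        p = {"vote_count": 0, "vote_sum": 0}
        for r in (1, 1, 2, 5):
            avg = record_vote(p, r)
        print(p, avg)   # {'vote_count': 4, 'vote_sum': 9} 2.25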

    Read the article

  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place, our users are able to take one of the following actions, which we record:

        VIEW      - view its profile
        RATING    - rate it on a scale of 1-5 stars
        REVIEW    - review it
        COMPLETED - mark that they've been there
        WISH LIST - mark that they want to go there
        FAVORITE  - mark that it's one of their favorites

    In our places table, each place carries a count of the number of times each action above was taken, as well as the average rating given by users:

        views  ratings  avg_rating  completed  wishlist  favorite

    What we want is to generate lists of the top places using the above information. Ideally, we would generate this list with a relatively simple SQL query, without needing extra legwork to calculate additional fields or stack-rank places against one another. That said, since we only have about 50,000 places, we could run a nightly cron job to calculate some fields, such as rankings in different categories, if it would make a meaningful difference to the overall results. I'd appreciate suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and, given that information, what the MySQL query would need to look like to select the top 10 places. One thing to note: at this time we are less concerned with recency, meaning that looking at the aggregate information is fine and more recent data doesn't need to be weighted more heavily. Thanks in advance for your help & advice!
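
    One workable starting point is a single precomputed score per place: log-damped counts so a million views can't swamp everything, plus a Bayesian-smoothed rating so one lone 5-star vote doesn't outrank a thousand 4.5s. The weights, prior, and global mean below are illustrative guesses to tune, not prescriptions:

        import math

        # Illustrative weights: favorites and wishlists signal intent more than views.
        WEIGHTS = {"views": 1.0, "completed": 2.0, "reviews": 3.0,
                   "wishlist": 3.0, "favorite": 4.0}
        GLOBAL_AVG, PRIOR_VOTES = 3.5, 25      # assumed site-wide mean and smoothing

        def score(place):
            s = sum(w * math.log1p(place[k]) for k, w in WEIGHTS.items())
            # Bayesian average: pulls sparsely rated places toward the global mean.
            bayes = ((PRIOR_VOTES * GLOBAL_AVG + place["ratings"] * place["avg_rating"])
                     / (PRIOR_VOTES + place["ratings"]))
            return s + 5.0 * bayes

        place = {"views": 1200, "ratings": 48, "avg_rating": 4.4,
                 "completed": 90, "reviews": 0, "wishlist": 30, "favorite": 12}
        print(round(score(place), 2))

    Storing this score in a column via the nightly cron keeps the query itself a plain ORDER BY score DESC LIMIT 10.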

    Read the article

  • A more elegant way to start a multithreaded alarm in Rebol VID? (What's the equivalent of a load event?)

    - by Rebol Tutorial
    I want to trigger an alarm when the GUI starts. I can't see an equivalent of other languages' load event in Rebol VID, so I put it in the periodic handler, which is quite convoluted. How can I do this more cleanly?

        alarm-data: none

        set-alarm: func [
            "Set alarm for future time."
            seconds "Seconds from now to ring alarm."
            message [string! unset!] "Message to print on alarm."
        ] [
            alarm-data: reduce [now/time + seconds message]
        ]

        ring: func [
            "Action for when alarm comes due."
            message [string! unset!]
        ] [
            set-face monitor either message [message]["RIIIING!"]
            ; Your sound playing can also go here (my computer doesn't have speakers).
        ]

        periodic: func [
            "Called every second, checks alarms."
            fact action event
        ] [
            either alarm-data [
                ; Update alarm countdown.
                set-face monitor rejoin [
                    "Alarm will ring in " to integer! alarm-data/1 - now/time " seconds."
                ]
                ; Check alarm.
                if now/time > alarm-data/1 [
                    ring alarm-data/2
                    ;alarm-data: none  ; Reset once fired.
                ]
            ][
                either value? 'message [
                    set-alarm seconds message
                ][
                    set-alarm seconds "Alarm triggered!"
                ]
            ]
        ]

        alarm: func [seconds message [string! unset!]] [
            system/words/seconds: seconds
            if value? 'message [
                system/words/message: message
            ]
            view layout [
                monitor: text 256 "" rate 1 feel [engage: :periodic]
                button 256 "re/start countdown" [
                    either value? 'message [
                        set-alarm seconds message
                    ][
                        set-alarm seconds "Alarm triggered!"
                    ]
                    set-face monitor "Alarm set."
                ]
            ]
        ]

    Read the article

  • moving a sprite causes jerky movement

    - by mac_55
    I've got some jerky movement of my sprite. Basically, when the user touches a point on the screen, the sprite should move to that point. This is working mostly fine... it's even taking into account a delta, because the frame rate may not be consistent. However, I notice that the y movement usually finishes before the x movement (even when the distances to travel are the same), so the sprite appears to move in an 'L' shape rather than a smooth diagonal line. Vertical and horizontal velocity (vx, vy) are both set to 300. Any idea what's wrong? How can I get my sprite to move in a smooth diagonal line?

        - (void)update:(ccTime)dt
        {
            int x = self.position.x;
            int y = self.position.y;

            // if ball is to the left of target point
            if (x < targetx) {
                // if movement of the ball won't take it to its target position
                if (x + (vx * dt) < targetx) {
                    x += vx * dt;
                } else {
                    x = targetx;
                }
            } else if (x > targetx) { // same with x being too far to the right
                if (x - (vx * dt) > targetx) {
                    x -= vx * dt;
                } else {
                    x = targetx;
                }
            }

            if (y < targety) {
                if (y + (vy * dt) < targety) {
                    y += vy * dt;
                } else {
                    y = targety;
                }
            } else if (y > targety) {
                if (y - (vy * dt) > targety) {
                    y -= vy * dt;
                } else {
                    y = targety;
                }
            }

            self.position = ccp(x, y);
        }
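
    The 'L' shape comes from treating the axes independently: each axis clamps to its target on its own schedule, and the int casts quantize movement every frame on top of that. Moving along the normalized direction vector, with float positions, makes both axes finish together. A sketch of the technique in Python:

        import math

        SPEED = 300.0   # points per second, replacing separate vx/vy

        def update(pos, target, dt):
            # Keep positions as floats; round only when rendering.
            dx, dy = target[0] - pos[0], target[1] - pos[1]
            dist = math.hypot(dx, dy)
            step = SPEED * dt
            if dist <= step:
                return target                    # arrived this frame on both axes
            return (pos[0] + dx / dist * step,   # unit direction * step:
                    pos[1] + dy / dist * step)   # x and y finish together

        pos = (0.0, 0.0)
        for _ in range(10):
            pos = update(pos, (100.0, 100.0), 1.0 / 60.0)
        print(pos)   # advances along the straight diagonal toward (100, 100)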

    Read the article

  • GridView's NewValues and OldValues empty in the OnRowUpdating event.

    - by Abe Miessler
    I have the GridView below. I am binding to a custom data source in the code-behind. It gets into the OnRowUpdating event just fine, but there are no NewValues or OldValues. Any suggestions as to how I can get these values?

        <asp:GridView ID="gv_Personnel" runat="server"
            OnRowDataBound="gv_Personnel_DataBind"
            OnRowCancelingEdit="gv_Personnel_CancelEdit"
            OnRowEditing="gv_Personnel_EditRow"
            OnRowUpdating="gv_Personnel_UpdateRow"
            AutoGenerateColumns="false" ShowFooter="true"
            DataKeyNames="BudgetLineID"
            AutoGenerateEditButton="true" AutoGenerateDeleteButton="true">
            <Columns>
                <asp:BoundField HeaderText="Level of Staff" DataField="LineDescription" />
                <%-- <asp:BoundField HeaderText="Hrs/Units requested" DataField="NumberOfUnits" /> --%>
                <asp:TemplateField HeaderText="Hrs/Units requested">
                    <ItemTemplate>
                        <%# Eval("NumberOfUnits") %>
                    </ItemTemplate>
                    <EditItemTemplate>
                        <asp:TextBox ID="tb_NumUnits" runat="server" Text='<%# Bind("NumberOfUnits") %>' />
                    </EditItemTemplate>
                </asp:TemplateField>
                <asp:BoundField HeaderText="Hrs/Units of Applicant Cost Share" DataField="" NullDisplayText="0" />
                <asp:BoundField HeaderText="Hrs/Units of Partner Cost Share" DataField="" NullDisplayText="0" />
                <asp:BoundField FooterStyle-Font-Bold="true" FooterText="TOTAL PERSONNEL SERVICES:"
                    HeaderText="Rate" DataFormatString="{0:C}" DataField="UnitPrice" />
                <asp:TemplateField HeaderText="Amount Requested" ItemStyle-HorizontalAlign="Right"
                    FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true" />
                <asp:TemplateField HeaderText="Applicant Cost Share" ItemStyle-HorizontalAlign="Right"
                    FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true" />
                <asp:TemplateField HeaderText="Partner Cost Share" ItemStyle-HorizontalAlign="Right"
                    FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true" />
                <asp:TemplateField HeaderText="Total Project Cost" ItemStyle-HorizontalAlign="Right"
                    FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true" />
            </Columns>
        </asp:GridView>

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s, such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously I can use randomization methods (e.g. a genetic algorithm), but I was hoping this may be a well-studied problem in computer science or mathematics, with some informative literature and an efficient algorithm or three. Problem 2: same as above, except that adjacent characters cannot repeat; the i'th character in each string may not equal the i+1'th character. E.g. 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not. Background: the basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second-generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraint imposed in problem 2).
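
    A greedy baseline is easy to state and often good enough to seed fancier methods: stream through candidate strings (enumerated or random) and keep one only if it stays more than c edits from everything kept so far and, for problem 2, has no adjacent repeats. A sketch:

        import itertools

        def levenshtein(a, b):
            # Standard dynamic-programming edit distance, one row at a time.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def greedy_codes(alphabet, m, n, c):
            kept = []
            for cand in itertools.product(alphabet, repeat=m):
                tag = "".join(cand)
                if any(x == y for x, y in zip(tag, tag[1:])):   # problem 2 constraint
                    continue
                if all(levenshtein(tag, k) > c for k in kept):
                    kept.append(tag)
                    if len(kept) == n:
                        break
            return kept

        print(greedy_codes("ACGT", 5, 8, 2))   # 8 length-5 tags, pairwise distance > 2

    For the bioinformatic use case, the relevant literature is usually found under DNA barcode design and error-correcting codes for the Levenshtein metric.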

    Read the article
