Search Results

Search found 585 results on 24 pages for 'numerical equations'.

Page 19 of 24

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problems is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweaking is needed to make this work. I would like to ask if someone knows of a way to do what I described above, i.e. to get from a 2D ray in texture coordinate space to the 3D ray in view space. I did implement a simple version of the idea, which you can see in the following video: video here. Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;
            vec2 pixOrigin = pass_TexCoord; //tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);
            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;
            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);
                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;
                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0)
                    break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Eventually, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.
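
    Not the poster's method, just a sketch of the relation the question hinges on: along a straight segment in screen/texture space, 1/z (view-space depth) varies linearly, so the depth the 3D ray should have at the pixel reached at DDA parameter s can be recovered without re-projecting the 3D point every step. The function name and parameters below are illustrative only:

        def depth_along_screen_segment(z0, z1, s):
            # Perspective-correct interpolation: 1/z is affine in screen space,
            # so interpolate 1/z linearly and invert. s is the normalized 2D
            # DDA parameter in [0, 1]; z0 and z1 are the view-space depths of
            # the segment's two projected endpoints (same sign convention).
            inv_z = (1.0 - s) / z0 + s / z1
            return 1.0 / inv_z

    With that, the 2D DDA can advance purely in texel space and the comparison depth comes from this interpolation instead of the projectionMatrix multiply inside the loop.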

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
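
    A tiny worked example of the same arithmetic (the numbers below are invented for illustration, not Red Gate's figures):

        monthly_revenue = 5000.0          # total subscription revenue this month
        active_customers = 100
        cancellations_last_3_months = 6

        arpu = monthly_revenue / active_customers                  # average revenue per customer per month
        churn_3m = cancellations_last_3_months / active_customers  # churn over a 3-month window
        lifetime_months = (1.0 / churn_3m) * 3                     # 1/churn gives 3-month periods, so x3 for months
        ltv = arpu * lifetime_months
        print(f"ARPU = {arpu:.2f}, average lifetime = {lifetime_months:.1f} months, LTV = {ltv:.2f}")
        # ARPU = 50.00, average lifetime = 50.0 months, LTV = 2500.00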

    Read the article

  • Is there a problem with scrollTop in Chrome?

    - by Shaun
    I am setting scrollTop and scrollLeft for a div that I am working with. The code looks like this:

        div.scrollLeft = content.cx*scalar - parseInt(div.style.width)/2;
        div.scrollTop = content.cy*scalar - parseInt(div.style.height)/2;

    This works just fine in FF, but only scrollLeft works in Chrome. As you can see, the two use almost identical equations, and as it works in FF I am just wondering if this is a problem with Chrome? Update: if I switch the order of the assignments, then scrollTop will work and scrollLeft won't.

        <div id="container" style = "height:600px; width:600px; overflow:auto;" onscroll = "updateCenter()">
        <script>
        var div = document.getElementById('container');

        function updateCenter() {
            svfdim.cx = (div.scrollLeft + parseFloat(div.style.width)/2)/scalar;
            svfdim.cy = (div.scrollTop + parseFloat(div.style.height)/2)/scalar;
        }

        function updateScroll(svfdim, scalar, div) {
            div.scrollTop = svgdim.cy*scalar - parseFloat(div.style.height)/2;
            div.scrollLeft = svgdim.cx*scalar - parseFloat(div.style.width)/2;
        }

        function resizeSVG(Root) {
            Root.setAttribute("height", svfdim.height*scalar);
            Root.setAttribute("width", svfdim.width*scalar);
            updateScroll(svgdim, scalar, div);
        }
        </script>

    Read the article

  • Mathematically Find Max Value without Conditional Comparison

    - by Cnich
    ---------- Updated ------------

    codymanix and moonshadow have been a big help thus far. I was able to solve my problem using the equations; instead of using a right shift I divided by 29, because with 32-bit signed values 2^31 overflows. Which works! Prototype in PHP:

        $r = $x - (($x - $y) & (($x - $y) / (29)));

    Actual code for LEADS (you can only do one math function PER LINE!!! AHHHH!!!):

        DERIVED1 = IMAGE1 - IMAGE2;
        DERIVED2 = DERIVED1 / 29;
        DERIVED3 = DERIVED1 AND DERIVED2;
        MAX = IMAGE1 - DERIVED3;

    ---------- Original Question -----------

    I don't think this is quite possible with my application's limitations, but I figured it's worth a shot to ask. I'll try to make this simple. I need to find the max value between two numbers without being able to use an IF or any conditional statement. In order to find the MAX value I can only perform the following functions: Divide, Multiply, Subtract, Add, NOT, AND, OR. Let's say I have two numbers, A = 60 and B = 50. Now if A were always greater than B, it would be simple to find the max value: MAX = (A - B) + B; e.g. 10 = (60 - 50), then 10 + 50 = 60 = MAX. The problem is that A is not always greater than B. I cannot perform ABS, MAX, MIN or conditional checks with the scripting application I am using. Is there any way, using only the limited operations above, to find a value VERY close to the max?
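
    For reference, a quick check of the shift-based form of the same trick (a sketch in Python, not the LEADS/PHP environment from the question; Python's >> is an arithmetic shift, which is what the divide-by-constant stands in for):

        def branchless_max(x, y):
            # (x - y) >> 31 is 0 when x >= y and -1 (all ones) when x < y,
            # provided the difference fits in a signed 32-bit value.
            return x - ((x - y) & ((x - y) >> 31))

        assert branchless_max(60, 50) == 60
        assert branchless_max(50, 60) == 60
        assert branchless_max(-5, 3) == 3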

    Read the article

  • Reinforcement learning toy project

    - by Betamoo
    My toy project to learn and apply Reinforcement Learning is:

    - An agent tries to reach a goal state "safely" and "quickly".
    - But there are projectiles and rockets launched at the agent along the way.
    - The agent can determine rocket positions (with some noise) only if they are "near".
    - The agent must then learn to avoid crashing into these rockets.
    - The agent has fuel (rechargeable with time) which is consumed by its motion.
    - Continuous actions: accelerating forward, turning by an angle.

    I need some hints and names of RL algorithms that suit this case:

    - I think it is a POMDP, but can I model it as an MDP and just ignore the noise?
    - In the POMDP case, what is the recommended way of evaluating the probabilities?
    - Which is better to use in this case: value functions or policy iteration?
    - Can I use a NN to model the environment dynamics instead of using explicit equations?
    - If yes, is there a specific type/model of NN to recommend?
    - I think the actions must be discretized, right?

    I know it will take time and effort to learn such a topic, but I am eager to. You may answer some of the questions if you cannot answer all of them. Thanks

    Read the article

  • Sample Java code for approximate string matching, or Boyer-Moore extended for approximate string matching

    - by Dolphin
    Hi, I need to find 1. mismatches (incorrectly played notes), 2. insertions (additional notes played), and 3. deletions (missed notes) in a music piece (e.g. note pitches [string values] stored in a table) against a reference music piece. This is possible either through exact string matching algorithms or through dynamic programming / approximate string matching algorithms. However, I realised that approximate string matching is more appropriate for my problem because it identifies mismatches, insertions and deletions of notes; alternatively, an extended version of Boyer-Moore that supports approximate matching would do. Is there any link to sample Java code so I can try out approximate string matching? I keep finding complex explanations and equations, but I hope I could do well with some sample code and simple explanations. Or can I find any sample Java code on Boyer-Moore extended for approximate string matching? I understand the Boyer-Moore concept, but I am having trouble adjusting it to support approximate matching (i.e. to support mismatch, insertion, deletion). Also, what is the most efficient approximate string matching algorithm (in the way that Boyer-Moore is among exact string matching algorithms)? I would greatly appreciate any insight/suggestions. Many thanks in advance.
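
    Not an answer from the thread, just a minimal sketch of the dynamic-programming approach the question mentions, written in Python rather than Java (adapt as needed); the note names are made up:

        def edit_operations(reference, played):
            # Classic edit-distance table plus a traceback that labels each step
            # as match, mismatch, insertion, or deletion.
            m, n = len(reference), len(played)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                dp[i][0] = i
            for j in range(n + 1):
                dp[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if reference[i - 1] == played[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,        # deletion (missed note)
                                   dp[i][j - 1] + 1,        # insertion (extra note)
                                   dp[i - 1][j - 1] + cost) # match / mismatch
            ops, i, j = [], m, n
            while i > 0 or j > 0:
                if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (0 if reference[i - 1] == played[j - 1] else 1):
                    ops.append("match" if reference[i - 1] == played[j - 1] else "mismatch")
                    i, j = i - 1, j - 1
                elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
                    ops.append("deletion")
                    i -= 1
                else:
                    ops.append("insertion")
                    j -= 1
            return dp[m][n], list(reversed(ops))

        print(edit_operations(["C", "E", "G", "C"], ["C", "E", "A", "G", "C"]))
        # (1, ['match', 'match', 'insertion', 'match', 'match'])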

    Read the article

  • LaTeX, align alignment characters between align blocks

    - by ccook
    I would like to align the alignment characters between two align blocks so that I can have some text in the middle of a derivation while the equations maintain their horizontal alignment. For example, take the following excerpt of LaTeX using align:

        \begin{align*}
        \frac{\delta \phi}{\delta x_1} = {} &\frac{9}{8}\frac{\delta_1\phi}{\delta_1x_1}-\frac{1}{8}\frac{\delta_3\phi}{\delta_3x_1} \\
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]-\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

        some text in the middle

        \begin{align*}
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]-\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

    Ideally I would like the alignment point of the equation in the second block to line up with that of the second equation in the first block. I could work around it by not having text in the middle; however, I would like this functionality. EDIT: I would like to have a good amount of text in between, say three to four lines that line up as normal paragraphs. Adding text in the alignment block is the workaround I poorly alluded to.
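
    One approach commonly suggested for exactly this situation (a sketch, not from the thread): keep everything in a single align* environment and put the text in amsmath's \intertext, which is set as a normal paragraph while the ampersand alignment carries across it:

        \begin{align*}
        \frac{\delta \phi}{\delta x_1} = {} &\frac{9}{8}\frac{\delta_1\phi}{\delta_1x_1}
            -\frac{1}{8}\frac{\delta_3\phi}{\delta_3x_1} \\
        \intertext{some text in the middle, set as an ordinary paragraph; the
        alignment on the ampersand continues below}
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]
          -\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

    \intertext accepts a longer passage, so the three-to-four-line paragraph from the EDIT fits; \shortintertext from the mathtools package does the same job with tighter vertical spacing.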

    Read the article

  • Confused over behavior of List.mapi in F#

    - by James Black
    I am building some equations in F#, and while working on my polynomial class I found some odd behavior using List.mapi. Basically, each polynomial has an array, so 3*x^2 + 5*x + 6 would be [|6, 5, 3|] in the array. When adding polynomials, if one array is longer than the other, then I just need to append the extra elements to the result, and that is where I ran into a problem. Later I want to generalize it to not always use a float, but that will be after I get more working. So, the problem is that I expected List.mapi to return a List, not individual elements, but in order to put the lists together I had to put [] around my use of mapi, and I am curious why that is the case. This is more complicated than I expected; I thought I should be able to just tell it to make a new List starting at a certain index, but I can't find any function for that.

        type Polynomial() =
            let mutable coefficients:float [] = Array.empty

            member self.Coefficients with get() = coefficients

            static member (+) (v1:Polynomial, v2:Polynomial) =
                let ret = List.map2(fun c p -> c + p) (List.ofArray v1.Coefficients) (List.ofArray v2.Coefficients)
                let a = List.mapi(fun i x -> x)
                match v1.Coefficients.Length - v2.Coefficients.Length with
                | x when x < 0 -> ret :: [((List.ofArray v1.Coefficients) |> a)]
                | x when x > 0 -> ret :: [((List.ofArray v2.Coefficients) |> a)]
                | _ -> [ret]

    Read the article

  • Mathematics for Computer Science Students

    - by Ender
    To cut a long story short, I am a CS student that has received no formal Post-16 Maths education for years. Right now even my Algebra is extremely rusty and I have a couple of months to shape up my skills. I've got a couple of video lectures in my bookmarks, consisting of: Pre-Calculus Algebra Calculus Probability Introduction to Statistics Differential Equations Linear Algebra My aim as of today is to be able to read the CLRS book Introduction to Algorithms and be able to follow the Mathematical notation in that, as well as being able to confidently read and back-up any arguments written in Mathematical notation. Aside from these video lectures, can anyone recommend any good books to help teach someone wishing to go from a low-foundation level to a more advanced level of Mathematics? Just as a note, I've taken a first-year module in Analytical Modelling, so I understand some of the basic concepts of Discrete Mathematics. EDIT: Just a note to those that are looking to learn Linear Algebra using the Video Lectures I have posted up. Peteris Krumins' Blog contains a run-through of these lecture notes as well as his own commentary and lecture notes, an invaluable resource for those looking to follow the lectures too.

    Read the article

  • Hausman Test, Fixed/random effects in SAS?

    - by John
    Hey guys, I'm trying to do a fixed effects OLS regression, a random effects OLS regression, and a Hausman test to back up my choice of one of those models. Alas, there does not seem to be a lot of information about what the code looks like when you want to do this. For the Hausman test I found that you do something like this:

        proc model data=one out=fiml2;
            endogenous y1 y2;
            y1 = py2 * y2 + px1 * x1 + interc;
            y2 = py1 * y1 + pz1 * z1 + d2;
            fit y1 y2 / ols 2sls hausman;
            instruments x1 z1;
        run;

    However, I do not have the equations in the middle, which I assume to be the fixed and random effects models? On another site I found that PROC TSCSREG automatically displays the Hausman test; unfortunately, this does not work either. When I type PROC TSCSREG data = clean; the word "data" does not become blue, meaning SAS does not recognize this as a type of data input?

        proc tscsreg data = clean;
            var nof capm_erm sigma cv fvyrgro meanest tvol bmratio size ab;
        run;

    I tried this, but it obviously doesn't work since it does not recognize the data input. I've been searching, but I can't seem to find a proper example of what the code for a Hausman test looks like. On the SAS site I can't find the code one has to use to perform a fixed/random effects model either. My data has 1784 observations, 578 different firms (cross section?) and spans the 2001-2006 period, in months. Any help?

    Read the article

  • Ray Generation Inconsistency

    - by Myx
    I have written code that generates a ray from the "eye" of the camera to the viewing plane some distance away from the camera's eye:

        R3Ray ConstructRayThroughPixel(...)
        {
            R3Point p;
            double increments_x = (lr.X() - ul.X())/(double)width;
            double increments_y = (ul.Y() - lr.Y())/(double)height;
            p.SetX( ul.X() + ((double)i_pos+0.5)*increments_x );
            p.SetY( lr.Y() + ((double)j_pos+0.5)*increments_y );
            p.SetZ( lr.Z() );
            R3Vector v = p-camera_pos;
            R3Ray new_ray(camera_pos,v);
            return new_ray;
        }

    ul is the upper left corner of the viewing plane and lr is the lower right corner of the viewing plane. They are defined as follows:

        R3Point org = scene->camera.eye + scene->camera.towards * radius;
        R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
        R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);
        R3Point lr = org + dx - dy;
        R3Point ul = org - dx + dy;

    Here, org is the center of the viewing plane, with radius being the distance between the viewing plane and the camera eye; dx and dy are the displacements in the x and y directions from the center of the viewing plane. The ConstructRayThroughPixel(...) function works perfectly for a camera whose eye is at (0,0,0). However, when the camera is at some different position, not all needed rays are produced for the image. Any suggestions what could be going wrong? Maybe something wrong with my equations? Thanks for the help.
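
    A hedged sketch (in Python with NumPy rather than the R3 classes above) of one likely culprit: setting each pixel coordinate per world axis, including the fixed z taken from lr, only works when the viewing plane is axis-aligned, i.e. when the camera sits at the origin looking down an axis. Interpolating the full 3D point between the plane's corners along dx and dy avoids that assumption; the names and signature here are illustrative only:

        import numpy as np

        def construct_ray_through_pixel(eye, org, dx, dy, i, j, width, height):
            # org is the centre of the viewing plane; dx, dy are the half-extent
            # vectors used to build ul and lr in the question.
            u = (i + 0.5) / width          # 0..1 from left to right
            v = (j + 0.5) / height         # 0..1 from bottom to top
            p = org + (2.0 * u - 1.0) * dx + (2.0 * v - 1.0) * dy
            d = p - eye
            return eye, d / np.linalg.norm(d)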

    Read the article

  • Code bacteria: evolving mathematical behavior

    - by Stefano Borini
    It is not my intention to plug my blog with a link, but I don't have any other way to clarify what I really mean. The article is quite long, and it's in three parts (1, 2, 3), but if you are curious, it's worth reading. A long time ago (5 years, at least) I programmed a Python program which generated "mathematical bacteria". These bacteria are Python objects with a simple opcode-based genetic code. You can feed them a number and they return a number, according to the execution of their code. I generate their genetic codes at random, and apply an environmental selection to those objects producing a result similar to a predefined expected value. Then I let them duplicate, introduce mutations, and evolve them. The result is quite interesting, as their genetic code basically learns how to solve simple equations, even for values outside the training dataset. Now, this thing is just a toy. I had time to waste and I wanted to satisfy my curiosity. However, I assume that something, in terms of research, has already been done along these lines... I hope I am just reinventing the wheel here. Are you aware of more serious attempts at creating in-silico bacteria like the ones I programmed? Please note that this is not really "genetic algorithms". Genetic algorithms are when you use evolution/selection to improve a vector of parameters against a given scoring function. This is kind of different: I optimize the code, not the parameters, against a given scoring function.

    Read the article

  • How to automatically read in calculated values with PHPExcel?

    - by Edward Tanguay
    I have the following Excel file. I read it in by looping over every cell and getting the value with getCell(...)->getValue():

        $highestColumnAsLetters = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestColumn(); //e.g. 'AK'
        $highestRowNumber = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestRow();
        $highestColumnAsLetters++;
        for ($row = 1; $row < $highestRowNumber + 1; $row++) {
            $dataset = array();
            for ($columnAsLetters = 'A'; $columnAsLetters != $highestColumnAsLetters; $columnAsLetters++) {
                $dataset[] = $this->objPHPExcel->setActiveSheetIndex(0)->getCell($columnAsLetters.$row)->getValue();
                if ($row == 1) {
                    $this->column_names[] = $columnAsLetters;
                }
            }
            $this->datasets[] = $dataset;
        }

    However, although it reads in the data fine, it reads in the calculations literally. I understand from discussions like this one that I can use getCalculatedValue() for calculated cells. The problem is that in the Excel sheets I am importing, I do not know beforehand which cells are calculated and which are not. Is there a way for me to read in a cell so that it automatically gets the value if the cell has a simple value, and gets the result of the calculation if it is a calculation? Answer: it turns out that getCalculatedValue() works for all cells, which makes me wonder why this isn't the default for getValue(), since I would think one would usually want the value of the calculations instead of the equations themselves. In any case, this works:

        ...->getCell($columnAsLetters.$row)->getCalculatedValue();

    Read the article

  • Linear regression confidence intervals in SQL

    - by Matt Howells
    I'm using some fairly straightforward SQL code to calculate the coefficients of regression (intercept and slope) of some (x,y) data points, using least squares. This gives me a nice best-fit line through the data. However, we would like to be able to see the 95% and 5% confidence intervals for the line of best fit (the curves below). What these mean is that the true line has a 95% probability of being below the upper curve and a 95% probability of being above the lower curve. How can I calculate these curves? I have already read Wikipedia etc. and done some googling, but I haven't found understandable mathematical equations to be able to calculate this. Edit: here is the essence of what I have right now.

        --sample data
        create table #lr (x real not null, y real not null)
        insert into #lr values (0,1)
        insert into #lr values (4,9)
        insert into #lr values (2,5)
        insert into #lr values (3,7)

        declare @slope real
        declare @intercept real

        --calculate slope and intercept
        select
            @slope = ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/
                ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)),
            @intercept = avg(y) - ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/
                ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)) * avg(x)
        from #lr

    Thank you in advance.
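
    Not from the thread, but one standard textbook form of the confidence band for the fitted mean of a simple linear regression, which may be what the curves are (note this is the band for the regression line itself, not a prediction interval for new observations):

        \[
        \hat{y}_0 \;\pm\; t_{\alpha,\,n-2}\, s \sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_i (x_i - \bar{x})^2}},
        \qquad
        s^2 = \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{n-2}
        \]

    Here \hat{y}_0 is the fitted value at x_0 and t_{\alpha, n-2} is the t quantile with n-2 degrees of freedom (one-sided alpha = 0.05 for the 95%/5% curves described above); every term is computable with the same SUM/COUNT/AVG aggregates already used for the slope and intercept.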

    Read the article

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, "reader" and "updater". The whole system runs on Windows Server 2008 R2 x64. "Updater" constantly accesses the disk in a linear manner, updating data. The system is set up in such a way that the updater always has infinite data to update; consider that it is constantly approximating a solution of a huge set of equations that takes up the entire 2TB disk drive. The updater uses ReadFile and WriteFile to process data in a linear fashion. "Reader" is occasionally invoked by the user to get some pieces of data. Usually the user would read several 4kb blocks from the drive and stop; occasionally the user needs to read up to 100mb sequentially, and in exceptional cases up to several gigabytes. The reader maps files to memory to get the data it needs. What I would like to achieve is for "reader" to have absolute priority, so that "updater" would completely stop if needed and "reader" could get the data the user needs ASAP. Is this problem solvable by using SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic in "reader" and "updater", and would rather have the OS take care of priorities.

    Read the article

  • Accelerometer gravity components

    - by Dvd
    Hi, I know this question has surely been solved somewhere many times already; please enlighten me if you know of those solutions, thanks. Quick rundown: I want to compute, from a 3-axis accelerometer, the gravity component on each of these 3 axes. I have used 2-axis free body diagrams to work out the accelerometer's gravity component in the world X-Z, Y-Z and X-Y planes. But the solution seems slightly off: it's acceptable for extreme cases when only 1 accelerometer axis is exposed to gravity, but for a pitch and roll of both 45 degrees, the combined total magnitude is greater than gravity (obtained from Xa^2 + Ya^2 + Za^2 = g^2, where Xa, Ya and Za are accelerometer readings in its X, Y and Z axes). More detail: the device is a Nexus One, and it has a magnetic field sensor for azimuth, pitch and roll in addition to the 3-axis accelerometer. In the world's axes (with Z in the same direction as gravity, and either X or Y pointing to the north pole; don't think this matters much?), I assumed my device has a pitch (P) in the Y-Z plane, and a roll (R) in the X-Z plane. With that I used simple trig to get:

        Sin(R) = Ax/Gxz
        Cos(R) = Az/Gxz
        Tan(R) = Ax/Az

    There is another such set for the pitch, P. Now I defined gravity to have 3 components in the world's axes: a Gxz that is measurable only in the X-Z plane, a Gyz for Y-Z, and a Gxy for the X-Y plane, with

        Gxz^2 + Gyz^2 + Gxy^2 = 2*G^2

    (the 2G is because gravity is effectively included twice in this definition). Oh, and the X-Y axis produces something more exotic... I'll explain if required later. From these equations I obtained a formula for Az, and removed the tan operations because I don't know how to handle tan(90) calculations (it's infinity?). So my question is, does anyone know whether I did this right or wrong, or can you point me in the right direction? Thanks! Dvd

    Read the article

  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm recreating the standard "overlay" blend. I'm looking at the "Photoshop math" macros here: http://www.nathanm.com/photoshop-blending-math/ (see also here for a more readable version of Overlay). Both source images are in fairly standard RGBA (8 bits each) format, as is the destination. When both images are fully opaque (alpha is 1.0), the result is blended correctly as expected. But if my "blend" layer (the top image) has transparency in it, I'm a little flummoxed as to how to factor that alpha into the blending equation correctly. I expect it to work such that transparent pixels in the blend layer have no effect on the result, opaque pixels in the blend layer do the overlay blend as normal, and semitransparent blend layer pixels have some scaled effect on the result. Can someone explain to me the blend equations or the concept behind doing this? Bonus points if you can help me do it such that the resulting image has correctly premultiplied alpha (which only comes into play for pixels that are not opaque in both layers, I think). Thanks!

        // factor in blendLayerA, (1-blendLayerA) somehow?
        resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR);
        resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG);
        resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB);
        resultA = 1.0; // also, what should this be??
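
    Not an authoritative answer, but the usual approach matches the expectation described above: compute the full overlay result and then mix it with the base by the blend layer's alpha. A minimal per-channel sketch in Python (values assumed to be floats in [0, 1], straight/non-premultiplied colour):

        def overlay(base, blend):
            # Standard overlay for one channel: multiply in the shadows, screen in the highlights.
            if base < 0.5:
                return 2.0 * base * blend
            return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

        def overlay_with_alpha(base_rgb, blend_rgb, blend_alpha):
            # Transparent blend pixels leave the base untouched, opaque ones give the
            # full overlay, semitransparent ones are a linear mix of the two.
            return tuple(
                (1.0 - blend_alpha) * b + blend_alpha * overlay(b, s)
                for b, s in zip(base_rgb, blend_rgb)
            )

    For the premultiplied-alpha bonus question, this sketch deliberately stays in straight colour; premultiply at the end if the destination needs it.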

    Read the article

  • How to implement the Overlay blend method using OpenGL ES 1.1

    - by Cylon
    Below is the overlay algorithm. I want to use it on the iPhone, but the iPhone 3G only supports OpenGL ES 1.1 and cannot use GLSL. Can I use the blend function or texture combiners to implement it? Thank you.

    ///////// Reference from OpenGL Shading® Language, Third Edition ///////////
    19.6.12 Overlay. OVERLAY first computes the luminance of the base value. If the luminance value is less than 0.5, the blend and base values are multiplied together. If the luminance value is greater than 0.5, a screen operation is performed. The effect is that the base value is mixed with the blend value, rather than being replaced. This allows patterns and colors to overlay the base image, but shadows and highlights in the base image are preserved. A discontinuity occurs where luminance = 0.5. To provide a smooth transition, we actually do a linear blend of the two equations for luminance in the range [0.45, 0.55].

        float luminance = dot(base, lumCoeff);
        if (luminance < 0.45)
            result = 2.0 * blend * base;
        else if (luminance > 0.55)
            result = white - 2.0 * (white - blend) * (white - base);
        else {
            vec4 result1 = 2.0 * blend * base;
            vec4 result2 = white - 2.0 * (white - blend) * (white - base);
            result = mix(result1, result2, (luminance - 0.45) * 10.0);
        }

    Read the article

  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C. Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n=5 for now) from a graphical menu. The items are all medieval themed:

         _______________________________
        |           |           |       |
        |    Bow    |   Sword   | Staff |
        |-----------|-----------|-------|
        |  Shield   |  Potion   | Gold  |
        |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

    - Determines which items were selected
    - Drops each string to lowercase (ie: Bow becomes bow, etc)
    - Calculates a simple string hash for each string (ie: bow = b=2, o=15, w=23, sum = 2+15+23 = 40)
    - Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key
    - Sums together the keys for each of the selected items; this is the final validation hash

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (ie: if the final hash equals 1111, then 2222, 3333, 8888, etc are also valid).

    So, for example, let's say I select: Bow (1), Sword (2), Staff (10), Shield (1), Potion (6). The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for the corresponding string, then sums these keys together. eg:

        Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff) + 1*HASH(Shield) + 6*HASH(Potion)

    By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to calculate:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    Where:

    - B is arbitrary
    - A_j are the selected coefficients/values for each string/category
    - x_j are the hash values for each string/category
    - y is the final validation hash (eg: 1111 above)
    - B, y, A_j, x_j are all discrete-valued, positive, and non-zero (ie: natural numbers)

    Can someone either assist me in solving this problem or point me to a similar example (ie: code, worked-out equations, etc)? I just need to solve the final step (ie: (B)(y) = ...). Thank you all in advance.
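
    Not part of the original question, but a small brute-force sketch of the collision search it describes (the letter-position hash and item names come from the question; the search ranges are arbitrary, and every item is picked at least once here purely for brevity):

        from itertools import product

        def word_hash(word):
            # Letter-position hash: a=1, b=2, ..., z=26, summed over the word.
            return sum(ord(c) - ord('a') + 1 for c in word.lower())

        items = ["bow", "sword", "staff", "shield", "potion", "gold"]
        hashes = [word_hash(w) for w in items]

        seen = {}
        for counts in product(range(1, 7), repeat=len(items)):  # each item picked 1..6 times
            total = sum(c * h for c, h in zip(counts, hashes))
            if total in seen and seen[total] != counts:
                print("collision:", seen[total], "and", counts, "both give", total)
                break
            seen.setdefault(total, counts)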

    Read the article

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters of the beta-binomial distribution, and I need to assign the parameter values into the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with Sage:

        <<echo=false,results=hide>>=
        mean.raw <- c(5, 3.5, 2)
        theta <- 0.5
        var.raw <- mean.raw + ((mean.raw^2)/theta)
        @

        \begin{frame}[fragile]
        \frametitle{Test of Sage 2}
        \begin{sagesilent}
        var('a1, b1, a2, b2, a3, b3')
        eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}]
        eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}]
        eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}]
        s1 = solve(eqn1, a1,b1)
        s2 = solve(eqn2, a2,b2)
        s3 = solve(eqn3, a3,b3)
        \end{sagesilent}
        Solutions of Beta Binomial Parameters:
        \begin{itemize}
        \item $\sage{s1[0]}$
        \item $\sage{s2[0]}$
        \item $\sage{s3[0]}$
        \end{itemize}
        \end{frame}

    Everything compiles just fine, and on that slide I am able to see the solutions for the three equations' respective parameters in the itemized list (for example, the first item in the itemized list on that Beamer slide is output as [a1=(328/667), b1=(65272/667)]; I am not able to post an image of the Beamer slide, but I hope you get the idea). I would like to save the parameter values a1, b1, a2, b2, a3, b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from Sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?

    Read the article

  • Collision of dot and line in 2D space

    - by Anderiel
    So I'm trying to make my first game on Android. The thing is I have a small moving ball and I want it to bounce off a line that I drew. For that I need to find whether the ball's x, y coordinates are also the coordinates of one of the points on the line. I tried to implement the standard parametric line equations, x = a1 + t*u1 and y = a2 + t*u2, which give (x - a1)/u1 = (y - a2)/u2 (the two t values have to be equal if the point is on the line), where x and y are the coordinates I'm testing, point [a1, a2] is a point on the line, and u = (u1, u2) is the direction vector of the line. Here's the code:

        public boolean Collided() {
            float u1 = Math.abs(Math.round(begin_X) - Math.round(end_X));
            float u2 = Math.abs(Math.round(begin_Y) - Math.round(end_Y));
            float t_x = Math.round((elect_X - begin_X) / u1);
            float t_y = Math.round((elect_Y - begin_Y) / u2);
            if (t_x == t_y) {
                return true;
            } else {
                return false;
            }
        }

    Points [begin_X, begin_Y] and [end_X, end_Y] are the two endpoints of the line, and [elect_X, elect_Y] are the coordinates of the ball. Theoretically it should work, but in reality the ball most of the time just goes straight through the line, or bounces somewhere it shouldn't.
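
    Not from the thread: because the ball moves in discrete steps, its centre will almost never satisfy the line equation exactly, so a common alternative is to test the distance from the ball's centre to the segment against the ball's radius (or some tolerance). A hedged sketch in Python rather than the Android/Java code above:

        import math

        def hits_segment(px, py, x1, y1, x2, y2, radius):
            # Distance from point (px, py) to the segment (x1, y1)-(x2, y2),
            # compared against the ball's radius.
            dx, dy = x2 - x1, y2 - y1
            length_sq = dx * dx + dy * dy
            if length_sq == 0:
                return math.hypot(px - x1, py - y1) <= radius
            # Projection of the point onto the infinite line, clamped to the segment.
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / length_sq))
            cx, cy = x1 + t * dx, y1 + t * dy
            return math.hypot(px - cx, py - cy) <= radius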

    Read the article

  • How do I become better at math after being a programmer for several years?

    - by loxs
    I've had quite a weird career till now. First I graduated from a medical school. Then I went into marketing (pharmaceuticals). And then, umm, after some time, I decided to go for my (till then) hobby and became a "professional" programmer. I've been quite successful at this ever since. I have quite a few languages "under my belt", the pay is not bad, and I have been involved in the open-source community quite heavily. The thing is that I suck at math :). Well, not totally of course, as I get my work done. But I don't know how much I suck, and I don't know how to find out. Math was never really a priority during my middle/high school years; I took only as little of it as I could get away with, because I was always preparing to go into medicine. Of course I know the basics of algebra, things like linear and quadratic equations, and also the basics of geometry. But there are things that I have missed. And lately I have been fascinated by things like probability theory, infinity, chaos/order etc. But every time I try to learn something about these topics, I hit a wall of terminology, special symbols, and a special kind of thinking that is quite like mine (a programmer's), but also a lot different (and appears weird to me). So, what kinds of books would you recommend to me? It's very hard to find something suitable. All that I find are either too easy (and boring) or totally impenetrable.

    Read the article

  • Generic applet style system for publishing mathematics demonstrations?

    - by Alex
    Anyone who's tried to study mathematics using online resources will have come across those Java applets that demonstrate a particular mathematical idea. Examples: http://www.math.ucla.edu/~tao/java/Mobius.html http://www.mathcs.org/java/programs/FFT/index.html I love the idea of this interactive approach because I believe it is very helpful in conveying mathematical principles. I'd like to create a system for visually designing and publishing these 'mathlets' such that they can be created by teachers with little programming experience. So in order to create this app, I'll need a GUI and a 'math engine'. I'll probably be working with .NET because that's what I know best, and I'd like to start experimenting with F#. Silverlight appeals to me as a presentation framework for this project (I'm not worried about interoperability right now). So my questions are:

    - Does anything like this exist already in full form?
    - Are there any GUI frameworks for displaying mathematical objects such as graphs and equations?
    - Are there decent open source libraries that expose a mathematical framework (Math.NET looks good, just wondering if there is anything else out there)?
    - Is there any existing work on taking mathematical models/demos built with Maple/MATLAB/Octave/Mathematica etc. and publishing them to the web?

    Read the article

  • Solving problems with near infinite potential solutions

    - by Zonda333
    Today I read the following problem: Use the digits 2, 0, 1, 1 and the operations +, -, x, ÷, sqrt, ^, !, (), combinations, and permutations to write equations for the counting numbers 1 through 100. All four digits must be used in each expression. Only the digits 2, 0, 1, 1 may be used, and each must be used exactly once. Decimals may be used, as in .1, .02, etc. Digits may be combined; numbers such as 20 or 101 may be used. Example: 60 = 10*(2+1)!, 54 = ¹¹C2 - 0! Though I was able to find around 50 solutions quite easily in my head, I thought programming it would be a far superior solution. However, I then realized I had no clue how to go about solving a problem like this. I am not asking for complete code for me to copy and paste, but for ideas about how I would solve this problem, and others like it that have nearly infinite potential solutions. As I will be writing it in Python, where I have the most experience, I would prefer if the answers were more Python-based, but general ideas are great too.
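
    Not a complete answer, but a minimal sketch of the brute-force direction (written in Python, as requested above): recursively combine the four digits with binary operators and record every whole number between 1 and 100 that comes out. Factorials, square roots, digit concatenation, and combinations are left out here for brevity, so this only reaches a subset of the targets.

        import itertools

        def safe_div(a, b):
            return a / b if b != 0 else None

        OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b, safe_div]

        def reachable(values):
            # Yield every number obtainable by repeatedly combining two of the
            # remaining values with one of the binary operators.
            if len(values) == 1:
                yield values[0]
                return
            for i, j in itertools.permutations(range(len(values)), 2):
                rest = [v for k, v in enumerate(values) if k not in (i, j)]
                for op in OPS:
                    result = op(values[i], values[j])
                    if result is not None:
                        yield from reachable(rest + [result])

        hits = set()
        for value in reachable([2.0, 0.0, 1.0, 1.0]):
            if 1 <= value <= 100 and abs(value - round(value)) < 1e-9:
                hits.add(int(round(value)))
        print(sorted(hits))

    Adding the unary operations and concatenated digits is then a matter of expanding the operator list and the starting value sets; memoising on the multiset of remaining values keeps the search tractable.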

    Read the article

  • ReplaceAll not working as expected

    - by Tim Kemp
    Still early days with Mathematica, so please forgive what is probably a very obvious question. I am trying to generate some parametric plots. I have:

        ParametricPlot[{
            (a + b) Cos[t] - h Cos[(a + b)/b t],
            (a + b) Sin[t] - h Sin[(a + b)/b t]},
            {t, 0, 2 \[Pi]}, PlotRange -> All] /. {a -> 2, b -> 1, h -> 1}

    No joy: the replacement rules are not applied, and a, b and h remain undefined. If I instead do:

        Hold@ParametricPlot[{
            (a + b) Cos[t] - h Cos[(a + b)/b t],
            (a + b) Sin[t] - h Sin[(a + b)/b t]},
            {t, 0, 2 \[Pi]}, PlotRange -> All] /. {a -> 2, b -> 1, h -> 1}

    it looks like the rules ARE working, as confirmed by the output:

        Hold[ParametricPlot[{(2 + 1) Cos[t] - 1 Cos[(2 + 1) t],
            (2 + 1) Sin[t] - 1 Sin[(2 + 1) t]}, {t, 0, 2 \[Pi]}, PlotRange -> All]]

    Which is what I'd expect. Take the Hold off, though, and the ParametricPlot doesn't work. There's nothing wrong with the equations or the ParametricPlot itself, though, because I tried setting values for a, b and h in a separate expression (a=2; b=1; h=1) and I get my pretty double cardioid out as expected. So, what am I doing wrong with ReplaceAll, and why are the transformation rules not working? This is another fundamentally important aspect of MMA that my OOP-ruined brain isn't understanding. I tried reading up on ReplaceAll and ParametricPlot, and the closest clue I found was that "ParametricPlot has attribute HoldAll and evaluates f only after assigning specific numerical values to variables", which didn't help much or I wouldn't be here. Thanks.

    Read the article
