Search Results

Search found 1295 results on 52 pages for 'xyz sad'.


  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising): <script> var largeArray = [/* lots of stuff in here */]; </script> In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console: Error: script stack space quota is exhausted In Chrome, I simply get the sad-tab page: Cut to the chase, already Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong... Edit 1 @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now. @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards. @Phrogz: each element looks something like this: {dateTime:new Date(1296176400000), terminalId:'terminal999', 'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680, 'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0} @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all! *other than the obvious: sending less data to the browser

    Read the article

  • Soon to be PhD in Computer Science - Which Path to Follow?

    - by mttr
    I am going to submit my PhD thesis within the next six months. My PhD is on managing the availability of large-scale distributed systems, so I have some experience actually building non-trivial systems (plus I have four years' experience working as a programmer). I am now trying to figure out what I should do following the PhD. I enjoy research (a quick definition: identify problem, come up with solution, ask interesting questions, find ways to answer them, build system, experiment, contribute some new knowledge and publish). I also like teaching and supervising students. It would seem that a career in academia is the ideal thing to do (can work on non-trivial problems and contribute something of use to some or more people). However, a career in academia has two significant drawbacks. First, it can be difficult to gain access to real systems with real users which then display real problems. This creates the danger that you do work that seems important (to you and maybe to some of your colleagues), but is not really relevant to anything or anyone. Second, the pay is pretty sad. Apparently, you have to sacrifice this for the privilege of doing research. I enjoy programming, but don't just want to hack some web-based system for the rest of my life. That is, working in IT for a bank is not a future I see myself enjoying. I want to work on interesting problems (that's difficult to define clearly): things where you don't know how to start, that take some time to figure out and attack, that require a rigorous approach to demonstrate that the problem has been solved, and problems that need a solution in the real world. Given the experience of people on stackoverflow, what do you think suitable options are and why (or alternatively, what gaps in my thinking does the above reveal)? Is industrial research (aka IBM Research, Microsoft Research) the only alternative avenue to a career in academia? What other areas, companies, occupations, etc. could provide me with stimulating, inspiring work? In which regions or countries am I most likely to find such work? Please share your experience.

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.

    Read the article

  • IE and Content-disposition inline vs. extension-token

    - by pinkgothic
    Preamble So IE does Mime-Type sniffing. That part's old news. Suggestions of how to combat it tend to be along the lines of 'supply a content-type IE trusts' (i.e. anything that isn't text/plain or application/octet-stream) or 'add extraneous data at the start of the file that is definitely of the type you're serving'. Now, I'm working on an application that has to allow message attachments (like in e-mails), and we want to close up XSS vectors. IE's mime sniffing is one of those vectors - a text/plain file with html content will trigger as html. Recoding isn't an option at this point, changing the attachments the user has provided can only happen if there is absolutely no doubt about the maliciousness of the file - and someone might want to send HTML as text. Now, Microsoft's MSDN article implies the situation might be easier to fix than advertised: If Internet Explorer knows the Content-Type specified and there is no Content-Disposition data, Internet Explorer performs a "MIME sniff," [...] Great! Except I don't have IE nor current means to reliably install it (I realise this is a fairly sad state for a webdeveloper to be in, I hope to fix this soon) and this is grey theory that I can't quite seem to get confirmed one way or the other. Local sources say that line is hogwash - IE will mime sniff anything that is Content-Disposition: inline / <default> and not specific enough for its tastes in -Type. But what about x-* ('extension-token' in the RFC)? Trying to google for how browsers handle Content-Disposition: <extension-token> hasn't yielded anything (though I may just be doing it wrong, my understanding of Google is seriously slipping lately). I found one question that looked promising, but turned out to be a misunderstanding on side of the thread author, meaning that the train of thought was never actually addressed there. Question(s) Does IE really Mime sniff if you expressly pass Content-Disposition: inline? If so: Does anyone here know how browsers handle Content-Disposition: <extension-token>? If they do this in a way that is for my purposes benign, by presuming it to be synonymous with the default (effectively 'inline', though I hear it's not defined anywhere?), is it specific enough for IE not to Mime sniff? Or am I actually shooting myself in the foot by thinking of pursuing this avenue?
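    For anyone hitting the same wall: a defensive fallback that does not depend on how IE interprets inline or extension-token dispositions is to serve user-supplied attachments as downloads and, for IE8 and later, to switch sniffing off outright with X-Content-Type-Options. The sketch below assumes an ASP.NET handler purely for illustration (the question never names the server stack), and the file name and path are made up.

```csharp
// Hypothetical ASP.NET handler -- the question doesn't say which server stack
// serves the attachments, so treat this API surface as an assumption.
using System.Web;

public class AttachmentHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        HttpResponse response = context.Response;

        // Keep the stored attachment untouched; only the headers change.
        response.ContentType = "text/plain";

        // "attachment" is the one disposition IE reliably treats as
        // "download, don't render", which sidesteps the sniffing question.
        response.AddHeader("Content-Disposition", "attachment; filename=\"message.txt\"");

        // IE8 and later only: tells the browser not to MIME-sniff the body.
        response.AddHeader("X-Content-Type-Options", "nosniff");

        response.WriteFile(context.Server.MapPath("~/App_Data/message.txt"));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```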

    Read the article

  • How to split HTML code with javascript or JQuery

    - by Dean
    Hi I'm making a website using JSP and servlets and I have to now break up a list of radio buttons to insert a textarea and a button. I have got the button and textarea to hide and show when you click on the radio button it shows the text area and button. But this only appears at the top and when there are hundreds on the page this will become awkward so i need a way for it to appear underneath. Here is what my HTML looks like when complied: <form action="addSpotlight" method="POST"> <table> <tr><td><input type="radio" value="29" name="publicationIDs" ></td><td>A System For Dynamic Server Allocation in Application Server Clusters, IEEE International Symposium on Parallel and Distributed Processsing with Applications, 2008</td> </tr> <tr><td><input type="radio" value="30" name="publicationIDs" ></td><td>Analysing BitTorrent's Seeding Strategies, 7th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-09), 2009</td> </tr> <tr><td><input type="radio" value="31" name="publicationIDs" ></td><td>The Effect of Server Reallocation Time in Dynamic Resource Allocation, UK Performance Engineering Workshop 2009, 2009</td> </tr> <tr><td><input type="radio" value="32" name="publicationIDs" ></td><td>idk, hello, 1992</td> </tr> <tr><td><input type="radio" value="33" name="publicationIDs" ></td><td>sad, safg, 1992</td> </tr> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Now here is what my JSP looks like: <form action="addSpotlight" method="POST"> <table> <%int i = 0; while(i<ids.size()){%> <tr><td><input type="radio" value="<%=ids.get(i)%>" name="publicationIDs" ></td><td><%=info.get(i)%></td> </tr> <%i++; }%> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Thanks in Advance Dean

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate, I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've triede a couple of different ways to perform a similar query, but almost identical exception for all (it differs in which LINQ method being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver< API is not bad at all, but I'm just kind of baffled that the Query< API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver< API, but not through LINQ.
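    One way to narrow this down, given that QueryOver<> works and only the LINQ provider throws: run the same Query<> call on a plain ISession opened directly from the ISessionFactory, with no SessionManager or facility proxy in between. This is a diagnostic sketch only, not a fix -- if it passes, the NullReferenceException points at the session wrapper handed out by the NHibernate Facility rather than at NHibernate.Linq itself.

```csharp
// Diagnostic sketch only: run the failing LINQ query on an unwrapped ISession.
// "Batch" is the entity from the question; the factory is assumed to be the
// same one the facility was configured with.
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static class QueryDiagnostics
{
    public static Batch FirstBatch(ISessionFactory factory)
    {
        using (ISession session = factory.OpenSession())
        {
            // Same call that throws in the test above, minus the facility proxy.
            return session.Query<Batch>().FirstOrDefault();
        }
    }
}
```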

    Read the article

  • Why can't I draw in a loop? (Using UIView on iPhone)

    - by Tattat
    I can draw many things using this : NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"dummy2.png" ofType:nil]; UIImage *img = [UIImage imageWithContentsOfFile:imagePath]; image = CGImageRetain(img.CGImage); CGRect imageRect; double x = 0; double y = 0; for (int k=0; k<someValue; k++) { x += k; y += k; imageRect.origin = CGPointMake(x, y); imageRect.size = CGSizeMake(25, 25); CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, image); } } CGImageRelease(img.CGImage); So, it works, so, I put it into a command object's execute method. Then, I want to do similar thing, but this time, my execute method only do this: NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"dummy2.png" ofType:nil]; UIImage *img = [UIImage imageWithContentsOfFile:imagePath]; image = CGImageRetain(img.CGImage); CGRect imageRect; double x = inComingX; double y = inComingY; imageRect.origin = CGPointMake(x, y); imageRect.size = CGSizeMake(25, 25); CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, image); CGImageRelease(img.CGImage); This time, this is also a Command, and it is the execute method. But I take the for loop away. I will have another method that pass the inComingX , and inComingY into my Command object. My Drawing method is simply execute the Cmd that passed in my drawingEngine: -(void)drawInContext:(CGContextRef)context { [self.cmdToBeExecuted execute]; } I also have the assign method to assign the command,: -(void)assignCmd:(Command* )cmd{ self.cmdToBeExecuted = cmd; } And this is the way I called the drawingEngine for(int k=0; k<5; k++){ [self.drawingEngine assignCmd:[DrawingCmd setDrawingInformation:(10*k):0:@"dummy.png"]]; [self.drawingEngine setNeedsDisplay]; } It can draw, but the sad thing is it only draw the last one. Why? and how to fix it? I can draw all the things in my First code, but after I take the loop outside, and use the loop in last code, it just only draw the last one. Plz help

    Read the article

  • How to resize an openGL window created with wglCreateContext?

    - by Nick
    Is it possible to resize an openGL window (or device context) created with wglCreateContext without disabling it? If so how? Right now I have a function which resizes the DC but the only way I could get it to work was to call DisableOpenGL and then re-enable. This causes any textures and other state changes to be lost. I would like to do this without the disable so that I do not have to go through the tedious task of recreating the openGL DC state. HWND hWnd; HDC hDC; void View_setSizeWin32(int width, int height) { // resize the window LPRECT rec = malloc(sizeof(RECT)); GetWindowRect(hWnd, rec); SetWindowPos( hWnd, HWND_TOP, rec->left, rec->top, rec->left+width, rec->left+height, SWP_NOMOVE ); free(rec); // sad panda DisableOpenGL( hWnd, hDC, hRC ); EnableOpenGL( hWnd, &hDC, &hRC ); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0); // have fun recreating the openGL state.... } void EnableOpenGL(HWND hWnd, HDC * hDC, HGLRC * hRC) { PIXELFORMATDESCRIPTOR pfd; int format; // get the device context (DC) *hDC = GetDC( hWnd ); // set the pixel format for the DC ZeroMemory( &pfd, sizeof( pfd ) ); pfd.nSize = sizeof( pfd ); pfd.nVersion = 1; pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER; pfd.iPixelType = PFD_TYPE_RGBA; pfd.cColorBits = 24; pfd.cDepthBits = 16; pfd.iLayerType = PFD_MAIN_PLANE; format = ChoosePixelFormat( *hDC, &pfd ); SetPixelFormat( *hDC, format, &pfd ); // create and enable the render context (RC) *hRC = wglCreateContext( *hDC ); wglMakeCurrent( *hDC, *hRC ); } void DisableOpenGL(HWND hWnd, HDC hDC, HGLRC hRC) { wglMakeCurrent( NULL, NULL ); wglDeleteContext( hRC ); ReleaseDC( hWnd, hDC ); }

    Read the article

  • Hue, saturation, brightness, contrast effect in HLSL

    - by Vibhore Tanwer
    I am new to pixel shader, and I am trying to write a simple brightness, contrast, hue, saturation effect. I have written a shader for it but I doubt that my shader is not providing me correct result, Brightness, contrast, saturation is working fine, problem is with hue. if I apply hue between -1 to 1, it seems to be working fine, but to make things more sharp, I need to apply hue value between -180 and 180, like we can apply hue in Paint.NET. Here is my code. // Amount to shift the Hue, range 0 to 6 float Hue; float Brightness; float Contrast; float Saturation; float Alpha; sampler Samp : register(S0); // Converts the rgb value to hsv, where H's range is -1 to 5 float3 rgb_to_hsv(float3 RGB) { float r = RGB.x; float g = RGB.y; float b = RGB.z; float minChannel = min(r, min(g, b)); float maxChannel = max(r, max(g, b)); float h = 0; float s = 0; float v = maxChannel; float delta = maxChannel - minChannel; if (delta != 0) { s = delta / v; if (r == v) h = (g - b) / delta; else if (g == v) h = 2 + (b - r) / delta; else if (b == v) h = 4 + (r - g) / delta; } return float3(h, s, v); } float3 hsv_to_rgb(float3 HSV) { float3 RGB = HSV.z; float h = HSV.x; float s = HSV.y; float v = HSV.z; float i = floor(h); float f = h - i; float p = (1.0 - s); float q = (1.0 - s * f); float t = (1.0 - s * (1 - f)); if (i == 0) { RGB = float3(1, t, p); } else if (i == 1) { RGB = float3(q, 1, p); } else if (i == 2) { RGB = float3(p, 1, t); } else if (i == 3) { RGB = float3(p, q, 1); } else if (i == 4) { RGB = float3(t, p, 1); } else /* i == -1 */ { RGB = float3(1, p, q); } RGB *= v; return RGB; } float4 mainPS(float2 uv : TEXCOORD) : COLOR { float4 col = tex2D(Samp, uv); float3 hsv = rgb_to_hsv(col.xyz); hsv.x += Hue; // Put the hue back to the -1 to 5 range //if (hsv.x > 5) { hsv.x -= 6.0; } hsv = hsv_to_rgb(hsv); float4 newColor = float4(hsv,col.w); float4 colorWithBrightnessAndContrast = newColor; colorWithBrightnessAndContrast.rgb /= colorWithBrightnessAndContrast.a; colorWithBrightnessAndContrast.rgb = colorWithBrightnessAndContrast.rgb + Brightness; colorWithBrightnessAndContrast.rgb = ((colorWithBrightnessAndContrast.rgb - 0.5f) * max(Contrast + 1.0, 0)) + 0.5f; colorWithBrightnessAndContrast.rgb *= colorWithBrightnessAndContrast.a; float greyscale = dot(colorWithBrightnessAndContrast.rgb, float3(0.3, 0.59, 0.11)); colorWithBrightnessAndContrast.rgb = lerp(greyscale, colorWithBrightnessAndContrast.rgb, col.a * (Saturation + 1.0)); return colorWithBrightnessAndContrast; } technique TransformTexture { pass p0 { PixelShader = compile ps_2_0 mainPS(); } } Please If anyone can help me learning what am I doing wrong or any suggestions? Any help will be of great value. EDIT: Images of the effect at hue 180: On the left hand side, the effect I got with @teodron answer. On the right hand side, The effect Paint.NET gives and I'm trying to reproduce.
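    If the slider is meant to behave like Paint.NET's (degrees, -180 to 180) while the shader's Hue uniform is in the 0-to-6 range of the rgb_to_hsv/hsv_to_rgb pair above (one unit per 60 degrees of hue), one option is to rescale on the CPU before setting the parameter. Below is a sketch, assuming an XNA-style Effect host (the question does not say which framework hosts the shader); the commented-out wrap in mainPS likely also needs to come back in some form, since h plus a large shift can leave the -1-to-5 range hsv_to_rgb expects.

```csharp
// Sketch, assuming an XNA-style Effect host; "effect" is the compiled .fx above
// and the parameter name matches its Hue uniform.
using Microsoft.Xna.Framework.Graphics;

public static class HueScale
{
    // Convert a Paint.NET-style hue angle in degrees (-180..180) into the
    // 0..6 sextant range used by the shader (one sextant = 60 degrees).
    public static float DegreesToSextants(float hueDegrees)
    {
        float sextants = (hueDegrees / 60f) % 6f;
        return sextants < 0f ? sextants + 6f : sextants;
    }

    public static void ApplyHue(Effect effect, float hueDegrees)
    {
        effect.Parameters["Hue"].SetValue(DegreesToSextants(hueDegrees));
    }
}
```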

    Read the article

  • Ambient occlusion shader just shows models as all white

    - by dvds414
    Okay so I have this shader for ambient occlusion. It loads to world correctly, but it just shows all the models as being white. I do not know why. I am just running the shader while the model is rendering, is that correct? or do I need to make a render target or something? If so then how? I'm using C++. Here is my shader: float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }
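    To the render-target question: yes -- this shader is a post-process, so the scene first has to be drawn into an off-screen target that fills depthSampler (the pixel shader reads normals from .rgb and depth from .a of that texture), and only then is the SSAO effect drawn on a full-screen quad. Drawing the models directly with this effect leaves that texture without meaningful contents, and the occlusion term degenerates, which is consistent with the all-white output. Below is a hedged XNA 4 sketch of the plumbing; the depth/normal pre-pass effect and all names are assumptions.

```csharp
// Hedged XNA 4 sketch of the two-pass setup the shader above expects.
// The depth/normal pre-pass effect (writes eye-space normals in .rgb and
// linear depth in .a) and all names here are assumptions.
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class SsaoPass
{
    private RenderTarget2D depthNormalTarget;
    private Effect ssaoEffect;

    public void Initialize(GraphicsDevice device, Effect ssao)
    {
        ssaoEffect = ssao;
        // A float surface keeps enough precision for depth stored in alpha.
        depthNormalTarget = new RenderTarget2D(device, 1600, 1200, false,
            SurfaceFormat.Vector4, DepthFormat.Depth24);
    }

    public void Draw(GraphicsDevice device, Action drawSceneWithDepthNormalEffect,
                     Action drawFullScreenQuad)
    {
        // Pass 1: scene into the depth/normal target.
        device.SetRenderTarget(depthNormalTarget);
        device.Clear(Color.White);                 // depth = 1.0 means "empty"
        drawSceneWithDepthNormalEffect();

        // Pass 2: back to the backbuffer, SSAO over a full-screen quad.
        device.SetRenderTarget(null);
        ssaoEffect.Parameters["depthTexture"].SetValue(depthNormalTarget);
        ssaoEffect.Parameters["sampleRadius"].SetValue(0.5f);
        ssaoEffect.Parameters["distanceScale"].SetValue(1.0f);
        // xView, xProjection, xWorld, cornerFustrum, randomTexture set elsewhere.
        drawFullScreenQuad();                      // drawn with the SSAO technique
    }
}
```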

    Read the article

  • iptables DNS resolution

    - by Favolas
    I have a virtual machine with Fedora 19 acting as a router. This machine as an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network. I've installed snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch and installed snortsam 2.70 on the routermachine In snort.conf besides altering some RULE_PATH I believe I've only added the following line to the file. output alert_fwsam: 127.0.0.1:898/password After doing this two comands: ifconfig p8p1 promisc /usr/local/snort/bin/snort -v -i p8p1 If I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules that as this single line: alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6;fwsam: src, 5 minutes;) snortsam.conf as this data: defaultkey password accept localhost keyinterval 30 minutes dontblock 192.168.1.1 # rede local rollbackhosts 50 rollbackthreshold 20 / 30 secs rollbacksleeptime 1 minute logfile /var/log/snort/snortsam.log loglevel 3 daemon nothreads # linha importante para gerar os bloqueios via iptables iptables p8p1 LOG bindip 127.0.0.1 Now I run this command: /usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq Terminal gives this message: Spawning daemon child... My daemon child 2080 lives... Daemon parent exiting (0) and when I runsnortsam in terminal i got this: SnortSam, v 2.70. Copyright (c) 2001-2009 Frank Knobbe . All rights reserved. Plugin 'fwsam': v 2.5, by Frank Knobbe Plugin 'fwexec': v 2.7, by Frank Knobbe Plugin 'pix': v 2.9, by Frank Knobbe Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]> Plugin 'cisconullroute': v 2.5, by Frank Knobbe Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]> Plugin 'netscreen': v 2.10, by Frank Knobbe Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]> Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]> Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]> Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]> Plugin 'email': v 2.12, by Frank Knobbe Plugin 'email-blocks-only': v 2.12, by Frank Knobbe Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]> Plugin 'forward': v 2.8, by Frank Knobbe Parsing config file /etc/snortsam.conf... Linking plugin 'iptables'... Checking for existing state file "/var/db/snortsam.state". Found. Reading state file. Starting to listen for Snort alerts. and snortsam.log as an entry like this 2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts. Now, from the external machine I do ping 172.16.1.254 and it starts showing the info and an alert file is created in /var/log/snort/ that as the info about the PINGS. Something like: [**] [1:408:6] ICMP-INFO Echo Reply [**] [Classification: Misc activity] [Priority: 3] 10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1 ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84 Type:0 Code:0 ID:1389 Seq:1 ECHO REPLY Also, if I run instead /usr/local/snort/bin/snort snort -v -i p8p1 i got this message: Running in packet dump mode --== Initializing Snort ==-- Initializing Output Plugins! Snort BPF option: snort pcap DAQ configured to passive. The DAQ version does not support reload. Acquiring network traffic from "p8p1". ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)! 
Fatal Error, Quitting.. So, these are my questions: Shouldn't snortsam block the PING? Is that DAQ error causing the problem? If so, how can I solve it?

    Read the article

  • How can we improve overall Programmer Education & Training?

    - by crosenblum
    Last week, I was just viewing this amazing interview by Kevin Rose of Phillip Rosedale, of Second Life. And they had an amazing discussion about how to find, hire and identify good programmer's, and how hard it is to find good ones. Which has lead me to really think about the way we programmer's learn, are taught. For a majority of us, myself included, we are self-taught. Which is great about being a programmer, anyone can learn and develop skills. But this also means, that there is no real standards of what a good programmer is/are, and what kind of environment's encourage the growth of programming skills. This isn't so much a question, but just a desire in me, to see how we can change the culture of programming, and the manager's of programming, so that education and self-improvement is encouraged. There are a lot of avenue's for continued education, youtube videos, books, conferences, but because of the experiental nature of what we do, it isn't always clear what's important to learn and to master. Let's look at the The Joel 12 Steps. The Joel Test Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? Do you fix bugs before writing new code? Do you have an up-to-date schedule? Do you have a spec? Do programmers have quiet working conditions? Do you use the best tools money can buy? Do you have testers? Do new candidates write code during their interview? Do you do hallway usability testing? I think all of these have important value, but because of something I call the Experiential Gap, if a programmer or manager has never experienced any of the negative consequences for not having done items on the list, they will never see the need to do any of them. The Experiental Gap, is my basic theory, that each of us has different jobs and different experiences. So for some of us, that have always worked with dozens of programmer's, source control is a must have. But for people who have always been the only programmer, they can not imagine the need for source control. And it's because of this major flaw in how we learn, that we evaluate people by what best practices they do or not do, and the reason for either can start a flame war. We always evaluate people in our field by what they do, and think "Oh if this guy/gal isn't doing xyz best practice, he/she can't be a good programmer, so let's not waste time or energy talking to them." This is exactly why we have so many programming flame wars, that it becomes, because of the Experiental Gap, we can't imagine people not having made the decisions that we have had to made. So this has lead me to think, that we totally need to rethink how we train, educate and manage programmer's. For example, what percentage of you have had encouragement by your manager's to go to conferences, and even have them pay for it? For me, and a lot of people, this is extremely rare, a lot of us would love to go to conferences, to learn more, but the money ain't there to do that. So the point of this question is really to spark a lot of how can we train, learn and manage better? How can we create a new culture of learning that doesn't insult people for not having the same job experiences. Yes we all have jobs and work to do, but our ability to do our jobs well, depends on our desire, interest and support in improving our mastery of our skills. 
Right now, I see our culture being rather disorganized, we support the elite, but those tons of us that want to get better, just don't have enough support to learn and improve ourselves. I mean, do we as an industry, want to be perceived as just replaceable cogs? Thank you...

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned ie. unrotated, depth and parallax are as expected.) I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based for now until I figure out how things work exactly -- fow now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem. Screenshot #1: correct depth when the camera is still strictly axis-aligned, ie. un-rotated. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5 rendering" when the camera rotates... fish-eyey, bit like in Google Streetview: even though I have a volumetric world representation, it seems --no matter what I try-- like I would first create a rendering from the "front view", then rotate that flat rendering according to camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks: Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"! Now of course I'm aware of this: in a simple axis-aligned-no-rotation-setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- ie I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical-divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of my many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations really get this part right. 
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode: fx and fy: pixel positions x and y rayPos: vec3 for the ray starting position in world-space (calculated as below) rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal rayStep: a temporary vec3 camPos: vec3 for the camera position in world space camRad: vec3 for camera rotation in radians pmat: typical perspective projection matrix The algorithm / pseudocode: // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin" rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0 // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0 rayPos.MultMat(num.NewDmat4RotationY(camRad.Y)) // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen" rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y)) // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z // perspective projection rayDir.Normalize() rayDir.MultMat(pmat) // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative rayPos.Add(camPos) I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view the scene from the light's position and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL: that's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?
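    Since the matrices are a suspect: the one thing that must hold is that the LightModelViewProjectionMatrix fed to this vertex shader is exactly the world-view-projection that was used when the shadow map itself was rendered from the light. A hedged XNA-side sketch follows; the light's position, target and projection parameters are placeholders.

```csharp
// Hedged XNA sketch of the light matrices the vertex shader above consumes.
// Light position/target and projection parameters are placeholders; the point
// is the order world * lightView * lightProjection, reused identically for the
// shadow-map pass and for this lighting pass.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class ShadowMatrices
{
    public static Matrix LightViewProjection(Vector3 lightPosition, Vector3 lightTarget)
    {
        Matrix lightView = Matrix.CreateLookAt(lightPosition, lightTarget, Vector3.Up);
        Matrix lightProjection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver2,   // 90-degree cone, assumption
            1.0f,                 // square shadow map
            1.0f, 500.0f);        // near/far must bracket the scene
        return lightView * lightProjection;
    }

    public static void Apply(Effect effect, Matrix world, Matrix lightViewProjection)
    {
        effect.Parameters["LightModelViewProjectionMatrix"]
              .SetValue(world * lightViewProjection);
    }
}
```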

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 MediaCenter HTPC due to a motherboard failure (really old motherboard and cpu, it was on its last legs.) I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with: Core i5 2500K (3.3Ghz) Corsair XMS3 2x2Gb DDR3 (4Gb) ASUS P8H 61-M LE/CSM MicroCenter 64Gb SSD (Previous BluRay player, forget the brand) The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution, however there have been numerous notes that they do not play NetFlix Instant Watch well...and I am a heavy Netflix IW user. High definition BluRay rips work well, although they usually contain lower audio quality than the BluRay's they were ripped from. The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, PowerDVD 11 trial, and WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd the total available system memory. When playback is fine, its superb...the video is crystal clear. The audio quality is ok, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious if the audio is my primary problem here, the cause of the stuttering I am encountering? When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until for whatever reason everything picks up and runs fine (usually after a few seconds to a couple minutes.) The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered any issues like this playing back bluray discs on a PC (namely with PowerDVD...WinDVD was FAR worse, and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further.) Is there any reason to suspect the video decoding as the problem?(Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, cpu, or ram are causing the stuttering (all three are pretty blazing fast...faster than the hardware that I replaced, which seemed to play BluRay fine with PowerDVD 9.) I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. Seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued (if so, that would indeed be sad...they sound simply fantastic.)

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, incubated in a development group for a couple years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now, is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors. The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group who would have a tough time maintaining two copies in parallel. Files in our current base look something like this: > // Copyright 2012 (C) Acme Technology, All Rights Reserved. > // Very large, often varied and restrictive copyright license in English and French, > // sometimes also embedded in make files and shell scripts with varied > // comment styles. > > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > #ifdef UNDER_RESEARCH > holographicVisualization(on); > #endif > } And we would like to convert them to something like: > // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved. > // Acme appreciates your interest in its technology, please contact [email protected] > // for technical support, and www.acme.com/emergingTech for updates and RSS feed. > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > } Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
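    For the #ifdef half of the question, the long-standing tool is unifdef, which removes code guarded by chosen preprocessor symbols while leaving everything else byte-for-byte intact; the copyright header swap is then an ordinary text substitution. If a home-grown pass over the tree is preferred, a hedged C# sketch for the simple guard shapes shown above (no #else/#elif handling) might look like this:

```csharp
// Hedged sketch: drop "#ifdef UNDER_RESEARCH ... #endif" regions from a file.
// Handles the two guard spellings mentioned in the question and nested
// conditionals inside a stripped region, but not #else/#elif -- unifdef does.
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class ResearchStripper
{
    public static IEnumerable<string> Strip(IEnumerable<string> lines)
    {
        int depth = 0;                                    // > 0 inside a stripped region
        foreach (string line in lines)
        {
            string t = line.TrimStart();
            if (depth > 0)
            {
                if (t.StartsWith("#if")) depth++;         // nested conditional opens
                else if (t.StartsWith("#endif")) depth--; // one level closes
                continue;                                 // drop everything inside
            }
            if (t.StartsWith("#ifdef UNDER_RESEARCH") ||
                t.StartsWith("#if defined(UNDER_RESEARCH)"))
            {
                depth = 1;                                // start dropping
                continue;
            }
            yield return line;
        }
    }

    public static void Main(string[] args)
    {
        File.WriteAllLines(args[0] + ".public", Strip(File.ReadLines(args[0])).ToArray());
    }
}
```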

    Read the article

  • A developer's WBS – 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach and why it takes sometimes days to get this simple list is often frustrating. Here is a simple way to get that elusive WBS done in 30 minutes and have 125 items in your list – well, 126. The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask the “what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe”. I am particular to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week. The task becomes way to vague and hard to manage completeness, especially on short budgets. As a developer, your focus is identifying the tasks you to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the “other stuff” like quality testing, meetings, documentation, transition to maintenance, etc… Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Example; XYZ Public Website Middleware BizTalk Application The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering and add them to list under the product name you created earlier; Public Website     Security     Sites     Infrastructure     Publishing     Creative Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items. Public Website     Security         Authentication         Login Control         Administration         DRM         Workflow     Sites         Masterpages         Page Layouts         Web Parts (RIA, Multimedia)         Content Types         Structures     Infrastructure         ...     Publishing         ...     Creative         ... And one more time for a total of 125 items. The top item makes the list 126. Public Website     Security         Authentication             Install (AD/ADAM/LDAP/SQL)             Configuration             Management             Web App Configuration             Implement Provider         Login Control             Login Form             Login/Logoff             pw change             pw recover/forgot             email verification         Administration             ...         DRM             ...         Workflow             ...     Sites         Masterpages         Page Layouts         Web Parts (RIA, Multimedia)         Content Types         Structures     Infrastructure         ...     Publishing         ...     Creative         ... The next step is to make sure the task at the bottom of every branch represents the “time value” you planned for the project. You can add more to the WBS and of course if you can’t find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later when the iteration is being planned), decompose the details back to the simple tasks. Now, go estimate!

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in light pass. (my projector doesn't affect albedo). I have this projector View a Projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); Where halfWidth and halfHeight is are half of the texture's width and height, Position is the Projector's position and target is the projector's target. This seems to be ok. I am drawing full screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position :POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy=output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord =postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection=tex2D(projectorSampler,postProjToScreen(mul(position,ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In first part of pixel shader is recovered position from G-buffer (this code I am using in other shaders without any problem) and then is tranformed to projector viewprojection space. Problem is that projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: Shader above is working but projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can bee seen on this YouTube video.

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry.My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to rollback fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level.Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why.Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it againScenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to rollback to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. 
    The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as your Configuration remit. What I wanted to say was that I have always been for atomic, transactional-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
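    To make the "make it so" pattern concrete, here is a minimal, self-contained C# sketch (the website name and the in-memory "current state" are invented for illustration; a real script would query IIS, the service controller, the file system and so on). Each step inspects the current state first, changes only what differs, and honours a -WhatIf style flag, so a re-run against a correctly configured system simply reports that there is nothing to do:
        using System;
        using System.Collections.Generic;

        class MakeItSoDemo
        {
            // Stand-in for the real system state (e.g. the websites IIS knows about).
            static readonly HashSet<string> existingWebsites = new HashSet<string> { "Default Web Site" };

            static void EnsureWebsite(string name, bool whatIf)
            {
                if (existingWebsites.Contains(name))
                {
                    Console.WriteLine($"Website '{name}' already exists - nothing to do.");
                    return;
                }
                if (whatIf)
                {
                    Console.WriteLine($"WhatIf: would create website '{name}'.");
                    return;
                }
                existingWebsites.Add(name);   // stand-in for the real IIS call
                Console.WriteLine($"Created website '{name}'.");
            }

            static void Main()
            {
                EnsureWebsite("XYZ", whatIf: true);    // Scenario 4: prove the state, change nothing
                EnsureWebsite("XYZ", whatIf: false);   // converge to the desired state
                EnsureWebsite("XYZ", whatIf: false);   // re-run: already in a good known state
            }
        }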

    Read the article

  • HLSL Shader not working right?

    - by dvds414
    Okay, so I have this shader for ambient occlusion. It loads into the world correctly, but it just shows all the models as being white, and I do not know why. I am just running the shader while the model is rendering -- is that correct? Or do I need to make a render target or something? If so, then how? I'm using C++. Here is my shader.
        float sampleRadius;
        float distanceScale;
        float4x4 xProjection;
        float4x4 xView;
        float4x4 xWorld;
        float3 cornerFustrum;

        struct VS_OUTPUT
        {
            float4 pos : POSITION;
            float2 TexCoord : TEXCOORD0;
            float3 viewDirection : TEXCOORD1;
        };

        VS_OUTPUT VertexShaderFunction(float4 Position : POSITION, float2 TexCoord : TEXCOORD0)
        {
            VS_OUTPUT Out = (VS_OUTPUT)0;
            float4 WorldPosition = mul(Position, xWorld);
            float4 ViewPosition = mul(WorldPosition, xView);
            Out.pos = mul(ViewPosition, xProjection);
            Position.xy = sign(Position.xy);
            Out.TexCoord = (float2(Position.x, -Position.y) + float2(1.0f, 1.0f)) * 0.5f;
            float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z);
            Out.viewDirection = corner;
            return Out;
        }

        texture depthTexture;
        texture randomTexture;

        sampler2D depthSampler = sampler_state
        {
            Texture = <depthTexture>;
            ADDRESSU = CLAMP;
            ADDRESSV = CLAMP;
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
        };

        sampler2D RandNormal = sampler_state
        {
            Texture = <randomTexture>;
            ADDRESSU = WRAP;
            ADDRESSV = WRAP;
            MAGFILTER = LINEAR;
            MINFILTER = LINEAR;
        };

        float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0
        {
            float4 samples[16] =
            {
                float4(0.355512, -0.709318, -0.102371, 0.0),
                float4(0.534186, 0.71511, -0.115167, 0.0),
                float4(-0.87866, 0.157139, -0.115167, 0.0),
                float4(0.140679, -0.475516, -0.0639818, 0.0),
                float4(-0.0796121, 0.158842, -0.677075, 0.0),
                float4(-0.0759516, -0.101676, -0.483625, 0.0),
                float4(0.12493, -0.0223423, -0.483625, 0.0),
                float4(-0.0720074, 0.243395, -0.967251, 0.0),
                float4(-0.207641, 0.414286, 0.187755, 0.0),
                float4(-0.277332, -0.371262, 0.187755, 0.0),
                float4(0.63864, -0.114214, 0.262857, 0.0),
                float4(-0.184051, 0.622119, 0.262857, 0.0),
                float4(0.110007, -0.219486, 0.435574, 0.0),
                float4(0.235085, 0.314707, 0.696918, 0.0),
                float4(-0.290012, 0.0518654, 0.522688, 0.0),
                float4(0.0975089, -0.329594, 0.609803, 0.0)
            };

            IN.TexCoord.x += 1.0/1600.0;
            IN.TexCoord.y += 1.0/1200.0;
            normalize(IN.viewDirection);

            float depth = tex2D(depthSampler, IN.TexCoord).a;
            float3 se = depth * IN.viewDirection;
            float3 randNormal = tex2D(RandNormal, IN.TexCoord * 200.0).rgb;
            float3 normal = tex2D(depthSampler, IN.TexCoord).rgb;
            float finalColor = 0.0f;

            for (int i = 0; i < 16; i++)
            {
                float3 ray = reflect(samples[i].xyz, randNormal) * sampleRadius;
                //if (dot(ray, normal) < 0)
                //    ray += normal * sampleRadius;
                float4 sample = float4(se + ray, 1.0f);
                float4 ss = mul(sample, xProjection);
                float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f);
                sampleTexCoord.x += 1.0/1600.0;
                sampleTexCoord.y += 1.0/1200.0;
                float sampleDepth = tex2D(depthSampler, sampleTexCoord).a;
                if (sampleDepth == 1.0)
                {
                    finalColor++;
                }
                else
                {
                    float occlusion = distanceScale * max(sampleDepth - depth, 0.0f);
                    finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1);
                }
            }

            return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f);
        }

        technique SSAO
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VertexShaderFunction();
                PixelShader = compile ps_3_0 PixelShaderFunction();
            }
        }
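    On the render-target part of the question, here is a hedged sketch (it assumes native Direct3D 9, since the effect compiles for vs_3_0/ps_3_0, and the device, the ID3DXEffect and the scene-drawing code are assumed to exist elsewhere). The SSAO pass reads depthTexture, so something has to fill that texture first -- typically a separate pass that renders the scene's view-space normal and depth into an off-screen render target:
        // Hedged sketch: fill the off-screen target that the SSAO pass samples as depthTexture.
        void RenderDepthNormalPass(IDirect3DDevice9* device, ID3DXEffect* ssaoEffect,
                                   UINT width, UINT height)
        {
            IDirect3DTexture9* depthTex  = NULL;
            IDirect3DSurface9* depthSurf = NULL;
            IDirect3DSurface9* backBuf   = NULL;

            // One-off in real code; shown inline here for brevity.
            device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                  D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT, &depthTex, NULL);
            depthTex->GetSurfaceLevel(0, &depthSurf);

            device->GetRenderTarget(0, &backBuf);   // remember the back buffer
            device->SetRenderTarget(0, depthSurf);  // pass 1: draw the scene with a shader that
                                                    // writes (view-space normal, depth) per pixel
            // ... draw the geometry with that normal/depth shader here ...

            device->SetRenderTarget(0, backBuf);    // pass 2: back to the back buffer
            ssaoEffect->SetTexture("depthTexture", depthTex);   // feed the SSAO pass
            // ... draw the full-screen quad with the SSAO technique ...

            depthSurf->Release();
            backBuf->Release();
            // depthTex stays alive for the SSAO pass; release it when no longer needed.
        }
    If depthTexture is never filled by such a pass, the occlusion loop has nothing meaningful to sample, which would be consistent with every model coming out plain white.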

    Read the article

  • Cablemodem (SBG6580) firewall denying some outbound traffic? Why? Not configured [migrated]

    - by lairdb
    I finally got around to turning the syslog on for my cablemodem (Motorola Surfboard SBG6580) and I'm seeing about the expected amount of inbound attackage being blocked... 2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack 2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 17.172.232.109,5223 --> 66.27.xx.xx,53814 DENY:Firewall interface access request 2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,53385 DENY: Firewall interface [IP Fragmented Packet] attack 2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack 2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,59960 DENY: Firewall interface [IP Fragmented Packet] attack 2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack ...and that's great. (Sad, but great.) But I'm also seeing a HUGE amount of what appears to be denied outbound connectivity: 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request 2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request ...and Spot checking suggests that it's all legitimate traffic (Opening connections to CrashPlan, etc.), I have no restrictions configured in the modem; I don't see why it should be blocking anything. Am I misreading the log entry, and it's not actually being denied? (Seems unlikely.) Is the ISP (TWC) pushing deny tables that are not exposed in the UI? (Tinfoil hat too tight.) I'm confused. (The good news, such as it is, is that AFAIK I'm not experiencing any actual issues... but maybe I am; tough to tell.) Thanks.

    Read the article

  • Manual (Dynamic) LINQ subquery using IN clause

    - by immortalali-msn-com
    Hi everyone, I want to query the DB through LINQ by writing manual SQL. My LINQ method is: var q = db.TableView.Where(sqlAfterWhere); returnValue = q.Count(); This method works well if the value passed to the variable "sqlAfterWhere" (which is a String) is: it.Name = 'xyz' But what if I want to use an IN clause with a subquery? (I need to put 'it' before every column name in the query above for it to work.) I can't use 'it' before the subquery's columns, as it is a separate query, so what should I do? If I don't use anything and use the column names directly, it gives an error saying that the column could not be resolved, naming my columns without 'it' at the beginning. So the query that is not working (this is the string passed to the variable above) is: it.Name IN (SELECT Name FROM TableName WHERE Address LIKE '%SomeAddress%') The errors come out as: "Name could not be resolved", "Address could not be resolved". The exact error is: "'Name' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near simple identifier, line 6, column 25." Same error for "Address" as well. If I use 'it.' before these columns, it gives this error instead: "The element type 'Edm.Int32' and the CollectionType 'Transient.collection[Transient.rowtype(GroupID,Edm.Int32(Nullable=True,DefaultValue=))]' are not compatible. The IN expression only supports entity, primitive, and reference types. , near WHERE predicate, line 6, column 14." Thanks for the help.
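    A hedged sketch of one way this can work (assuming the query is an Entity Framework ObjectQuery, which is what the 'it' alias and the Edm.Int32 error message suggest; TableNames stands in for whatever the entity set is really called). In Entity SQL the subquery gets its own alias, its columns are qualified with that alias instead of 'it', and SELECT VALUE makes it return a collection of primitives rather than row types -- which is what the IN operator accepts:
        // Build the predicate string with an aliased SELECT VALUE subquery.
        string sqlAfterWhere =
            "it.Name IN (SELECT VALUE t.Name " +
            "FROM TableNames AS t " +                       // hypothetical entity set name
            "WHERE t.Address LIKE '%SomeAddress%')";

        var q = db.TableView.Where(sqlAfterWhere);          // same builder-method call as above
        int returnValue = q.Count();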

    Read the article

  • Calculating File size before download - Downloading NSURLConnection Slider timing

    - by sagar
    OK, coming to the point directly. What I want to do is as follows. I have the URL of an MP3 file (for example, Sound File). Now, when the user starts the application, the download should start, and for that I have implemented the following methods:
        -(void)viewDidLoad
        {
            [super viewDidLoad];
            NSURL *url = [NSURL URLWithString:@"http://xyz.pqr.com/abc.mp3"];
            NSURLRequest *req = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageNotAllowed timeoutInterval:120];
            NSURLConnection *con = [[NSURLConnection alloc] initWithRequest:req delegate:self startImmediately:YES];
            if (con) {
                myWebData = [[NSMutableData data] retain];
            } else {
                // [MainHandler performSelector:@selector(targetSelector:) withObject:nil];
            }
        }

        -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            NSLog(@"%@", @"connection established");
            [myWebData setLength:0];
        }

        -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            NSLog(@"%@", @"connection receiving data");
            [myWebData appendData:data];
        }

        -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
        {
            NSLog(@"%@", @"connection failed");
            [connection release];
            // [AlertViewHandler showAlertWithErrorMessage:@"Sorry, there is no network connection. Please check your network and try again."];
            // [self parserDidEndDocument:nil];
        }

        -(void)connectionDidFinishLoading:(NSURLConnection *)connection
        {
            [connection release];
        }
    Now, the above methods work perfectly for downloading, but the missing point is as follows: I cannot get the exact size of the download in advance (meaning I want to know the size of the file that is going to be downloaded).
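    A hedged sketch of the piece that seems to be missing; this is the standard NSURLConnection API rather than anything project-specific. The expected size is normally available as soon as connection:didReceiveResponse: fires, via the response's Content-Length (expectedContentLength returns NSURLResponseUnknownLength when the server doesn't send that header), and it can then drive a progress slider from the didReceiveData: callback:
        -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            long long expectedBytes = [response expectedContentLength];
            if (expectedBytes != NSURLResponseUnknownLength) {
                NSLog(@"File size to download: %lld bytes", expectedBytes);
                // e.g. keep it in an ivar such as totalBytes (hypothetical name) and update the
                // slider in -connection:didReceiveData: as [myWebData length] / (float)totalBytes
            }
            [myWebData setLength:0];
        }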

    Read the article

  • Compile time Meta-programming, with string literals.

    - by Hassan Syed
    I'm writing some code which could really do with some simple compile-time metaprogramming. It is common practice to use empty-struct tags as compile-time symbols. I need to decorate the tags with some run-time config elements. Static variables seem the only way to go (to enable metaprogramming); however, static variables require global declarations. To sidestep this, Scott Meyers' suggestion (from the third edition of Effective C++) about sequencing the initialization of static variables by declaring them inside a function instead of as class variables came to mind. So I came up with the following code; my hypothesis is that it will let me have compile-time symbols with string literals usable at runtime. I hope I'm not missing anything.
        template<class Instance>
        class TheBestThing
        {
        public:
            void set_name(const char * name_in)
            {
                get_name() = std::string(name_in);
            }
            void set_fs_location(const char * fs_location_in)
            {
                get_fs_location() = std::string(fs_location_in);
            }
            std::string & get_fs_location()
            {
                static std::string fs_location;
                return fs_location;
            }
            std::string & get_name()
            {
                static std::string name;
                return name;
            }
        };

        struct tag {};

        int main()
        {
            TheBestThing<tag> x;
            x.set_name("xyz");
            x.set_fs_location("/etc/lala");
            ImportantObject<x> SinceSlicedBread;
        }
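    One thing that stands out (a hedged observation, since the intent of ImportantObject isn't shown in the question): a template argument has to be a type or a constant expression, so it would be parameterised on the tag type rather than on the runtime object x. A minimal self-contained sketch of that shape, with ImportantObject invented here purely for illustration:
        #include <iostream>
        #include <string>

        template<class Instance>
        class TheBestThing
        {
        public:
            void set_name(const char* name_in) { get_name() = name_in; }
            std::string& get_name()
            {
                static std::string name;   // one instance per tag type, created on first use
                return name;
            }
        };

        template<class Tag>
        struct ImportantObject            // invented for illustration
        {
            void hello() const { std::cout << TheBestThing<Tag>().get_name() << '\n'; }
        };

        struct tag {};

        int main()
        {
            TheBestThing<tag> x;
            x.set_name("xyz");

            ImportantObject<tag> sinceSlicedBread;   // parameterised on the tag type, not on 'x'
            sinceSlicedBread.hello();                // prints "xyz"
        }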

    Read the article

  • Problems requesting the LDAP: The server is unwilling to process the request.

    - by Flo
    We have written an authentication provider for a SharePoint web application which can query multiple LDAP directories. One of the LDAP servers has to be queried via SSL, so we imported the CA certificate which was used to sign the LDAP server's certificate into the certificate store of the SharePoint server. The following code snippet shows how we authenticate a user. The passed credentials (account, password) belong to the user we want to authenticate. var entry = new DirectoryEntry("LDAP://<ldap-server-address>", "cn=account,ou=sub,o=xyz,c=de", "password", AuthenticationTypes.SecureSocketsLayer); var searcher = new DirectorySearcher(entry); var found = searcher.FindOne(); When the code runs, the call to searcher.FindOne() throws the following exception: System.Runtime.InteropServices.COMException (0x80072035): The server is unwilling to process the request What circumstances can lead to this error? UPDATE: I found some information about the error message. There the problem seemed to be the certificate store, as the user had only stored the certificate in the user's store and not in the computer's store. Unfortunately, we have already stored it there. So could this still be a certificate issue? UPDATE/SOLUTION: The problem is solved. It seems the root CA certificate was imported correctly; the error message the LDAP server returned was caused by an expired user account our customer had given us for testing.
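    For anyone who hits the same COMException before the root cause is known, here is a hedged diagnostic sketch (it uses System.DirectoryServices.Protocols instead of the DirectoryEntry code above, purely because that API surfaces SSL and bind failures as their own exceptions; the server address and credentials are the placeholders from the snippet):
        using System;
        using System.DirectoryServices.Protocols;
        using System.Net;

        class LdapSslCheck
        {
            static void Main()
            {
                var connection = new LdapConnection(
                    new LdapDirectoryIdentifier("<ldap-server-address>", 636));

                connection.SessionOptions.SecureSocketLayer = true;
                connection.SessionOptions.VerifyServerCertificate =
                    (conn, cert) => { Console.WriteLine(cert.Subject); return true; };  // inspect what the server presents

                connection.AuthType = AuthType.Basic;
                connection.Credential = new NetworkCredential("cn=account,ou=sub,o=xyz,c=de", "password");

                connection.Bind();   // SSL or credential problems throw here with a more specific exception
                Console.WriteLine("Bind succeeded.");
            }
        }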

    Read the article
