Search Results

Search found 15780 results on 632 pages for 'pass board elections 2011'.


  • PASS Precon Countdown… See some of you Monday, and others on Tuesday Night

    - by drsql
    As I finish up the plans for Monday’s database design precon, I am getting pretty excited for the day. This is the third time I have done this precon, and while the base slides are very similar, I have a few new twists in mind. One of my big ideas for my Database Design Workshop precon has always been to give people time to actually do some design. So I am even now trying to go through and whittle down the slides and make sure that we have the time for design. If you are attending, be prepared to be a team player....(read more)

    Read the article

  • Summit Old, Summit New, Summit Borrowed...

    - by Rob Farley
    PASS Summit is coming up, and I thought I’d post a few things.

    Summit Old... At the PASS Summit, you will get the chance to hear presentations by the SQL Server establishment. Just about every big name in the SQL Server world is a regular at the PASS Summit, so you will get to hear and meet people like Kalen Delaney (@sqlqueen), who just recently got awarded MVP status for the 20th year running, and people from all around the world, such as the UK’s Chris Webb (@technitrain) or Pinal Dave (@pinaldave) from India. Almost all the household names in SQL Server will be there, including a large contingent from Microsoft. The PASS Summit is by far the best place to meet the legends of SQL Server. And they’re not all old. Some are, but most of them are younger than you might think.

    ...Summit New... The hottest topics are often about the newest technologies (such as SQL Server 2012), but you will almost certainly learn new stuff about older versions too. That’s not what I wanted to pick on for this point, though. There are many new speakers at every PASS Summit, and content that has not been covered in other places. This year, for example, LobsterPot’s Roger Noble (@roger_noble) is giving a presentation for the first time. He’s a regular around the Australian circuit, but this is his first time presenting to a US audience. New Zealand’s Paul White (@sql_kiwi) is attending his first PASS Summit, and will be giving over four hours of incredibly deep stuff that has never been presented anywhere in the US before (I can’t say the world, because he did present similar material in Adelaide earlier in the year).

    ...Summit Borrowed... No, I’m not talking about plagiarism – the talks you’ll hear are all their own work. But you will get a lot of stuff you’ll be able to take back and apply at work. The PASS Summit sessions are not full of sales pitches telling you how great things could be if only you’d buy some third-party vendor product. It’s simply not that kind of conference, and PASS doesn’t allow that kind of talk to take place. Instead, you’ll be taught techniques, and be able to download scripts and slides to let you perform that magic back at work when you get home. You will definitely find plenty of ideas to borrow at the PASS Summit.

    ...Summit Blue... Yeah – and there’s karaoke. (Blue - Jason - SQL Karaoke - YouTube)

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against when creating shadows. I want to be able to copy a float array to the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT:

      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap);

    This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
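
    A common way to hand an arbitrary float table to a fragment shader is a single-channel floating-point texture rather than a depth texture. Below is a minimal sketch of that idea, assuming an OpenGL 3.0+ context with float-texture support; the function and uniform names are made up for the example, and it is not a drop-in fix for the setup above.

      // "smap" is the 512x512 float array from the question.
      GLuint CreateFloatTexture(const float* smap)
      {
          GLuint tex = 0;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
          // GL_R32F stores the raw float values; GL_DEPTH_COMPONENT would clamp them to [0,1].
          glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, smap);
          return tex;
      }

      // Fragment shader side (GLSL 1.30+), shown as a comment:
      //   uniform sampler2D depthMap;
      //   float stored = texture(depthMap, uv).r;   // the original float, unclamped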

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for a Unity3D web project. I want to implement something like the approach in the great answer by DMGregory in this question, in order to achieve a final look of metaballs with specular and shading. The steps to make this shader are:

    1. Convert the feathered blobs into a heightmap.
    2. Generate a normal map from the heightmap.
    3. Feed the normal map and heightmap into a standard Unity shader, for instance Transparent/Parallax Specular.

    I pretty much have all the pieces I need assembled, but I am new to shaders and need help putting them together. I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here because I don't know if you can access the brightness):

      half4 frag (v2f i) : COLOR
      {
          half4 texcol, finalColor;
          texcol = tex2D(_MainTex, i.uv);
          finalColor = _MyColor;
          if (texcol.r < _botmcut)
          {
              finalColor.r = 0;
          }
          else if (texcol.r > _topcut)
          {
              finalColor.r = 0;
          }
          else
          {
              float r = _topcut - _botmcut;
              float xpos = _topcut - texcol.r;
              finalColor.r = (_botmcut + sqrt((xpos*xpos) - (r*r))) / _constant;
          }
          return finalColor;
      }

    This turns the feathered blobs into the heightmap. I've also found some Cg code that generates a normal map from a height map. The bit of code that makes the normal map from finite differences is here:

      void surf (Input IN, inout SurfaceOutput o)
      {
          o.Albedo = fixed3(0.5);
          float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex));
          float me = tex2D(_HeightMap, IN.uv_MainTex).x;
          float n = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y + 1.0/_HeightmapDimY)).x;
          float s = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y - 1.0/_HeightmapDimY)).x;
          float e = tex2D(_HeightMap, float2(IN.uv_MainTex.x - 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;
          float w = tex2D(_HeightMap, float2(IN.uv_MainTex.x + 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;
          float3 norm = normal;
          float3 temp = norm; // a temporary vector that is not parallel to norm
          if (norm.x == 1)
              temp.y += 0.5;
          else
              temp.x += 0.5;
          // form a basis with norm being one of the axes:
          float3 perp1 = normalize(cross(norm, temp));
          float3 perp2 = normalize(cross(norm, perp1));
          // use the basis to move the normal in its own space by the offset
          float3 normalOffset = -_HeightmapStrength * (((n - me) - (s - me)) * perp1 + ((e - me) - (w - me)) * perp2);
          norm += normalOffset;
          norm = normalize(norm);
          o.Normal = norm;
      }

    Also, here is the built-in Transparent/Parallax Specular shader for Unity:

      Shader "Transparent/Parallax Specular" {
      Properties {
          _Color ("Main Color", Color) = (1,1,1,1)
          _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0)
          _Shininess ("Shininess", Range (0.01, 1)) = 0.078125
          _Parallax ("Height", Range (0.005, 0.08)) = 0.02
          _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {}
          _BumpMap ("Normalmap", 2D) = "bump" {}
          _ParallaxMap ("Heightmap (A)", 2D) = "black" {}
      }
      SubShader {
          Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
          LOD 600
          CGPROGRAM
          #pragma surface surf BlinnPhong alpha
          #pragma exclude_renderers flash
          sampler2D _MainTex;
          sampler2D _BumpMap;
          sampler2D _ParallaxMap;
          fixed4 _Color;
          half _Shininess;
          float _Parallax;
          struct Input {
              float2 uv_MainTex;
              float2 uv_BumpMap;
              float3 viewDir;
          };
          void surf (Input IN, inout SurfaceOutput o) {
              half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w;
              float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir);
              IN.uv_MainTex += offset;
              IN.uv_BumpMap += offset;
              fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
              o.Albedo = tex.rgb * _Color.rgb;
              o.Gloss = tex.a;
              o.Alpha = tex.a * _Color.a;
              o.Specular = _Shininess;
              o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
          }
          ENDCG
      }
      FallBack "Transparent/Bumped Specular"
      }

    Read the article

  • How can I pass the referrer header from my https domain to http domains?

    - by nutcracker
    My website is 100% https. I have links to other http domains. The referrer header is not set when linking from an https page to an http page. From http://en.wikipedia.org/wiki/HTTP_referrer: "If a website is accessed from a HTTP Secure (HTTPS) connection and a link points to anywhere except another secure location, then the referer field is not sent." I would prefer that other domains can see the referrer so that they know that traffic comes from my domain. Is there a way to force this header, or is there another solution?

    Update: I've done some basic testing using a redirect:

      http page  -- link to http  --> 301 redirect --> http page = referrer intact
      https page -- link to https --> 301 redirect --> http page = referrer blank
      https page -- link to http  --> 301 redirect --> http page = referrer blank
      https page -- link to http  --> 302 redirect --> http page = referrer blank

    The referrer is lost when linking from an https page to an http redirect page on my own domain, so there is no referrer on the redirect.

    Read the article

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using Blender Game Engine. I want to create a large flat plane, and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to swap that constant location with an object's location updated every frame, while the shader is running. I am not even sure if it's possible. I only want to access a specific object's center from the shader.
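
    Outside the Blender Game Engine specifics, the general GLSL/OpenGL pattern for a per-frame value like an object's position is a uniform that the CPU side updates every frame before drawing. Below is only a rough sketch of that pattern, with invented names (shaderProgram, deformCenter, the position arguments); in the Blender Game Engine the same idea goes through its Python shader API rather than raw GL calls.

      // Push the moving object's world position into the plane's shader each frame.
      void UpdateDeformCenter(GLuint shaderProgram, float objX, float objY, float objZ)
      {
          // Look the uniform up by name (this lookup can be cached after linking).
          GLint centerLoc = glGetUniformLocation(shaderProgram, "deformCenter");
          glUseProgram(shaderProgram);
          glUniform3f(centerLoc, objX, objY, objZ);
      }

      // Vertex shader side (GLSL), shown as a comment:
      //   uniform vec3 deformCenter;
      //   float d = distance(worldPosition.xyz, deformCenter);
      //   worldPosition.y -= strength * exp(-d * d);   // local dent near the object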

    Read the article

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I saw that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway because I don't like the idea of people sucking my bandwidth; I simply use this code to protect my sites from being hotlinked:

      RewriteEngine on
      RewriteCond %{HTTP_REFERER} !^$
      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
      RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But in order to allow Google Images search to bypass my hotlink protection (I want Google Images search to show my images), would it suffice to add lines like these:

      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    Because I'm wondering: is the crawler crawling just from google.com? And what about google.it, google.co.uk, etc.? FYI: in Google's official guidelines I did not find info about this. I suppose hotlink protection prevents Google Images from showing images in its results because I did some tests, and it seems hotlink protection does prevent my images from being shown in Google Images search.
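
    Purely as an illustration of the whitelisting idea (not a tested rule set - the pattern below is an assumption, and the image crawler itself usually sends no referer at all, which the first condition already lets through), a country-domain-tolerant variant could look roughly like this:

      RewriteEngine on
      RewriteCond %{HTTP_REFERER} !^$
      # own domain
      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com($|/) [NC]
      # any google.<tld> referrer, e.g. google.com, google.it, google.co.uk
      RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.([a-z]{2,6})(\.[a-z]{2})?($|/) [NC]
      RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]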

    Read the article

  • How to pass one float as four unsigned chars to a shader with glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to this:

      GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

      // VER1 - draws red triangle
      // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
      // glEnableVertexAttribArray(1);
      // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

      // VER2 - draws greenish triangle (not "pure" green)
      // float f = 255 << 24 | 255; // Hex: 0xff0000ff
      // float colors2[] = { f, f, f };
      // glEnableVertexAttribArray(1);
      // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

      // VER3 - draws red triangle
      int i = 255 << 24 | 255; // Hex: 0xff0000ff
      int colors3[] = { i, i, i };
      glEnableVertexAttribArray(1);
      glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

      glDrawArrays(GL_TRIANGLES, 0, 3);

    The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by watching the variable during debugging. They are equal for versions 2 and 3 - so what causes the difference?
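
    A likely explanation for the version 2 symptom: float f = 255 << 24 | 255; performs a numeric int-to-float conversion (the nearest representable float), not a bit reinterpretation, so the four bytes that reach the GPU are no longer 0xff, 0x00, 0x00, 0xff; version 3 works because an int holds those bits exactly. The usual way to keep byte colors and float positions in one buffer is an interleaved vertex struct, which avoids the cast entirely. The sketch below only illustrates that pattern (struct and function names are invented; it assumes an existing GL context and a shader using attributes 0 and 1, as in the question):

      // Interleaved layout: each attribute keeps its natural type, no packing tricks.
      struct PackedVertex {
          float         x, y;       // position
          unsigned char rgba[4];    // color, one byte per channel
      };

      static void DrawPackedTriangle()
      {
          static const PackedVertex tri[3] = {
              { -1.0f, -1.0f, { 0xff, 0x00, 0x00, 0xff } },
              {  1.0f, -1.0f, { 0xff, 0x00, 0x00, 0xff } },
              {  0.0f,  1.0f, { 0xff, 0x00, 0x00, 0xff } },
          };
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(PackedVertex), &tri[0].x);
          glEnableVertexAttribArray(1);
          glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(PackedVertex), tri[0].rgba);
          glDrawArrays(GL_TRIANGLES, 0, 3);
      }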

    Read the article

  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x because, even though I have an OpenGL 4 graphics card, not all my friends do, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3; most are just "type this and this will happen - the end". I'm trying to just draw a simple triangle, and so far all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code. Here is the function that creates the triangle:

      void CEntityRenderable::CreateBuffers()
      {
          m_vertices = new Vertex3D[3];
          m_vertexCount = 3;

          m_vertices[0].x = -1.0f; m_vertices[0].y = -1.0f; m_vertices[0].z = -5.0f;
          m_vertices[0].r = 1.0f;  m_vertices[0].g = 0.0f;  m_vertices[0].b = 0.0f;  m_vertices[0].a = 1.0f;
          m_vertices[1].x =  1.0f; m_vertices[1].y = -1.0f; m_vertices[1].z = -5.0f;
          m_vertices[1].r = 1.0f;  m_vertices[1].g = 0.0f;  m_vertices[1].b = 0.0f;  m_vertices[1].a = 1.0f;
          m_vertices[2].x =  0.0f; m_vertices[2].y =  1.0f; m_vertices[2].z = -5.0f;
          m_vertices[2].r = 1.0f;  m_vertices[2].g = 0.0f;  m_vertices[2].b = 0.0f;  m_vertices[2].a = 1.0f;

          //Create the VAO
          glGenVertexArrays(1, &m_vaoID);
          //Bind the VAO
          glBindVertexArray(m_vaoID);
          //Create a vertex buffer
          glGenBuffers(1, &m_vboID);
          //Bind the buffer
          glBindBuffer(GL_ARRAY_BUFFER, m_vboID);
          //Set the buffers data
          glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW);
          //Set its usage
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0);
          glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float)));
          //Enable
          glEnableVertexAttribArray(0);
          glEnableVertexAttribArray(1);
          //Check for errors
          if(glGetError() != GL_NO_ERROR)
          {
              Error("Failed to create VBO: %s", gluErrorString(glGetError()));
          }
          //Unbind...
          glBindVertexArray(0);
      }

    The Vertex3D struct is as such:

      struct Vertex3D
      {
          Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {}
          float x, y, z;
          float r, g, b, a;
      };

    And finally the render function:

      void CEntityRenderable::RenderEntity()
      {
          //Render...
          glBindVertexArray(m_vaoID);
          //Use our attribs
          glDrawArrays(GL_POINTS, 0, m_vertexCount);
          glBindVertexArray(0); //unbind
          OnRender();
      }

    (And yes, I am binding and unbinding the shader. That is just in a different place.) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I liked better in DirectX was input layouts D:). This is my vertex shader:

      #version 330
      //Matrices
      uniform mat4 projectionMatrix;
      uniform mat4 viewMatrix;
      uniform mat4 modelMatrix;
      //In values
      layout(location = 0) in vec3 position;
      layout(location = 1) in vec3 color;
      //Out values
      out vec3 frag_color;
      //Main shader
      void main(void)
      {
          //Position in world
          gl_Position = vec4(position, 1.0);
          //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
          //No color changes
          frag_color = color;
      }

    As you can see, I've disabled the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up on that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)
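
    One general glBufferData sizing pattern worth mentioning (as a pattern, not as a definitive diagnosis of the code above): when the vertex array is owned through a pointer, as m_vertices is here, sizeof(pointer) is only 4 or 8 bytes, so the byte count has to be computed from the element count. Inside CreateBuffers that would look roughly like:

      // sizeof(m_vertices) is the size of the Vertex3D* pointer, not of the data;
      // compute the upload size from the number of vertices instead.
      const GLsizeiptr bufferBytes = m_vertexCount * sizeof(Vertex3D);
      glBufferData(GL_ARRAY_BUFFER, bufferBytes, m_vertices, GL_STATIC_DRAW);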

    Read the article

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites:

      example1.com
      example2.com

    When a user visits 1, there is a prompt: "please sign up to a newsletter". The same thing happens when a user visits 2. However, when navigating from 1 to 2, I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working:

      http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1
      http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1

    Another thought is to append #noshow to each URL, but that would require some work - for instance a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks

    Read the article

  • Storing a pass-by-reference parameter as a pointer - Bad practice?

    - by Karl Nicoll
    I recently came across the following pattern in an API I've been forced to use:

      class SomeObject
      {
      public:
          // Constructor.
          SomeObject(bool copy = false);

          // Set a value.
          void SetValue(const ComplexType &value);

      private:
          bool m_copy;
          ComplexType *m_pComplexType;
          ComplexType m_complexType;
      };

      // ------------------------------------------------------------
      SomeObject::SomeObject(bool copy) :
          m_copy(copy)
      {
      }

      // ------------------------------------------------------------
      void SomeObject::SetValue(const ComplexType &value)
      {
          if (m_copy)
              m_complexType.assign(value);
          else
              m_pComplexType = const_cast<ComplexType *>(&value);
      }

    The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class SomeObject efficient by only holding a pointer to the object until it needs to be encoded, but also provide the option to copy values if the lifetime of the SomeObject exceeds the lifetime of a ComplexType. However, consider the following:

      SomeObject SomeFunction()
      {
          ComplexType complexTypeInstance(1);         // Create an instance of ComplexType.
          SomeObject encodeHelper;
          encodeHelper.SetValue(complexTypeInstance); // Okay.
          return encodeHelper; // Uh oh! complexTypeInstance has been destroyed, and
                               // now encoding will venture into the realm of undefined
                               // behaviour!
      }

    I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing? Thanks!
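
    For comparison, the overload alternative mentioned at the end might look roughly like the sketch below (an illustration of the idea only, not the API's actual design); splitting the two cases makes the aliasing visible at the call site instead of hiding it behind the constructor flag:

      // Sketch: separate entry points for "copy the value" and "alias a caller-owned object".
      void SomeObject::SetValue(const ComplexType &value)   // copies - always safe
      {
          m_copy = true;
          m_complexType.assign(value);
      }

      void SomeObject::SetValue(ComplexType *value)         // aliases - caller manages lifetime
      {
          m_copy = false;
          m_pComplexType = value;
      }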

    Read the article

  • Should I pass an object into a constructor, or instantiate in class?

    - by Prisoner
    Consider these two examples:

    Passing an object to a constructor:

      class ExampleA
      {
          private $config;

          public function __construct($config)
          {
              $this->config = $config;
          }
      }

      $config = new Config;
      $exampleA = new ExampleA($config);

    Instantiating a class:

      class ExampleB
      {
          private $config;

          public function __construct()
          {
              $this->config = new Config;
          }
      }

      $exampleA = new ExampleA();

    Which is the correct way to handle adding an object as a property? When should I use one over the other? Does unit testing affect what I should use?

    Read the article

  • How can I optimize this recursive method?

    - by Tirdyr
    Hi there. I'm trying to make a word puzzle game, and for that I'm using a recursive method to find all possible words in the given letters. The letters are in a 4x4 board, like this:

      ABCD
      EFGH
      HIJK
      LMNO

    The recursive method is called inside this loop:

      for (int y = 0; y < width; y++)
      {
          for (int x = 0; x < height; x++)
          {
              myScabble.Search(letters, y, x, width, height, "", covered, t);
          }
      }

    letters is a 2D array of chars. y & x are ints that say where we are in the board. width & height are also ints that give the dimensions of the board. "" is the string we are trying to build (the word). covered is an array of bools, to check if we already used that square. t is a List (which contains all the words to check against). The recursive method that needs optimizing:

      public void Search(char[,] letters, int y, int x, int width, int height,
                         string build, bool[,] covered, List<aWord> tt)
      {
          // Don't go outside the bounds
          if (y >= width || y < 0 || x >= height || x < 0)
          {
              return;
          }
          // Don't deal with already covered squares
          if (covered[x, y])
          {
              return;
          }
          // Get letter
          char letter = letters[x, y];
          // Append
          string pass = build + letter;
          // Check if it's a possible word
          //List<aWord> t = myWords.aWord.Where(w => w.word.StartsWith(pass)).ToList();
          List<aWord> t = tt.Where(w => w.word.StartsWith(pass)).ToList();
          // Check if the list is empty
          if (t.Count < 10 && t.Count != 0)
          {
              // stop point
          }
          if (t.Count == 0)
          {
              return;
          }
          // Check if it's a complete word.
          if (t[0].word == pass)
          {
              // Check if it's already present in the _found dictionary
              if (!_found.ContainsKey(pass))
              {
                  // If not, add the word to the dictionary
                  _found.Add(pass, true);
              }
          }
          // Check to see if there is more than 1 word that matches string pass,
          // i.e. are there more words to find.
          if (t.Count > 1)
          {
              // Make a copy of the covered array
              bool[,] cov = new bool[height, width];
              for (int i = 0; i < width; i++)
              {
                  for (int a = 0; a < height; a++)
                  {
                      cov[a, i] = covered[a, i];
                  }
              }
              // Set the current square as covered.
              cov[x, y] = true;
              // Continue in all 8 directions.
              Search(letters, y + 1, x, width, height, pass, cov, t);
              Search(letters, y, x + 1, width, height, pass, cov, t);
              Search(letters, y + 1, x + 1, width, height, pass, cov, t);
              Search(letters, y - 1, x, width, height, pass, cov, t);
              Search(letters, y, x - 1, width, height, pass, cov, t);
              Search(letters, y - 1, x - 1, width, height, pass, cov, t);
              Search(letters, y - 1, x + 1, width, height, pass, cov, t);
              Search(letters, y + 1, x - 1, width, height, pass, cov, t);
          }
      }

    The code works as I expected it to, however it is very slow: it takes about 2 mins to find the words. EDIT: I clarified that the letters array is 2D.
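
    Most of the time here goes into tt.Where(w => w.word.StartsWith(pass)).ToList(), which rescans and re-allocates the candidate list at every recursion step. The usual remedy is to index the dictionary once in a prefix tree (trie), so "does any word start with pass?" costs only the length of the prefix and the recursion can carry the current trie node down instead of re-filtering a list. The sketch below shows that data structure in C++ (the question's code is C#, but the structure is the same) and assumes uppercase A-Z letters, as on the board:

      #include <array>
      #include <memory>
      #include <string>

      struct TrieNode {
          bool isWord = false;
          std::array<std::unique_ptr<TrieNode>, 26> next{};  // one slot per letter 'A'..'Z'
      };

      // Build the index once from the word list.
      void Insert(TrieNode &root, const std::string &word) {
          TrieNode *node = &root;
          for (char c : word) {
              auto &slot = node->next[c - 'A'];
              if (!slot) slot = std::make_unique<TrieNode>();
              node = slot.get();
          }
          node->isWord = true;
      }

      // Returns the node reached by a prefix, or nullptr if no word starts with it.
      const TrieNode *FindPrefix(const TrieNode &root, const std::string &prefix) {
          const TrieNode *node = &root;
          for (char c : prefix) {
              node = node->next[c - 'A'].get();
              if (!node) return nullptr;
          }
          return node;
      }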

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

      Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
      System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

      Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
      Installation date: 6/29/2011 3:00 AM
      Installation status: Failed
      Error details: Code 1
      Update type: Important
      A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
      More information: http://go.microsoft.com/fwlink/?LinkId=216803

      System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
      Installation date: 6/28/2011 3:00 AM
      Installation status: Failed
      Error details: Code 1
      Update type: Important
      This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
      More information: http://support.microsoft.com/kb/947821

    About my system: I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows Updates aside, the system is usually quite stable. Thanks in advance for your help!

    Read the article

  • Apache 2.2, worker mpm, mod_fcgid and PHP: Can't apply process slot

    - by mopoke
    We're having an issue on an Apache server where every 15 to 20 minutes it stops serving PHP requests entirely. On occasions it will return a 503 error; other times it will recover enough to serve the page, but only after a delay of a minute or more. Static content is still served during that time. In the log file, there are errors reported along the lines of:

      [Wed Sep 28 10:45:39 2011] [warn] mod_fcgid: can't apply process slot for /xxx/ajaxfolder/ajax_features.php
      [Wed Sep 28 10:45:41 2011] [warn] mod_fcgid: can't apply process slot for /xxx/statics/poll/index.php
      [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
      [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php

    There is RAM free and, indeed, it seems that more PHP processes get spawned. /server-status shows lots of threads in the "W" state as well as some FastCGI processes in the "Exiting(communication error)" state. I rebuilt mod_fcgid from source as the packaged version was quite old; it's using the current stable version (2.3.6) of mod_fcgid.

    FCGI config:

      FcgidBusyScanInterval 30
      FcgidBusyTimeout 60
      FcgidIdleScanInterval 30
      FcgidIdleTimeout 45
      FcgidIOTimeout 60
      FcgidConnectTimeout 20
      FcgidMaxProcesses 100
      FcgidMaxRequestsPerProcess 500
      FcgidOutputBufferSize 1048576

    System info:

      Linux xxx.com 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=9.04
      DISTRIB_CODENAME=jaunty
      DISTRIB_DESCRIPTION="Ubuntu 9.04"

    Apache info:

      Server version: Apache/2.2.11 (Ubuntu)
      Server built: Aug 16 2010 17:45:55
      Server's Module Magic Number: 20051115:21
      Server loaded: APR 1.2.12, APR-Util 1.2.12
      Compiled using: APR 1.2.12, APR-Util 1.2.12
      Architecture: 64-bit
      Server MPM: Worker
        threaded: yes (fixed thread count)
        forked: yes (variable process count)
      Server compiled with....
        -D APACHE_MPM_DIR="server/mpm/worker"
        -D APR_HAS_SENDFILE
        -D APR_HAS_MMAP
        -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
        -D APR_USE_SYSVSEM_SERIALIZE
        -D APR_USE_PTHREAD_SERIALIZE
        -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
        -D APR_HAS_OTHER_CHILD
        -D AP_HAVE_RELIABLE_PIPED_LOGS
        -D DYNAMIC_MODULE_LIMIT=128
        -D HTTPD_ROOT=""
        -D SUEXEC_BIN="/usr/lib/apache2/suexec"
        -D DEFAULT_PIDLOG="/var/run/apache2.pid"
        -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
        -D DEFAULT_ERRORLOG="logs/error_log"
        -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
        -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    Apache modules loaded:

      alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load
      authz_host.load authz_user.load autoindex.load cgi.load deflate.load dir.load env.load
      expires.load fcgid.load headers.load include.load mime.load negotiation.load rewrite.load
      setenvif.load ssl.load status.load suexec.load

    PHP info:

      PHP 5.2.6-3ubuntu4.6 with Suhosin-Patch 0.9.6.2 (cli) (built: Sep 16 2010 19:51:25)
      Copyright (c) 1997-2008 The PHP Group
      Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
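
    One pair of knobs worth knowing about in mod_fcgid 2.3.x, shown here purely as an illustration of the per-class limits with arbitrary example values (an assumption, not a verified fix for the symptom above), is the per-wrapper process cap, which is separate from the global FcgidMaxProcesses:

      # Example values only - tune to available RAM and PHP's memory_limit.
      # Global ceiling across all FastCGI process classes:
      FcgidMaxProcesses 100
      # Ceiling for each wrapper/vhost class:
      FcgidMaxProcessesPerClass 30
      # Spare processes kept alive per class:
      FcgidMinProcessesPerClass 3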

    Read the article

  • Unknown Apache2 + PHP5 FastCGI 500 error .. caused by search engine bots?

    - by rdjurovich
    My Ubuntu server is configured with Apache 2.2.8 and PHP 5.2.4-2ubuntu5.18 in FastCGI mode. Everything works well, except I am seeing 500 errors that only seem to come from bots accessing the server... for example (access.log):

      x.125.71.104 - - [16/Nov/2011:10:27:39 +1100] "GET / HTTP/1.1" 500 41377 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
      x.40.103.239 - - [16/Nov/2011:11:05:56 +1100] "GET / HTTP/1.0" 500 14717 "-" "Mozilla/5.0 (compatible; mon.itor.us - free monitoring service; http://mon.itor.us)"
      x.249.67.114 - - [14/Nov/2011:20:57:17 +1100] "GET / HTTP/1.1" 500 101 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
      x.55.39.85 - - [14/Nov/2011:19:31:06 +1100] "GET / HTTP/1.1" 500 7032 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)._"

    It is my understanding that a 500 error will be thrown when the PHP process fails to respond to Apache, which could be caused by a fatal PHP error or if PHP runs out of processes... so my assumption is that either the bots are hitting the server too hard, killing the PHP processes, or something in the request header from bots is causing a fatal error in my PHP script? If anyone can offer advice on this it would be greatly appreciated! Ryan

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and have read the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

      1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
      1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
      1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
      1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have the Windows firewall disabled already). The above is the initial plan we have for 3000 - 4000 (Windows) client workstations. Now my questions :-)

      a) If we had these users distributed amongst two sites with an AD DC / GC in each site, how would I restrict SEPM and the Desktop Recovery management solution to only check for users in their respective site?
      b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
      c) What hardware would you recommend as a server spec for the SQL server? 16GB RAM, dual Xeon?
      d) What hardware would you recommend as a server spec for the management servers? 16GB RAM each, with dual Xeon and SAS disks?
      e) Also, how do you or would you recommend to protect these 4 servers (2 x SQL and 2 x management servers)?
      f) How would you recommend to store backups for these desktops? We do have a SAN and a NAS in our environment, and we do have one spare DAS (Dell MD3000).

    If you have anything to add / correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections with the above - many thanks! Rihatum

    Read the article

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any of them or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

      GET / HTTP/1.1
      Host: asdfasdf

    and hit enter a couple of times, it just sits there... Nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

      [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
      [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
      [Sun Jan 09 16:56:25 2011] [notice] Digest: done
      [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

      root:/var/log# wc httpd-access.log
             0       0       0 httpd-access.log
      root:/var/log#

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and just re-enabled the ones that were necessary to parse the config. Ended up with only the following loaded:

      root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
      Loaded Modules:
       core_module (static)
       mpm_peruser_module (static)
       http_module (static)
       so_module (static)
       authz_host_module (shared)
       log_config_module (shared)
       alias_module (shared)
      Syntax OK
      root:/usr/local/etc/apache22#

    Same results. Apache is definitely what's listening on port 80:

      root:/usr/local/etc/apache22# sockstat -4 | grep httpd
      root  httpd  43789  3  tcp4  6  *:80  *:*
      root  httpd  43789  4  tcp4     *:*   *:*
      root:/usr/local/etc/apache22#

    And I know it's not a firewall issue, as there is nothing running locally and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!

    Read the article

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean out some of the log clutter from my machines and am starting by removing requests that are generated by the servers themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to get Apache to stop logging local requests by adding a dontlog variable for the local IP:

      SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog
      CustomLog "|logger -p local3.info -t http" combined env=!dontlog

    and now I am looking for something similar to put in a configuration for the HAProxy log. How can I prevent 127.0.0.1 requests from writing to the HAProxy log?

    UPDATE 2/15/11: I use the excellent Loggly service to pull out logs in the cloud, but I am seeing tons of logs like this:

      2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)"
      2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1"

    and I want them gone. This question focuses on how to keep that haproxy log line from being written to the server-side log in the first place.

    Read the article

  • Trouble running multiple domains on Tomcat behind Apache via mod_jk

    - by mkoryak
    I am having trouble setting up tomcat6 with 2 virtual hosts, behind apache2. If I have just one host defined in Tomcat, and one jk worker, everything works fine. As soon as I define another jk worker and a corresponding Tomcat host, I get this error in jk.log:

      9:3075328656] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (69.164.218.75:8009) (errno=111)
      [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_send_request::jk_ajp_common.c (1507): (dogself) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
      [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] ajp_service::jk_ajp_common.c (2447): (dogself) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
      [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_service::jk_ajp_common.c (2466): (dogself) connecting to tomcat failed.
      [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=dogself

    My Tomcat server.xml looks like this:

      <Service name="Catalina">
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   URIEncoding="UTF-8"
                   redirectPort="8443" />
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
        <Engine name="Catalina" defaultHost="dogself.com">
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
          <Host name="dogself.com" appBase="webapps-dogself"
                unpackWARs="true" autoDeploy="true"
                xmlValidation="false" xmlNamespaceAware="false">
          </Host>
          <Host name="nousophia.com" appBase="webapps-test"
                unpackWARs="true" autoDeploy="true"
                xmlValidation="false" xmlNamespaceAware="false">
          </Host>
        </Engine>
      </Service>

    My workers.properties looks like this:

      # workers.properties - ajp13
      #
      # List workers
      worker.list=dogself,nousophia
      # Define dogself
      worker.dogself.port=8009
      worker.dogself.host=dogself.com
      worker.dogself.type=ajp13
      worker.nousophia.port=8009
      worker.nousophia.host=nousophia.com
      worker.nousophia.type=ajp13

    Tomcat is started/restarted. I followed these directions for setting it up: http://stackoverflow.com/questions/1765399/linking-apache-to-tomcat-with-multiple-domains Can someone confirm that it would work as above?
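
    For comparison, a frequently used arrangement when both virtual hosts live in the same Tomcat instance is a single AJP worker pointed at localhost, with Tomcat picking the matching <Host> from the forwarded Host header. A sketch only (it assumes Apache and Tomcat run on the same machine; the worker name is arbitrary):

      # workers.properties - one worker for the single Tomcat instance
      worker.list=ajp13
      worker.ajp13.type=ajp13
      worker.ajp13.host=localhost
      worker.ajp13.port=8009

    Each Apache virtual host would then JkMount its paths to that one worker, and Tomcat's two <Host> entries select the right appBase from the incoming Host header.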

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

      Warning: The file properties have changed:
      [15:52:25] File: /bin/ps
      [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
      [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
      [15:52:25] Current inode: 142902 Stored inode: 130894
      [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:33] File: /usr/bin/ldd
      [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
      [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
      [15:52:33] Current inode: 2236210 Stored inode: 2234359
      [15:52:33] Current size: 5280 Stored size: 5279
      [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
      [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

      Warning: The file properties have changed:
      [15:52:37] File: /usr/bin/pgrep
      [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
      [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
      [15:52:37] Current inode: 2229646 Stored inode: 2224867
      [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:41] File: /usr/bin/top
      [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
      [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
      [15:52:41] Current inode: 2229629 Stored inode: 2224862
      [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:53] File: /usr/sbin/cron
      [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
      [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
      [15:52:53] Current inode: 2224719 Stored inode: 2228839
      [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives... Edit: Is there something else like rkhunter I can run for a second scan?

    Read the article
