Search Results

Search found 834 results on 34 pages for 'fragment'.

Page 9/34

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling. I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems to be a precision error. I don't know what specifically causes the precision to fail - it doesn't seem to be just the size of the geometry, because simply scaling the vertex positions passed to the vertex shader does not solve the issue. Here are some examples of the texture distortion:
    Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
    What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png
    What I have observed and tried:
    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X Y Z location are in the range of -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping, or using GL_NEAREST/GL_LINEAR, does nothing; enabling/disabling anisotropic filtering does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader - didn't change anything (lowp/mediump did not work either).
    I'm thinking this problem has to have been solved at some point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
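    One way to make the precision loss directly visible - a diagnostic sketch rather than a fix, with v_texcoord standing in for whatever varying carries the UVs and 256.0 for the texture size in texels:
        precision highp float;
        varying vec2 v_texcoord;  // assumed varying name
        void main() {
            // banding or stair-stepping here shows where the interpolator
            // can no longer resolve a single texel step
            gl_FragColor = vec4(fract(v_texcoord * 256.0), 0.0, 1.0);
        }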

    Read the article

  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus and finally managed to get Bumblebee to work fully on Ubuntu 12.04, so that I can utilize both the hardware acceleration of the Intel chipset and the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should - I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:
        :~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "NV-GLX" missing on display ":0".
        OpenGL vendor string: Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
        OpenGL version string: 1.4 (2.1 Mesa 8.0.2)
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: no
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no
    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log shows any particular errors. As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an nvidia-specific folder, and specified that folder in the bumblebee.config. That seems to work and shouldn't cause the problem I see here. For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:
        ~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
        OpenGL version string: 4.2.0 NVIDIA 304.22
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    Read the article

  • Can I set my Optimus Nvidia card to run Unity3D with bumblebee?

    - by manuhalo
    I'd like to know whether I can run compiz on my Nvidia card to speed things up. It's a Dell XPS15 laptop, but I'm mostly using it as a desktop, so battery life is not important. Apparently my integrated Intel card is able to run Unity 3D, but my Nvidia GT 420M is not. Here's the output of unity_support_test, both with optirun and without it:
        manuhalo@Ubuntu-XPS-L501X:~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce GT 420M/PCI/SSE2
        OpenGL version string: 4.1.0 NVIDIA 280.13
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no
        manuhalo@Ubuntu-XPS-L501X:~$ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile
        OpenGL version string: 2.1 Mesa 7.11
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: yes
    Any ideas why this is happening? Thanks in advance to anyone able to shed some light on this. What I have tried:
    - Installed the v290 drivers from the x-stable PPA.
    - Tried forcing Unity 3D to work by telling Unity to ignore the unity_support_test results, i.e. gksudo gedit /etc/environment and adding UNITY_FORCE_START=1 to the end of the file.

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or a completely white texture. Here are my questions:
    - I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    - The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    - Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.
    Here's my shader code:
        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }
    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.
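    On the "is there a way to debug shaders" question, one low-tech trick is to write the suspect quantity straight into the output color and judge it by eye - a sketch in the same Cg style, reusing the names above:
        void DebugFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // mid-gray means Depth is roughly half of fFar; a solid black
            // or white screen suggests the parameter isn't arriving at all
            float d = saturate( Depth / fFar );
            outColor = float4( d, d, d, 1.0 );
        }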

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders and I'm facing some problems. Below you can see an example of my rendered scene:
    The process of shadow mapping I'm following is that I render the scene to the framebuffer using a View Matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates.
    Here is my vertex shader:
        #version 150 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void){
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }
    And my fragment shader:
        #version 150 core

        struct Light{
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position){
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if(Depth < (ProjectionCoords.z + 0.001)) return 0.5;
            else return 1.0;
        }

        void main(void){
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if(diffuse != 0 && shadowFactor > 0.5){
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }
    One of the problems is self-shadowing: as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(GLfloat, GLfloat)), but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front-face culling when I render to the framebuffer, but that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices - should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial.
    Does anyone have easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison with and without shadow mapping of a close-up of the crate. The self-shadowing is more visible.
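    On the fixed-bias point, a widely used refinement is slope-scaled bias: scale the offset with the angle between surface and light, so surfaces facing the light get almost no bias and grazing surfaces get more. A sketch of how the comparison inside CalcShadowFactor could incorporate it (Normal and light_Direction would need to be passed in from main; the 0.005 and 0.01 constants are illustrative tuning values, not from the original post):
        float cosTheta = clamp(dot(Normal, light_Direction), 0.0, 1.0);
        float bias = clamp(0.005 * tan(acos(cosTheta)), 0.0, 0.01);
        if(Depth < ProjectionCoords.z - bias) return 0.5;  // occluded
        else return 1.0;                                   // lit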

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring, lighting, and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render!
    I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop - coloring, lighting, texturing and all - but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea of what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better.
    My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual-booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?
        string vertexShaderSource = @"
        #version 330

        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;    //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);
            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);
            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));
            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;
            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;
            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable

        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";
    Note that in this code the color data is accepted but not actually used. The geometry is output the same (wrong) way whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.
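    One vendor difference that matches these symptoms is generic vertex attribute location assignment: unless locations are bound explicitly, the GLSL linker is free to number in_position and in_color differently on AMD than on NVIDIA, so the buffer intended for colors can end up feeding positions. A sketch of pinning the locations down in OpenTK before linking (programHandle is a placeholder for however the program object is named):
        // must happen BEFORE GL.LinkProgram for the bindings to take effect
        GL.BindAttribLocation(programHandle, 0, "in_position");
        GL.BindAttribLocation(programHandle, 1, "in_color");
        GL.LinkProgram(programHandle);
        // ...and use the same indices 0 and 1 in GL.VertexAttribPointer /
        // GL.EnableVertexAttribArray when describing the VBOs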

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause.
    I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file.
    Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem:
        fragment 1400
        mssfix
    This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive - 1200, 1000, 576 - all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location.
    One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like:
        [~]$ tracepath -n 192.168.100.91
         1:  192.168.100.90      0.039ms pmtu 1500
         1:  192.168.100.91     40.823ms reached
         1:  192.168.100.91     19.846ms reached
             Resume: pmtu 1500 hops 1 back 64
    The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try?
    Edit: I've come back and looked at this a bit further, and have found only more confounding information. I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400-byte count, so the fragmentation seems to be functioning properly.
    To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command:
        ping <host> -s 1450 -M do
    This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So maybe this isn't an MTU issue at all; I'm just confused as to what else it might be!
    Edit 2: The rabbit hole just keeps getting deeper. I've now isolated the problem a bit more: it seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze.
    This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls.
    My question here would be: does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug, maybe? I'm assuming that if that were so, I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.
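    For reference, the MTU-related directives under discussion have to agree on both ends of the tunnel - a sketch of the relevant stanza as it might appear in both the server and client configs (values illustrative; fragment is only meaningful on UDP links, and mssfix optionally takes an explicit size):
        # both server and client config (illustrative values)
        fragment 1300
        mssfix 1300
        tun-mtu 1500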

    Read the article

  • OpenGL Colorspace Conversion

    - by Steven Behnke
    Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV-to-RGB colorspace conversion without having to use a fragment shader? I'm using an NVidia 9400 and I don't see an obvious GL extension that seems to do the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the regression testing necessary. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the Nvidia card.
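    For reference, the MESA extension alluded to is presumably GL_MESA_ycbcr_texture, which lets glTexImage2D accept packed 4:2:2 YCbCr data directly - a sketch in C, usable only where the extension string advertises it (which the Nvidia binary driver typically doesn't, matching the behaviour described):
        /* constants come from the GL_MESA_ycbcr_texture extension header */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_YCBCR_MESA,
                     width, height, 0,
                     GL_YCBCR_MESA, GL_UNSIGNED_SHORT_8_8_MESA,
                     pixels /* packed 4:2:2 data; byte order depends on the
                               _8_8 vs _8_8_REV type chosen */);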

    Read the article

  • Caching query results in django

    - by Marcio Cruz
    I'm trying to find a way to cache the results of a query that won't change frequently - for example, categories of products on an e-commerce site (cellphones, TVs, etc.). I'm thinking of using template fragment caching, but in this fragment I will iterate over a list of these categories. This list is available in every part of the site, so it's in my base.html file. Do I always have to send the list of categories when rendering the templates? Or is there a more dynamic way to do this, making the list always available in the template?
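    One common answer to the "always available without passing it explicitly" part is a context processor, paired with the {% cache %} tag around the loop in base.html - a sketch, with the app, model, and cache-key names made up for illustration:
        # myapp/context_processors.py -- register in TEMPLATE_CONTEXT_PROCESSORS
        from myapp.models import Category

        def categories(request):
            # querysets are lazy: the query only runs if the template
            # actually evaluates it, i.e. on a fragment-cache miss
            return {'categories': Category.objects.all()}

        {# base.html #}
        {% load cache %}
        {% cache 3600 category_list %}
        <ul>
          {% for category in categories %}<li>{{ category.name }}</li>{% endfor %}
        </ul>
        {% endcache %}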

    Read the article

  • Android app crashes when I change the default xml layout file to another

    - by mib1413456
    I am currently just starting to learn Android development and have created a basic "Hello world" app that uses "activity_main.xml" for the default layout. I tried to create a new layout XML file called "new_layout.xml" with a text view, a text field, and a button, and made the following change in the MainActivity.java file:
        setContentView(R.layout.new_layout);
    I did nothing else except for adding new_layout.xml in the res/layout folder. I have tried restarting and cleaning the project, but nothing helped. Below are my activity_main.xml file, new_layout.xml file, and MainActivity.java.
    activity_main.xml:
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:id="@+id/container"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            tools:context="org.example.androidsdk.demo.MainActivity"
            tools:ignore="MergeRootFrame" />
    new_layout.xml:
        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="horizontal" >

            <TextView
                android:id="@+id/textView1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="TextView" />

            <EditText
                android:id="@+id/editText1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:ems="10" >
                <requestFocus />
            </EditText>

            <Button
                android:id="@+id/button1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Button" />

        </LinearLayout>
    MainActivity.java:
        package org.example.androidsdk.demo;

        import android.app.Activity;
        import android.app.ActionBar;
        import android.app.Fragment;
        import android.os.Bundle;
        import android.view.LayoutInflater;
        import android.view.Menu;
        import android.view.MenuItem;
        import android.view.View;
        import android.view.ViewGroup;
        import android.os.Build;

        public class MainActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.new_layout);

                if (savedInstanceState == null) {
                    getFragmentManager().beginTransaction()
                            .add(R.id.container, new PlaceholderFragment())
                            .commit();
                }
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                // Handle action bar item clicks here. The action bar will
                // automatically handle clicks on the Home/Up button, so long
                // as you specify a parent activity in AndroidManifest.xml.
                int id = item.getItemId();
                if (id == R.id.action_settings) {
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }

            /**
             * A placeholder fragment containing a simple view.
             */
            public static class PlaceholderFragment extends Fragment {

                public PlaceholderFragment() {
                }

                @Override
                public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
                    View rootView = inflater.inflate(R.layout.fragment_main, container, false);
                    return rootView;
                }
            }
        }
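    One detail a reader might flag in the code as pasted: onCreate() still commits PlaceholderFragment into R.id.container, an id that exists in activity_main.xml but not in new_layout.xml - the classic recipe for a "No view found for id" crash. A sketch of the minimal change under that assumption, dropping the transaction when new_layout.xml is the whole UI:
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.new_layout);
            // no fragment transaction: new_layout.xml defines no @+id/container
        }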

    Read the article

  • agent-based simulation: performance issue: Python vs NetLogo & Repast

    - by max
    I'm replicating a small piece of the Sugarscape agent simulation model in Python 3. I found the performance of my code is ~3 times slower than that of NetLogo. Is it likely a problem with my code, or can it be an inherent limitation of Python? Obviously, this is just a fragment of the code, but that's where Python spends two-thirds of the run-time. I hope that if I wrote something really inefficient, it might show up in this fragment:
        UP = (0, -1)
        RIGHT = (1, 0)
        DOWN = (0, 1)
        LEFT = (-1, 0)
        all_directions = [UP, DOWN, RIGHT, LEFT]

        # point is just a tuple (x, y)
        def look_around(self):
            max_sugar_point = self.point
            max_sugar = self.world.sugar_map[self.point].level
            min_range = 0
            random.shuffle(self.all_directions)
            for r in range(1, self.vision+1):
                for d in self.all_directions:
                    p = ((self.point[0] + r * d[0]) % self.world.surface.length,
                         (self.point[1] + r * d[1]) % self.world.surface.height)
                    if self.world.occupied(p):  # checks if p is in a lookup table (dict)
                        continue
                    if self.world.sugar_map[p].level > max_sugar:
                        max_sugar = self.world.sugar_map[p].level
                        max_sugar_point = p
            if max_sugar_point is not self.point:
                self.move(max_sugar_point)
    Roughly equivalent code in NetLogo (this fragment does a bit more than the Python function above):
        ; -- The SugarScape growth and motion procedures. --
        to M  ; Motion rule (page 25)
          locals [ps p v d]
          set ps (patches at-points neighborhood) with [count turtles-here = 0]
          if (count ps > 0) [
            set v psugar-of max-one-of ps [psugar]            ; v is max sugar w/in vision
            set ps ps with [psugar = v]                       ; ps is legal sites w/ v sugar
            set d distance min-one-of ps [distance myself]    ; d is min dist from me to ps agents
            set p random-one-of ps with [distance myself = d] ; p is one of the min dist patches
            if (psugar >= v and includeMyPatch?) [set p patch-here]
            setxy pxcor-of p pycor-of p   ; jump to p
            set sugar sugar + psugar-of p ; consume its sugar
            ask p [setpsugar 0]           ; .. setting its sugar to 0
          ]
          set sugar sugar - metabolism    ; eat sugar (metabolism)
          set age age + 1
        end
    On my computer, the Python code takes 15.5 sec to run 1000 steps; on the same laptop, the NetLogo simulation running in Java inside the browser finishes 1000 steps in less than 6 sec.
    EDIT: Just checked Repast, using the Java implementation. It's also about the same as NetLogo, at 5.4 sec. Recent comparisons between Java and Python suggest no advantage to Java, so I guess it's just my code that's to blame?
    EDIT: I understand MASON is supposed to be even faster than Repast, and yet it still runs Java in the end.
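    For what it's worth, CPython pays heavily for repeated attribute lookups (self.world.sugar_map, self.point, ...) inside hot loops. A sketch of the same function with those lookups hoisted into locals - behavior unchanged, names mirroring the original fragment:
        def look_around(self):
            # hoist attribute lookups out of the hot loop
            sugar_map = self.world.sugar_map
            occupied = self.world.occupied
            length = self.world.surface.length
            height = self.world.surface.height
            x0, y0 = self.point
            best_point = self.point
            best = sugar_map[self.point].level
            directions = self.all_directions
            random.shuffle(directions)
            for r in range(1, self.vision + 1):
                for dx, dy in directions:
                    p = ((x0 + r * dx) % length, (y0 + r * dy) % height)
                    if occupied(p):
                        continue
                    level = sugar_map[p].level
                    if level > best:
                        best = level
                        best_point = p
            if best_point is not self.point:
                self.move(best_point)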

    Read the article

  • How to get the request url from HttpServletRequest

    - by Gagan
    Say I make a GET request like this:
        GET http://cotnet.diggstatic.com:6000/js/loader/443/JS_Libraries,jquery|Class|analytics|lightbox|label|jquery-dom|jquery-cookie?q=hello#frag HTTP/1.0
        Host: cotnet.diggstatic.com:6000
    My servlet takes the request like this: HttpServletRequest req;. When I debug my server and execute, I get the following:
        req.getRequestURL().toString() = "http://cotnet.diggstatic.com:6000/js/loader/443/JS_Libraries,jquery%7cClass%7canalytics%7clightbox%7clabel%7cjquery-dom%7cjquery-cookie"
        req.getRequestURI() = "/js/loader/443/JS_Libraries,jquery%7cClass%7canalytics%7clightbox%7clabel%7cjquery-dom%7cjquery-cookie"
        req.getQueryString() = "q=hello"
    How does one get the fragment information? Also, when I debug the request, I see a uri_ field of type java.net.URI which has the fragment information. This is exactly what I want. How can I get that?
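    As an aside, the java.net.URI type spotted in the debugger is also the standard parser for fragments, if the full URL string (with its #frag intact) can be obtained from somewhere - a sketch (new URI(...) throws URISyntaxException; note that browsers normally strip the fragment before sending, so the hand-written request above is unusual in carrying it at all):
        import java.net.URI;

        // sketch: pulling a fragment out of a full URL string
        URI uri = new URI("http://cotnet.diggstatic.com:6000/js/loader?q=hello#frag");
        String fragment = uri.getFragment();  // "frag" (null if absent)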

    Read the article

  • DocumentFragment not appending in IE

    - by bmwbzz
    I have a select list which, when changed, pulls data via ajax and dynamically creates select lists. Then, based on the data used to create the select lists (a type), I pull more data via ajax if I don't have it already, create the options for the select list, and store them in a fragment. Then I append that fragment to the select list. This is zippy in FF3 and Chrome, but in IE7 it either doesn't append the options at all or takes a long time (minutes) to append them. Note: I am also using jQuery.
    Code from the success callback which creates the select lists:
        blockDiv.empty();
        var contentItemTypes = new Array();
        selectLists = new Array();
        for (var post in msg) {
            if (post != undefined) {
                var div = fragment.cloneNode(true); //deep copy
                var nameDiv = $(div.firstChild);
                nameDiv.text(msg[post].Name);
                blockDiv[0].appendChild(div);
                var allSelectLists = blockDiv.find('.editor-field select');
                var selectList = $(allSelectLists[allSelectLists.length - 1]);
                var blockId = msg[post].ID;
                var elId = 'PageContentItem.' + blockId;
                selectList.attr('id', elId);
                selectList.attr('name', elId);
                var contentItemTypeId = msg[post].ContentItemTypeId;
                selectList.attr('cit', contentItemTypeId);
                if (contentItems[contentItemTypeId] != null || contentItems[contentItemTypeId] != undefined) {
                    contentItems[contentItemTypeId] = null;
                }
                selectLists[post] = selectList;
            }
        }
        var firstContentTypeId = selectLists[0].attr('cit');
        getContentItems(firstContentTypeId, setContentItemsForList, 0);
    Code to get the items for the options in the select lists:
        function getContentItems(contentTypeId, callback, callbackParam) {
            if (contentItems[contentTypeId] != null || contentItems[contentTypeId] != undefined) {
                callback(contentTypeId, callbackParam);
                return;
            }
            contentItems[contentTypeId] = document.createDocumentFragment();
            Q.ajax({
                type: "POST",
                url: "/CMS/ContentItem/ListByContentType/" + contentTypeId,
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                error: function(xhr, msg, e) {
                    var err = eval("(" + xhr.responseText + ")");
                    alert(err.ExceptionType + " ***** " + err.Message + " ***** " + err.StackTrace);
                },
                success: function(msg) {
                    var li;
                    for (var post in msg) {
                        if (post != undefined) {
                            li = $('<option value="' + msg[post].ID + '">' + msg[post].Description + '</option>');
                            contentItems[contentTypeId].appendChild(li[0]);
                        }
                    }
                    callback(contentTypeId, callbackParam);
                }
            });
        }

        function setContentItemsForList(contentTypeId, selectIndex) {
            if (selectIndex < selectLists.length) {
                var items = contentItems[contentTypeId].cloneNode(true);
                selectLists[selectIndex].append($(items.childNodes));
                selectIndex++;
                if (selectIndex < selectLists.length) {
                    var nextContentTypeId = selectLists[selectIndex].attr('cit');
                    getContentItems(nextContentTypeId, setContentItemsForList, selectIndex);
                }
            }
        }

    Read the article

  • Scrolling RelativeLayout- white border over part of the content

    - by Tanis.7x
    I have a fairly simple Fragment that adds a handful of colored ImageViews to a RelativeLayout. There are more images than can fit on screen, so I implemented some custom scrolling. However, when I scroll around, I see an approximately 90dp white border overlapping part of the content, right where the edges of the screen are before I scroll. It is obvious that the ImageViews are still being created and drawn properly, but they are being covered up. How do I get rid of this? I have tried:
    - Changing both the RelativeLayout and FrameLayout to WRAP_CONTENT, FILL_PARENT, MATCH_PARENT, and a few combinations of those.
    - Setting the padding and margins of both layouts to 0dp.
    Example fragment:
        public class MyFrag extends Fragment implements OnTouchListener {
            int currentX;
            int currentY;
            RelativeLayout container;
            final int[] colors = {Color.BLACK, Color.RED, Color.BLUE};

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup fragContainer, Bundle savedInstanceState) {
                return inflater.inflate(R.layout.fragment_myfrag, null);
            }

            @Override
            public void onActivityCreated(Bundle savedInstanceState) {
                super.onActivityCreated(savedInstanceState);
                container = (RelativeLayout) getView().findViewById(R.id.container);
                container.setOnTouchListener(this);

                // Temp- Add a bunch of images to test scrolling
                for (int i = 0; i < 1500; i += 100) {
                    for (int j = 0; j < 1500; j += 100) {
                        int color = colors[(i + j) % 3];
                        ImageView image = new ImageView(getActivity());
                        image.setScaleType(ImageView.ScaleType.CENTER);
                        image.setBackgroundColor(color);
                        LayoutParams lp = new RelativeLayout.LayoutParams(100, 100);
                        lp.setMargins(i, j, 0, 0);
                        image.setLayoutParams(lp);
                        container.addView(image);
                    }
                }
            }

            @Override
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN: {
                        currentX = (int) event.getRawX();
                        currentY = (int) event.getRawY();
                        break;
                    }
                    case MotionEvent.ACTION_MOVE: {
                        int x2 = (int) event.getRawX();
                        int y2 = (int) event.getRawY();
                        container.scrollBy(currentX - x2, currentY - y2);
                        currentX = x2;
                        currentY = y2;
                        break;
                    }
                    case MotionEvent.ACTION_UP: {
                        break;
                    }
                }
                return true;
            }
        }
    XML:
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            tools:context=".FloorPlanFrag">

            <RelativeLayout
                android:id="@+id/container"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" />

        </FrameLayout>

    Read the article

  • rails expiring cache

    - by ash34
    Hi, I entered some products data into a table using a migration. I need to expire the page and fragment cache when I update, add, or delete products from this table. I created a sweeper for this:
        class ProductSweeper < ActionController::Caching::Sweeper
          observe Product

          def after_create
            expire_cache
          end

          def after_save
            expire_cache
          end

          def after_update
            expire_cache
          end

          def after_destroy
            expire_cache
          end

          private

          def expire_cache
            expire_page(:controller => 'ProductsController', :action => 'index')
            expire_fragment 'listed_products'
          end
        end
    Then in script/console I updated the product name and saved. When I reload my app in the browser, it still gives me a cache hit:
        Cached fragment hit: views/listed_products (0.2ms)
    Can someone tell me how to expire this cache? I will not be adding, updating, or deleting products through a controller action. Thanks, ash
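    Worth noting for this setup: sweepers hook into the controller request cycle, so saves made from script/console never trigger them. A sketch of expiring the same keys by hand from the console, under the assumption that the default fragment naming shown in the log is in use (the page path is illustrative):
        # from script/console -- bypasses sweepers entirely
        ActionController::Base.new.expire_fragment('listed_products')
        ActionController::Base.expire_page('/products')  # illustrative path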

    Read the article

  • lexer skips a token

    - by Eugene Strizhok
    I am trying to do basic ANTLR-based scanning. I have a problem with a lexer not matching the tokens I want.
        lexer grammar DefaultLexer;

        ALPHANUM : (LETTER | DIGIT)+;

        ACRONYM  : LETTER '.' (LETTER '.')+;

        HOST     : ALPHANUM (('.' | '-') ALPHANUM)+;

        fragment LETTER
            : UNICODE_CLASS_LL
            | UNICODE_CLASS_LM
            | UNICODE_CLASS_LO
            | UNICODE_CLASS_LT
            | UNICODE_CLASS_LU;

        fragment DIGIT
            : UNICODE_CLASS_ND
            | UNICODE_CLASS_NL;
    For the grammar above, the input string "hello. world" yields only world, whereas I would expect to get both hello and world. What am I missing? Thanks.

    Read the article

  • Easy framework for OpenGL Shaders in C/C++

    - by Nils
    I just wanted to try out some shaders on a flat image. It turns out that writing a C program which just takes a picture as a texture and applies, let's say, a Gaussian blur as a fragment shader to it is not that easy: you have to initialize OpenGL, which is like 100 lines of code, then understand the GL buffers, etc. Also, to communicate with the windowing system one has to use GLUT, which is another framework. It turns out that Nvidia's FX Composer is nice for playing with shaders, but I would still like to have a simple C or C++ program which just applies a given fragment shader to an image and displays the result. Does anybody have an example, or is there a framework?
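    To give a sense of the floor for such a program, here is a rough minimal sketch in C with GLUT and GLEW - one fragment shader over one quad, with the texture upload, the actual blur kernel, and all error checking elided:
        #include <GL/glew.h>
        #include <GL/glut.h>

        static const char *fs_src =
            "uniform sampler2D img;"
            "void main() { gl_FragColor = texture2D(img, gl_TexCoord[0].st); }";

        static void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_QUADS);                       /* fixed-function vertex path */
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
            glEnd();
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("shader test");
            glewInit();

            GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(sh, 1, &fs_src, NULL);
            glCompileShader(sh);                     /* check GL_COMPILE_STATUS in real code */
            GLuint prog = glCreateProgram();
            glAttachShader(prog, sh);
            glLinkProgram(prog);
            glUseProgram(prog);

            /* load the image into a GL_TEXTURE_2D bound to unit 0 here */

            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }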

    Read the article

  • How to load images and fragments dynamically in LiveCycle Designer forms?

    - by John
    Hi there. I've created a couple of shared templates (.xdp) which will be shared among several clients. Obviously, each client has their own logo, and I'd like to set the logo upon form generation. I've managed to change the logo dynamically, although I'm not sure if my approach is good. In the XML datasource I've got this element:
        <ClientID>SomeNumber</ClientID>
    In the form itself I set the image href with this JavaScript code (in SomeHiddenTextField::calculate):
        HeaderLogo.value.image.href = $record.ClientID + "_logo.jpg";
    I've got the logos stored on the server in the same folder as the shared templates. Is this an alright approach to loading logos dynamically? I've been trying to achieve the same dynamic behaviour with each client's footer fragment, but I have been unable to figure out how to load these on demand. I could make each footer fragment into an image, but I'd like to avoid that if possible.

    Read the article

  • OAuth 2.0: Can a user-agent client avoid forwarding fragments?

    - by Bosh
    In the OAuth 2.0 draft specification, user-agent clients receive authorization in the form of a bearer token via redirection (from an authorization server) to a URL such as:
        HTTP/1.1 302 Found
        Location: http://example.com/rd#access_token=FJQbwq9&expires_in=3600
    According to Section 3.5.2 it is then the user-agent's job to GET the URL in question, but "The user-agent SHALL NOT include the fragment component with the request." In other words, as a result of the example redirection above, the user-agent should send:
        GET /rd HTTP/1.1
        Host: example.com
    without passing #access_token to the server. My question: what user agents behave this way? I thought redirection in Firefox, for example, would (logically) include the fragment in the GET request. Am I just wrong about this, or does the OAuth 2.0 specification rely on non-standard user-agent behavior?
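    For context on how the token is meant to be recovered despite never reaching the server: the page served at /rd reads it out of window.location.hash in script - a minimal sketch:
        // runs in the page served from http://example.com/rd
        var params = {};
        var pairs = window.location.hash.substring(1).split('&');
        for (var i = 0; i < pairs.length; i++) {
            var kv = pairs[i].split('=');
            params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
        }
        var accessToken = params['access_token'];  // never sent over the wire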

    Read the article

  • Removing DOM event handlers in long-running browser session

    - by Chris Beck
    I have a browser interface with a ul#contacts list on the left and div#contact property panel (email, phone) on the right. Click a contact in the list and my app makes an XHR request to get the contact property HTML fragment and update div#contact.innerHTML. Each contact fragment has an "Edit Contact" link. With JS, I progressively upgrade that link with an event listener that performs an XHR request to replace the static property panel with an in-place edit form. This can happen many times during a single browser session. How should I clean up my "Edit Contact" event listener? Do I need to remove it manually before the form overwrites the property panel? Or is the event listener cleaned up automatically when the contents of div#contact (and the node that I'm listening on) is overwritten? FWIW, I still consider IE6 to be part of my target market.
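    One pattern that sidesteps the cleanup question entirely (and IE6's well-known leak of listeners on discarded nodes) is event delegation: a single listener on the stable div#contact ancestor, so fragment contents can be overwritten freely. A jQuery-flavored sketch - the marker class and loader function are hypothetical:
        $('#contact').click(function (e) {
            var link = $(e.target).closest('a.edit-contact');  // hypothetical marker class
            if (!link.length) return;
            e.preventDefault();
            loadEditForm(link.attr('href'));  // hypothetical XHR helper
        });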

    Read the article

  • Distance between numpy arrays, columnwise

    - by Jaapsneep
    I have 2 arrays in 2D, where the column vectors are feature vectors. One array is of size F x A, the other F x B, where A << B. As an example, for A = 2 and F = 3 (B can be anything):
        arr1 = np.array( [[1, 4],
                          [2, 5],
                          [3, 6]] )

        arr2 = np.array( [[1, 4, 7, 10, ..],
                          [2, 5, 8, 11, ..],
                          [3, 6, 9, 12, ..]] )
    I want to calculate the distance between arr1 and a fragment of arr2 that is of equal size (in this case, 3x2), for each possible fragment of arr2. The column vectors are independent of each other, so I believe I should calculate the distance between each column vector in arr1 and a collection of column vectors ranging from i to i + A from arr2, and take the sum of these distances (not sure though). Does numpy offer an efficient way of doing this, or will I have to take slices from the second array and, using another loop, calculate the distance between each column vector in arr1 and the corresponding column vector in the slice?
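    For what it's worth, this sliding comparison can be done without an explicit Python loop - a sketch under the interpretation above (sum of per-column Euclidean distances, one score per offset), using sliding_window_view, which assumes NumPy >= 1.20:
        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view  # NumPy >= 1.20

        F, A, B = 3, 2, 10
        arr1 = np.arange(1, F * A + 1).reshape(A, F).T   # F x A, as in the example
        arr2 = np.arange(1, F * B + 1).reshape(B, F).T   # F x B

        # windows: F x (B-A+1) x A -- every length-A fragment of arr2
        windows = sliding_window_view(arr2, A, axis=1)

        diff = windows - arr1[:, None, :]                # broadcast against arr1
        per_column = np.sqrt((diff ** 2).sum(axis=0))    # (B-A+1) x A column distances
        totals = per_column.sum(axis=1)                  # one score per offset
        best_offset = totals.argmin()                    # closest fragment of arr2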

    Read the article

  • hadoop mapper static initialisation

    - by rakeshr
    Hi, I have a code fragment in which I am using a static code block to initialize a variable:
        public static class JoinMap
                extends Mapper<IntWritable, MbrWritable, LongWritable, IntWritable> {
            .......
            public static RTree rt = null;

            static {
                String rtreeFileName = "R.rtree";
                rt = new RTree(rtreeFileName);
            }

            public void map(IntWritable key, MbrWritable mbr, Context context)
                    throws IOException, InterruptedException {
                .........
                List elements = rt.overlaps(mbr.getRect());
                .......
            }
        }
    My problem is that the variable rt in the above code fragment is not getting initialised. Can anybody suggest a fix or an alternate way to initialise the variable? I don't want to initialise it inside my map function, since that slows down the entire process.
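    A minimal sketch of the usual alternative, assuming the new org.apache.hadoop.mapreduce API: override setup(), which the framework calls exactly once per task before the first map() call, so the cost is not paid per record:
        public static class JoinMap
                extends Mapper<IntWritable, MbrWritable, LongWritable, IntWritable> {

            private RTree rt;  // per-task instance, no static tricks needed

            @Override
            protected void setup(Context context)
                    throws IOException, InterruptedException {
                rt = new RTree("R.rtree");  // runs once, on the task's own node
            }

            public void map(IntWritable key, MbrWritable mbr, Context context)
                    throws IOException, InterruptedException {
                List elements = rt.overlaps(mbr.getRect());
                // ...
            }
        }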

    Read the article

  • Random select is not always returning a single row.

    - by Lieven
    The intention of the following (simplified) code fragment is to return one random row. Unfortunately, when we run this fragment in the query analyzer, it returns between zero and three results. As our input table consists of exactly 5 rows with unique IDs, and as we perform a select on this table where ID equals a random number, we are stumped that there would ever be more than one row returned. Note: among other things, we already tried casting the checksum result to an integer, with no avail.
        DECLARE @Table TABLE
        (
            ID  INTEGER IDENTITY (1, 1),
            FK1 INTEGER
        )

        INSERT INTO @Table
        SELECT 1 UNION ALL
        SELECT 2 UNION ALL
        SELECT 3 UNION ALL
        SELECT 4 UNION ALL
        SELECT 5

        SELECT *
        FROM @Table
        WHERE ID = ABS(CHECKSUM(NEWID())) % 5 + 1
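    For reference on why this happens: NEWID() in a WHERE clause is re-evaluated per row, so each of the 5 rows is compared against its own fresh random number and anywhere from 0 to 5 can match. A sketch of two fixes - evaluate the pick once, or sort randomly and take one row (the inline DECLARE initialization assumes SQL Server 2008+):
        -- evaluate the random pick exactly once
        DECLARE @pick INT = ABS(CHECKSUM(NEWID())) % 5 + 1
        SELECT * FROM @Table WHERE ID = @pick

        -- or the idiomatic one-liner
        SELECT TOP 1 * FROM @Table ORDER BY NEWID()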

    Read the article

  • rails - caches_action expire_action

    - by mark
    Hi, I want to expire a cached action and wondered how to generate the correct reference.
        # controller
        caches_action :index, :layout => false

        # generates this fragment, which works fine:
        views/0.0.0.0:3000/article/someid/posts

        # sweeper
        ...
        expire_action article_posts_path(:article_id => post.article)

        # results in this:
        Expired fragment: views//en/article/someid/posts (0.0ms)
    So this is almost OK, except the host is missing. What do I do to supply it to the expire_action method? Thanks in advance.
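    A hedged guess at the missing piece, not verified against this app: _path helpers deliberately omit the host, while _url helpers include it, so the sweeper may simply need
        expire_action article_posts_url(:article_id => post.article)
    (sweepers run outside a request, so Rails may also need default_url_options[:host] set for the helper to know which host to emit).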

    Read the article
