Search Results

Search found 749 results on 30 pages for 'occam pi'.


  • How to edit a StoryBoard to put it in a Button Style?

    - by pi-corellis
    Hello, I've created a storyboard for a button in blend that I want it to apply everytime the button is pressed, So I tried to create a style,I've been stucked for a long time now. here is the code of my Storyboard: <Storyboard> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleX)" Storyboard.TargetName="button"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CubicEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.25" Value="0.85"> <EasingDoubleKeyFrame.EasingFunction> <QuinticEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <QuarticEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleY)" Storyboard.TargetName="button"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CubicEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.25" Value="0.85"> <EasingDoubleKeyFrame.EasingFunction> <QuinticEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <QuarticEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> And Here is the code of my button style: <Style x:Key="ParametersButton" TargetType="ButtonBase" > <Setter Property="Background" Value="{StaticResource TransparentBrush}"/> <Setter Property="BorderBrush" Value="{StaticResource PhoneForegroundBrush}"/> <Setter Property="Foreground" Value="{StaticResource PhoneForegroundBrush}"/> <Setter Property="MinHeight" Value="72" /> <Setter Property="BorderThickness" Value="{StaticResource PhoneDefaultBorderThickness}"/> <Setter Property="FontFamily" Value="{StaticResource PhoneFontFamilyNormal}"/> <Setter Property="FontSize" Value="{StaticResource PhoneFontSizeMediumLarge}"/> <Setter Property="Padding" Value="0,15,15,0"/> <Setter Property="Height" Value="72"/> <Setter Property="Width" Value="72"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ButtonBase"> <Grid Background="Transparent" > <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"/> <VisualState x:Name="MouseOver"/> <VisualState x:Name="UnPressed"/> <VisualState x:Name="Pressed"> <Storyboard> <!--Here is where I want to insert my StoryBoard--> </Storyboard> </VisualState> <VisualState x:Name="Disabled"/> </VisualStateGroup> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"/> <VisualState x:Name="Unfocused"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> <Border x:Name="ButtonBackground" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="0" CornerRadius="0" Background="#FF1BA1E2" Margin="{StaticResource PhoneTouchTargetOverhang}" RenderTransformOrigin="0.5,0.5"> <ContentControl x:Name="foregroundContainer" FontFamily="{TemplateBinding FontFamily}" Foreground="{TemplateBinding Foreground}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" 
VerticalAlignment="{TemplateBinding VerticalContentAlignment}" FontSize="{TemplateBinding FontSize}" Padding="{TemplateBinding Padding}" Content="{TemplateBinding Content}" ContentTemplate="{TemplateBinding ContentTemplate}"/> </Border> </Grid> </ControlTemplate> </Setter.Value> </Setter> How can I proceed ? Thanks, Renaud

    Read the article

  • Could not load SWT library on Windows 32-bit

    - by Firzen
    I am almost done with one Java project that I have been developing on Linux. Now I need to build and test it on Windows. So I have installed Eclipse on Windows XP 32-bit, and imported my project. All dependencies of project are in jar files in lib folder, and on Linux everything works well, but on Windows XP I get following error: Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: no swt-pi-gtk-4234 in java.library.path no swt-pi-gtk in java.library.path Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk-4234.dll Can't load library: C:\Documents and Settings\firzen\.swt\lib\win32\x86\swt-pi-gtk.dll at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331) at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240) at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:22) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54) at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133) at gui.Frontend.<init>(Frontend.java:51) at Fighter.main(Fighter.java:18) I have searched for these DLLs, but I have failed to find them. Where can I download these DLL files? Thanks in advance.

    Read the article

  • Creating independent process!

    - by Neha
    I am trying to create a process from a service in C++. The new process is created as a child process, but I want to create an independent process, not a child process. I am using the CreateProcess function for this. Since the new process I create is a child process, when I try to kill the process tree at the service level it kills the child process too, which I don't want. I want the new process to run independently of the service. Please advise. Thanks. Code: STARTUPINFO si; PROCESS_INFORMATION pi; ZeroMemory( &si, sizeof(si) ); si.cb = sizeof(si); // Start the child process. ZeroMemory( &pi, sizeof(pi) ); si.dwFlags = STARTF_USESHOWWINDOW; if(bRunOnWinLogonDesktop) { if(csDesktopName.empty()) si.lpDesktop = _T("winsta0\\default"); else _tcscpy(si.lpDesktop, csDesktopName.c_str()); } if(bHide) si.wShowWindow = SW_HIDE; /* maybe even SW_HIDE */ else si.wShowWindow = SW_SHOW; /* maybe even SW_HIDE */ TCHAR szCmdLine[512]; _tcscpy(szCmdLine, csCmdLine.c_str()); if( !CreateProcess( NULL, szCmdLine, NULL, NULL, FALSE, CREATE_NEW_PROCESS_GROUP, NULL, NULL, &si, &pi ) )

    Read the article

  • PHP: extract email and name from data file

    - by pi-2r
    I need to extract the name and the e-mail address from a data file. The file contains more than 500 lines, and I want to extract these two pieces of information from almost all of the data. I would like to use preg_match_all, but my function doesn't work ... $chaine = " ----------------- 11/21/12 16:06:54 tcp static-qvn-qvt-127041 MAIL [email protected] NAME tata1 ----------------- 11/21/12 16:06:54 tcp static-qvn-qvt-127041 MAIL [email protected] NAME tata2 * ----------------- 11/21/12 16:06:54 tcp static-qvn-qvt-127041 MAIL [email protected] NAME tata3 "; //$chaine =" #76:50#89:1#86:50#49:1#84:22"; $motif="/MAIL([a-z]{2,4}+)NAME([a-z]{2,4}+)/"; preg_match_all($motif,$chaine,$out); $nb=count($out[0]); for($i=0;$i<$nb;$i++) { echo $out[0][$i].'<br/>'; }
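    A likely culprit is that the pattern allows no whitespace between the keywords and their values and limits both captures to 2-4 lowercase letters, which matches neither the e-mail nor most names. As a rough sketch of the capture logic (in Python rather than the asker's PHP, with made-up sample data), something along these lines would pull out each MAIL/NAME pair; the PHP equivalent would be a pattern like /MAIL\s+(\S+)\s+NAME\s+(\S+)/:

        import re

        # Hypothetical sample laid out like the log excerpt above.
        sample = """
        -----------------
        11/21/12 16:06:54 tcp static-qvn-qvt-127041
        MAIL user1@example.com
        NAME tata1
        -----------------
        11/21/12 16:06:54 tcp static-qvn-qvt-127041
        MAIL user2@example.com
        NAME tata2
        """

        # Allow whitespace after each keyword and capture any run of
        # non-space characters instead of 2-4 lowercase letters.
        pattern = re.compile(r"MAIL\s+(\S+)\s+NAME\s+(\S+)")

        for email, name in pattern.findall(sample):
            print(email, name)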

    Read the article

  • send xml data from j2me app to remote server

    - by pi
    Hi, I have a J2ME application which generates XML data (from an object) that needs to be sent to a remote server/machine. How do I get this XML, or the object, through to the remote server? Thanks. In case it helps, the receiving application on the remote server is PHP-powered. Thanks again.

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
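    One detail worth isolating is the texel-to-UV mapping: under Direct3D 9 conventions the centre of texel (x, y) sits at ((x + 0.5) / width, (y + 0.5) / height), so the rasterisation loop and the atlas coordinates need to agree on that half-texel offset. A minimal sketch of the two conversions (Python, purely illustrative):

        def texel_center_uv(x, y, width, height):
            # UV coordinate of the centre of texel (x, y), using the
            # half-texel convention of Direct3D 9 samplers.
            return (x + 0.5) / width, (y + 0.5) / height

        def uv_to_texel(u, v, width, height):
            # Inverse mapping: the texel a UV coordinate falls into.
            return int(u * width), int(v * height)

    Beyond the offset, atlas seams are commonly hidden with a gutter: after filling each chart, the border colours are dilated outward by a texel or two so that bilinear filtering at chart edges does not blend in the background colour.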

    Read the article

  • PHP and MySQL on IIS7: can't find php_mcrypt.dll in php.ini

    - by user46250
    I have installed PHP with Microsoft Web PI, and then I installed MySQL. According to http://learn.iis.net/page.aspx/353/install-and-configure-mysql-for-php-applications-on-iis-7/ I have to uncomment the following lines by removing the semicolon: extension=php_mysqli.dll extension=php_mbstring.dll extension=php_mcrypt.dll But there is no extension=php_mcrypt.dll line in the php.ini installed by Web PI. Should I add it by hand, and if so, where? And where should I check that php_mcrypt.dll exists? It seems nobody knows; would it be better to ask on a Microsoft forum?

    Read the article

  • Optimizing a 3D World Javascript Animation

    - by johnny
    Hi! I've recently come up with the idea to create a tag cloud like animation shaped like the earth. I've extracted the coastline coordinates from ngdc.noaa.gov and wrote a little script that displayed it in my browser. Now as you can imagine, the whole coastline consists of about 48919 points, which my script would individually render (each coordinate being represented by one span). Obviously no browser is capable of rendering this fluently - but it would be nice if I could render as much as let's say 200 spans (twice as much as now) on my old p4 2.8 Ghz (as a representative benchmark). Are there any javascript optimizations I could use in order to speed up the display of those spans? One 'coordinate': <div id="world_pixels"> <span id="wp_0" style="position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;" onmouseover="magnify_world_pixel('wp_0');" onmouseout="shrink_world_pixel('wp_0');" onClick="set_askcue_bar('', 'new york')">new york</span> </div> The script: $(document).ready(function(){ world_pixels = $("#world_pixels span"); world_pixels.spin(); setInterval("world_pixels.spin()",1500); }); z = new Array(); $.fn.spin = function () { for(i=0; i<this.length; i++) { /*actual screen coordinates: x/y/z --> left/font-size/top 300/13/0 300/6/300 | / |/ 0/13/300 ----|---- 600/13/300 /| / | 300/20/300 300/13/600 */ /*scale font size*/ var resize_x = 1; /*scale width*/ var resize_y = 2.5; /*scale height*/ var resize_z = 2.5; var from_left = 300; var from_top = 20; /*actual math coordinates: 1 -1 | / |/ 1 ----|---- -1 /| / | 1 -1 */ //var get_element = document.getElementById(); //var font_size = parseInt(this.style.fontSize); var font_size = parseInt($(this[i]).css("font-size")); var left = parseInt($(this[i]).css("left")); if (coast_line_array[i][1]) { } else { var top = parseInt($(this[i]).css("top")); z[i] = from_top + (top - (300 * resize_z)) / (300 * resize_z); //global beacause it's used in other functions later on var top_new = from_top + Math.round(Math.cos(coast_line_array[i][2]/90*Math.PI) * (300 * resize_z) + (300 * resize_z)); $(this[i]).css("top", top_new); coast_line_array[i][3] = 1; } var x = resize_x * (font_size - 13) / 7; var y = from_left + (left- (300 * resize_y)) / (300 * resize_y); if (y >= 0) { this[i].phi = Math.acos(x/(Math.sqrt(x^2 + y^2))); } else { this[i].phi = 2*Math.PI - Math.acos(x/(Math.sqrt(x^2 + y^2))); i } this[i].theta = Math.acos(z[i]/Math.sqrt(x^2 + y^2 + z[i]^2)); var font_size_new = resize_x * Math.round(Math.sin(coast_line_array[i][4]/90*Math.PI) * Math.cos(coast_line_array[i][0]/180*Math.PI) * 7 + 13); var left_new = from_left + Math.round(Math.sin(coast_line_array[i][5]/90*Math.PI) * Math.sin(coast_line_array[i][0]/180*Math.PI) * (300 * resize_y) + (300 * resize_y)); //coast_line_array[i][6] = coast_line_array[i][7]+1; if ((coast_line_array[i][0] + 1) > 180) { coast_line_array[i][0] = -180; } else { coast_line_array[i][0] = coast_line_array[i][0] + 0.25; } $(this[i]).css("font-size", font_size_new); $(this[i]).css("left", left_new); } } resize_x = 1; function magnify_world_pixel(element) { $("#"+element).animate({ fontSize: resize_x*30+"px" }, { duration: 1000 }); } function shrink_world_pixel(element) { $("#"+element).animate({ fontSize: resize_x*6+"px" }, { duration: 1000 }); } I'd appreciate any suggestions to optimize my script, maybe there is even a totally different approach on how to go about this. 
The whole .js file which stores the array for all the coordinates is available on my page, the file is about 2.9 mb, so you might consider pulling the .zip for local testing: metaroulette.com/files/31218.zip metaroulette.com/files/31218.js P.S. the php I use to create the spans: <?php //$arbitrary_characters = array('a','b','c','ddsfsdfsdf','e','f','g','h','isdfsdffd','j','k','l','mfdgcvbcvbs','n','o','p','q','r','s','t','uasdfsdf','v','w','x','y','z','0','1','2','3','4','5','6','7','8','9',); $arbitrary_characters = array('cat','table','cool','deloitte','askcue','what','more','less','adjective','nice','clinton','mars','jupiter','testversion','beta','hilarious','lolcatz','funny','obama','president','nice','what','misplaced','category','people','religion','global','skyscraper','new york','dubai','helsinki','volcano','iceland','peter','telephone','internet', 'dialer', 'cord', 'movie', 'party', 'chris', 'guitar', 'bentley', 'ford', 'ferrari', 'etc', 'de facto'); for ($i=0; $i<96; $i++) { $arb_digits = rand (0,45); $arbitrary_character = $arbitrary_characters[$arb_digits]; //$arbitrary_character = "."; echo "<span id=\"wp_$i\" style=\"position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;\" onmouseover=\"magnify_world_pixel('wp_$i');\" onmouseout=\"shrink_world_pixel('wp_$i');\" onClick=\"set_askcue_bar('', '$arbitrary_character')\">$arbitrary_character</span>\n"; } ?>

    Read the article

  • Help Understanding Function

    - by Fred F.
    What does the following function perform? public static double CleanAngle(double angle) { while (angle < 0) angle += 2 * System.Math.PI; while (angle > 2 * System.Math.PI) angle -= 2 * System.Math.PI; return angle; } This is how it is used with ATan2. I believe the actually values passed to ATan2 are always positive. static void Main(string[] args) { int q = 1; //'x- and y-coordinates will always be positive values //'therefore, do i need to "clean"? foreach (Point oPoint in new Point[] { new Point(8,20), new Point(-8,20), new Point(8,-20), new Point(-8,-20)}) { Debug.WriteLine(Math.Atan2(oPoint.Y, oPoint.X), "unclean " + q.ToString()); Debug.WriteLine(CleanAngle(Math.Atan2(oPoint.Y, oPoint.X)), "cleaned " + q.ToString()); q++; } //'output //'unclean 1: 1.19028994968253 //'cleaned 1: 1.19028994968253 //'unclean 2: 1.95130270390726 //'cleaned 2: 1.95130270390726 //'unclean 3: -1.19028994968253 //'cleaned 3: 5.09289535749705 //'unclean 4: -1.95130270390726 //'cleaned 4: 4.33188260327232 }
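    In short, the function normalises an angle into the range [0, 2π]: Atan2 returns values between -π and π, and the two while loops add or subtract full turns until the result is non-negative and no larger than 2π. A quick sketch of the same idea (Python, illustrative only) reproduces the numbers in the output comment:

        import math

        def clean_angle(angle):
            # Map any angle onto the equivalent value in [0, 2*pi).
            return angle % (2 * math.pi)

        # atan2 gives negative results for points with negative y;
        # cleaning lifts those results up by one full turn.
        for y, x in [(20, 8), (20, -8), (-20, 8), (-20, -8)]:
            raw = math.atan2(y, x)
            print(round(raw, 5), round(clean_angle(raw), 5))

    If the x- and y-coordinates really are always positive, Atan2 already returns a value in [0, π/2] and cleaning is a no-op; in the output above, only the points with negative Y produce negative angles that get shifted.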

    Read the article

  • Help understanding some OpenGL stuff

    - by shinjuo
    I am working with some code to create a triangle that moves with arrow keys. I want to create a second object that moves independently. This is where I am having trouble, I have created the second actor, but cannot get it to move. There is too much code to post it all so I will just post a little and see if anyone can help at all. ogl_test.cpp #include "platform.h" #include "srt/scheduler.h" #include "model.h" #include "controller.h" #include "model_module.h" #include "graphics_module.h" class blob : public actor { public: blob(float x, float y) : actor(math::vector2f(x, y)) { } void render() { transform(); glBegin(GL_TRIANGLES); glVertex3f(0.25f, 0.0f, -5.0f); glVertex3f(-.5f, 0.25f, -5.0f); glVertex3f(-.5f, -0.25f, -5.0f); glEnd(); end_transform(); } void update(controller& c, float dt) { if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; } if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; } if (c.up_key) { v += .1f * dt; c.up_key = false; } if (c.down_key) { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; } actor::update(c, dt); } }; class enemyOne : public actor { public: enemyOne(float x, float y) : actor(math::vector2f(x, y)) { } void render() { transform(); glBegin(GL_TRIANGLES); glVertex3f(0.25f, 0.0f, -5.0f); glVertex3f(-.5f, 0.25f, -5.0f); glVertex3f(-.5f, -0.25f, -5.0f); glEnd(); end_transform(); } void update(controller& c, float dt) { if (c.left_key) { rho += pi / 9.0f * dt; c.left_key = false; } if (c.right_key) { rho -= pi / 9.0f * dt; c.right_key = false; } if (c.up_key) { v += .1f * dt; c.up_key = false; } if (c.down_key) { v -= .1f * dt; if (v < 0.0) { v = 0.0; } c.down_key = false; } actor::update(c, dt); } }; int APIENTRY WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, char* lpCmdLine, int nCmdShow ) { model m; controller control(m); srt::scheduler scheduler(33); srt::frame* model_frame = new srt::frame(scheduler.timer(), 0, 1, 2); srt::frame* render_frame = new srt::frame(scheduler.timer(), 1, 1, 2); model_frame->add(new model_module(m, control)); render_frame->add(new graphics_module(m)); scheduler.add(model_frame); scheduler.add(render_frame); blob* prime = new blob(0.0f, 0.0f); m.add(prime); m.set_prime(prime); enemyOne* primeTwo = new enemyOne(2.0f, 0.0f); m.add(primeTwo); m.set_prime(primeTwo); scheduler.start(); control.start(); return 0; } model.h #include <vector> #include "vec.h" const double pi = 3.14159265358979323; class controller; using math::vector2f; class actor { public: vector2f P; float theta; float v; float rho; actor(const vector2f& init_location) : P(init_location), rho(0.0), v(0.0), theta(0.0) { } virtual void render() = 0; virtual void update(controller&, float dt) { float v1 = v; float theta1 = theta + rho * dt; vector2f P1 = P + v1 * vector2f(cos(theta1), sin(theta1)); if (P1.x < -4.5f || P1.x > 4.5f) { P1.x = -P1.x; } if (P1.y < -4.5f || P1.y > 4.5f) { P1.y = -P1.y; } v = v1; theta = theta1; P = P1; } protected: void transform() { glPushMatrix(); glTranslatef(P.x, P.y, 0.0f); glRotatef(theta * 180.0f / pi, 0.0f, 0.0f, 1.0f); //Rotate about the z-axis } void end_transform() { glPopMatrix(); } }; class model { private: typedef std::vector<actor*> actor_vector; actor_vector actors; public: actor* _prime; model() { } void add(actor* a) { actors.push_back(a); } void set_prime(actor* a) { _prime = a; } void update(controller& control, float dt) { for (actor_vector::iterator i = actors.begin(); i != actors.end(); ++i) { (*i)->update(control, dt); } } void render() { for (actor_vector::iterator i = 
actors.begin(); i != actors.end(); ++i) { (*i)->render(); } } };

    Read the article

  • Can I set a property on an object that is only declared on the instance type, when I don't know the

    - by WilberBeast
    Let me explain. I have a List into which I am adding various ASP.NET controls. I then wish to loop through the list and set a CssClass; however, not every Control supports the CssClass property. What I would like to do is test whether the underlying instance type supports the CssClass property and set it, but I'm not sure how to do the conversion prior to setting the property since I don't know the type of each Control object. I know that I can use typeof or x.GetType(), but I'm not sure how to use these to convert the controls back to the instance type in order to test for and then set the property. Actually I seem to have solved this, so I thought that I would post the code here for others. foreach (Control c in controlList) { PropertyInfo pi = c.GetType().GetProperty("CssClass"); if (pi != null) pi.SetValue(c, "desired_css_class", null); } I hope that this helps someone else, as it has taken me hours to research these 2 lines of code. Cheers Steve

    Read the article

  • How do I create an OpenCV image from a PIL image?

    - by scrible
    I want to do some image processing with OpenCV (in Python), but I have to start with a PIL Image object, so I can't use the cvLoadImage() call, since that takes a filename. This recipe (adapted from http://opencv.willowgarage.com/wiki/PythonInterface) does not work because cvSetData complains about argument 2 of type 'void *'. Any ideas? from opencv.cv import * from PIL import Image pi = Image.open('foo.png') # PIL image ci = cvCreateImage(pi.size, IPL_DEPTH_8U, 1) # OpenCV image data = pi.tostring() cvSetData(ci, data, len(data)) I think the last argument to cvSetData is wrong too, but I am not sure what it should be.
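    For what it's worth, the snippet above targets the old SWIG-based opencv.cv bindings. With the cv2/NumPy interface that later OpenCV releases expose, the conversion is just an array view plus a channel reorder; a sketch, assuming a 3-channel image:

        import numpy as np
        import cv2
        from PIL import Image

        pil_image = Image.open('foo.png').convert('RGB')

        # PIL keeps pixels in RGB order while OpenCV expects BGR,
        # so build a NumPy array and swap the channel order.
        cv_image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)

        cv2.imwrite('foo_copy.png', cv_image)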

    Read the article

  • C# Abort()ing threads on exit for a Form

    - by Gio Borje
    So far I have this code run when the X button is clicked, but I'm not sure if this is the correct way to terminate threads on a form on exit. Type t = this.GetType(); foreach (PropertyInfo pi in t.GetProperties()) { if (pi.GetType() == typeof(Thread)) { MethodInfo mi = pi.GetType().GetMethod("Abort"); mi.Invoke(null, new object[] {}); } } I keep getting this error: "An attempt has been made to free an RCW that is in use. The RCW is in use on the active thread or another thread. Attempting to free an in-use RCW can cause corruption or data loss."

    Read the article

  • C lang. -- Error: Segmentation fault

    - by user233542
    I don't understand why this would give me a seg fault. Any ideas? This is the function that returns the signal to stop the program (the other function it calls is below): double bisect(double A0,double A1,double Sol[N],double tol,double c) { double Amid,shot; while (A1-A0 > tol) { Amid = 0.5*(A0+A1); shot = shoot(Sol, Amid, c); if (shot==2.*Pi) { return Amid; } if (shot > 2.*Pi){ A1 = Amid; } else if (shot < 2.*Pi){ A0 = Amid; } } return 0.5*(A1+A0); } double shoot(double Sol[N],double A,double c) { int i,j; /* Initial Conditions */ for (i=0;i for (i=buff+2;i return Sol[i-1]; } buff, l, N are defined using a #define statement. l = 401, buff = 50, N = 2000 Thanks
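    The two for loops in shoot are shown without their bounds, but the bisect routine itself is ordinary bisection on the shooting parameter. A compact sketch of that control flow (Python, illustrative, with shoot passed in as a callable):

        import math

        def bisect(shoot, a0, a1, tol):
            # Mirror of the C logic: shrink [a0, a1] until it is narrower
            # than tol, moving the upper bound down when the shot exceeds
            # 2*pi and the lower bound up otherwise.
            while a1 - a0 > tol:
                mid = 0.5 * (a0 + a1)
                value = shoot(mid)
                if value == 2 * math.pi:
                    return mid
                if value > 2 * math.pi:
                    a1 = mid
                else:
                    a0 = mid
            return 0.5 * (a0 + a1)

    In the C version, the usual suspect for a crash of this shape is a loop in shoot indexing Sol outside 0..N-1, so the loop bounds are worth checking against N.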

    Read the article

  • Optimizing MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. Records are inserted at a very fast rate and are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this: select url from website ws, ping pi where ws.idproxy = pi.idproxy and pi.entrytime > curdate() - 3 and contentping+tcpping is not null group by url having sum(contentping+tcpping)/(count(*)-count(errortype)) < 500 and count(*) > 3 and count(errortype)/count(*) < .15 order by sum(contentping+tcpping)/(count(*)-count(errortype)) asc; I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.

    Read the article

  • Unwanted automated creation of new instances of an activity class

    - by Marko
    I have an activity (called Sender) with the most basic UI, only a button that sends a message when clicked. In the onClickListener I only call this method: private void sendSMS(String msg) { PendingIntent pi = PendingIntent.getActivity(this, 0, new Intent(this, Sender.class), 0); PendingIntent pi = PendingIntent.getActivity(this, 0, myIntent, 0); SmsManager sms = SmsManager.getDefault(); sms.sendTextMessage("1477", null, msg, pi, null); } This works ok, the message is sent but every time a message is sent a new instance of Sender is started on top of the other. If I call sendSMS method three times, three new instances are started. I'm quite new to android so I need some help with this, I only want the same Sender to be on all the time

    Read the article

  • How to extract object reference from property access lambda

    - by Jim C
    Here's a follow-up question to http://stackoverflow.com/questions/2820660/get-name-of-property-as-a-string. Given a method Foo (error checking omitted for brevity): // Example usage: Foo(() => SomeClass.SomeProperty) // Example usage: Foo(() => someObject.SomeProperty) void Foo(Expression<Func<T>> propertyLambda) { var me = propertyLambda.Body as MemberExpression; var pi = me.Member as PropertyInfo; bool propertyIsStatic = pi.GetGetMethod().IsStatic; object owner = propertyIsStatic ? me.Member.DeclaringType : ???; ... // Execute property access object value = pi.GetValue(owner, null); } I've got the static property case working but don't know how to get a reference to someObject in the instance property case. Thanks in advance.

    Read the article

  • Makefile: expand dependencies

    - by Danyel
    First off, the title is very generic because there are just tons of ways of how to possibly solve this. However, I'm looking for a clean and neat way. Situation: I have two equal object files foo.o and foo-pi.o, the latter of which is position-independent (compiled with -fPIC). Both depend on foo.h and bar.h. Problem: How do I, without code duplication, declare dependency of all foo*.o to bar.h? Solutions so far: $(shell bash -c 'echo -ne foo{-pi,}.o'}: bar.h $(addsuffix .o, $(addprefix fo, o-pi o)): bar.h The first solution is not portable on systems that don't support bash, the second is a dirty solution since I could not figure out how to use empty strings in addprefix.

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: With this post you create an unsupported scenario by Microsoft. It will break your support for this server with Microsoft. So handle with care. I am the administrator an a TFS environment with a lot of Project Collections. In the supported configuration of Microsoft 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed. Jim Lamb created a post how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the API of TFS. When you install a new build server via the UI, you do the following steps Register the build service (with this you hook the windows server into the build server environment) Add a new build controller Add a new build agent So in pseudo code, the code would look like foreach (projectCollection in GetAllProjectCollections) {       CreateNewWindowsService();       RegisterService();       AddNewController();       AddNewAgent(); } The following code fragements show you the most important parts of the method implementations. Attached is the full project. CreateNewWindowsService We create a new windows service with the SC command via the Diagnostics.Process class:             var pi = new ProcessStartInfo("sc.exe")                         {                             Arguments =                                 string.Format(                                     "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe              /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",                                     serviceHostName, tpcName)                         };            Process.Start(pi);             pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);            Process.Start(pi); RegisterService The trick in this method is that we set the NamedInstance static property. This property is Internal, so we need to set it through reflection. To get information on these you need nice Microsoft friends and the .Net reflector .             // Indicate which build service host instance we are using            typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance",              System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName });             // Create the build service host            serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);            serviceHost.Save();             // Register the build service host            BuildServiceHostUtilities.Register(serviceHost, user, password); AddNewController and AddNewAgent Once you have the BuildServerHost, the rest is pretty straightforward. There are methods on the BuildServerHost to modify the controllers and the agents                 controller = serviceHost.CreateBuildController(controllerName);                 agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);                controller.AddBuildAgent(agent); You have now seen the highlights of the application. If you need it and want to have sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs

    Read the article

  • NetBeans Podcast 69

    - by TinuA
    Podcast Guests: Terrence Barr, Simon Ritter, Jaroslav Tulach (It's an all-Oracle lineup!) Download mp3: 47 Minutes – 39.5 mb Subscribe on iTunes NetBeans Community News with Geertjan and Tinu If you missed the first two Java Virtual Developer Day events in early May, there's still one more LIVE training left on May 28th. Sign up here to participate live in the APAC time zone or watch later ON DEMAND. Video: Get started with Vaadin development using NetBeans IDE NetBeans IDE was at JavaCro 2014 and at Hippo Get-together 2014 Another great lineup is in the works for NetBeans Day at JavaOne 2014. More details coming soon! NetBeans' Facebook page is almost at 40,000 Likes! Help us crack that milestone in the next few weeks! Other great ways to stay updated about NetBeans? Twitter and Google+. 09:28 / Terrence Barr - What to Know about Java Embedded Terrence Barr, a Senior Technologist and Principal Product Manager for Embedded and Mobile technologies at Oracle, discusses new features of the Java SE Embedded and Java ME Embedded platforms, and sheds some light on the differences between them and what they have to offer to developers. Learn more about Java SE Embedded Tutorial: Using Oracle Java SE Embedded Support in NetBeans IDE Learn more about Java ME Embedded Video: NetBeans IDE Support for Java ME 8 Video: Installing and Using Java ME SDK 8.0 Plugins in NetBeans IDE Follow Terrence Barr to keep up with news in the Embedded space: Blog and Twitter 26:02 / Simon Ritter - A Massive Serving of Raspberry Pi Oracle's Raspberry Pi virtual course is back by popular demand! Simon Ritter, the head of Oracle's Java Technology Evangelism team, chats about the second run of the free Java Embedded course (starting May 30th), what participants can expect to learn, NetBeans' support for Java ME development, and other Java trainings coming to a desktop, laptop or user group near you. Sign up for the Oracle MOOC: Develop Java Embedded Applications Using Raspberry Pi Find out when Simon Ritter and the Java Evangelism team are coming to a Java event or JUG in your area--follow them on Twitter: Simon Ritter Angela Caicedo Steven Chin Jim Weaver 36:58 / Jaroslav Tulach - A Perfect Translation Jaroslav Tulach returns to the NetBeans podcast with tales about the Japanese translation of the Practical API Design book, which he contends surpasses all previous translations, including the English edition! Order "Practical API Design" (Japanese Version)  Find out why the Japanese translation is the best edition yet *Have ideas for NetBeans Podcast topics? Send them to ">nbpodcast at netbeans dot org. *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly thanks to Yoshio Terada for the photos, I didn't bother bringing a camera with me so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday.  We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java.  At midday the show really started with the Strategy and Technical Keynotes.  This was always going to be tougher job than some years because there was no big shiny ball to reveal to the audience.  With the Java EE 7 spec being finalised a few months ago and Java SE 8, Java ME 8 and JDK8 not due until the start of next year there was not going to be any big announcement.  I thought both keynotes worked really well each focusing on the things most important to Java developers: Strategy One of the things that is becoming more and more prominent in many companies marketing is the Internet of Things (IoT).  We've moved from the conventional desktop/laptop environment to much more mobile connected computing with smart phones and tablets.  The next wave of the internet is not just billions of people connected, but 10s or 100s of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine.  This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges.  As Peter Utzschneider explained it's about three Vs: Volume, Velocity and Value.  All these devices will create huge volumes of data at very high speed; to avoid being overloaded these devices will need some sort of processing capabilities that can filter the useful data from the redundant.  The raw data then needs to be turned into useful information that has value.  To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated.  This is where Java plays a pivotal role, write once, run everywhere becomes essential, having nine million developers fluent in the language makes it the defacto lingua franca of IoT.  There will be lots more information on how this will become a reality, so watch this space. Technical How do we make the IoT a reality, technically?  Using the game of chess Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do.  Using Java EE on the back end, Java SE and JavaFX on the desktop and Java ME Embedded and JavaFX on devices they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (that's included with JDK8) to make a 3D animated Duke chess board.  Jasper also unveiled the "DukePad" a home made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth shattering CPU performance (about the same level as a mid 1990s Pentium), it does have really quite good GPU performance so the GUI works really well.  The plans are all open sourced and available here.  One small, but very significant announcement was that Java SE will now be included with the NOOB and Raspbian Linux distros provided by the Raspberry Pi foundation (these can be found here).  No more hassle having to download and install the JDK after you've flashed your SD card OS image.  The finale was the Raspberry Pi powered chess playing robot.  
Really very, very cool.  I talked to Jasper about this and he told me each of the chess pieces had been 3D printed and then he had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!)  The way the robot arm worked was very impressive as it did not have any positioning data (like a potentiometer connected to each motor), but relied purely on carefully calibrated timings to get the arm to the right place.  Having done things like this myself in the past I know how easy it is to find a small error gets magnified into very big mistakes. Here's some pictures from the keynote: The "Dukepad" architecture Nice clear perspex case so you can see the innards. The very nice 3D chess set.  Maya's obviously a great tool. Read the full article here. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Simon Ritter,Java One,OOW,Oracle OpenWorld,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Sphere Texture Mapping shows visible seams

    - by AvengerDr
    As you can see from the above picture there is a visible seam in the texture mapping. The underlying mesh is a geosphere based on octahedron subdivisions. On that particular latitude, vertices have been duplicated. However there still is a visible seam. Here is how I calculate the UV coordinates: float longitude = (float)Math.Atan2(normal.X, -normal.Z); float latitude = (float)Math.Acos(normal.Y); float u = (float)(longitude / (Math.PI * 2.0) + 0.5); float v = (float)(latitude / Math.PI); Is this a problem in the coordinates or a mipmapping issue?
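    The usual cause of a seam like this is interpolation across the longitude wrap: a triangle whose vertices sit on both sides of u = 0/1 interpolates across nearly the whole texture instead of across the seam. A common fix is to detect such triangles and shift their low-u vertices by +1, relying on the sampler's wrap mode; a rough sketch of both the mapping and the fix (Python, illustrative only):

        import math

        def sphere_uv(nx, ny, nz):
            # Same formulas as above, for a unit-length normal.
            u = math.atan2(nx, -nz) / (2.0 * math.pi) + 0.5
            v = math.acos(ny) / math.pi
            return u, v

        def unwrap_triangle(uvs, threshold=0.5):
            # If the triangle spans the u = 0/1 seam, push its small-u
            # vertices past 1.0 so interpolation stays local; texture
            # wrapping maps u > 1 back onto the image.
            us = [u for u, _ in uvs]
            if max(us) - min(us) > threshold:
                return [(u + 1.0 if u < threshold else u, v) for u, v in uvs]
            return uvs

    The poles usually need separate handling as well, since u is effectively undefined where the normal points straight up or down.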

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspxI received many interesting feedbacks on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2,3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but ways better that the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got ways worse (increasing more than 10 times) and so I did not commented on that part, wanting to investigate the issua a bit more in detail. It seems that increasing the priority of the application thread impacts negatively the I2C communication. I tried to use also the Linux-based implementation (using a different Galileo board since the one provided by MS seems to use a different firmware) and the results of running the sample blink sketch modified to use pin 2 and blink the led for 1ms are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1 (320% compared to 150% using Windows but far from the 100.1% we got with the 8-bit Arduino). Both systems were not under load during the test, maybe loading some applications that use part of the CPU time would make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development. The Galileo running the Linux-based stack or running Windows for IoT is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors but the support for connectivity is limited and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like APMQS or MQTT) is still big and the rich OS underneath seems to not provide any help doing that.Why should I use sockets and can't access all the high level connectivity features we have on "full" Windows?I know that it's possible to use some third party libraries, try to build them using the Windows For IoT SDK etc. but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people that really want to design devices that leverage internet connectivity and the cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc. 
I know that those components may require additional storage and memory etc. So making the OS componentizable (or, at least, provide a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide and almost any new development board those days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offer to provide, at the end, a better product or service to their customers. Marketing is important, but can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other side being able to run a sketch designed for an 8 bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people that just experimented with sensors etc. on Arduino but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.   *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi , try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • determine collision angle on a rotating body

    - by jorb
    update: new diagram and updated description I have a contact listener set up to try and determine the side that a collision happened at relative to the a bodies rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain of the atan2 function html canvas coordinates and the Box2d rotation information. I know I have to account for this somehow... SS below questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following javascript so far: Ship.prototype.onCollide = function (other_ent,cx,cy) { var pos = this.body.GetPosition(); //collision position relative to body var d_cx = pos.x - cx; var d_cy = pos.y - cy; //length of initial vector var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2)); //body angle - can over rotate hence mod 2*Pi var ang = this.body.GetAngle() % (Math.PI * 2); //vector representing body's angle - same magnitude as the first var b_vx = len * Math.cos(ang); var b_vy = len * Math.sin(ang); //dot product of the two vectors var dot_prod = d_cx * b_vx + d_cy * b_vy; //new calculation of difference in angle - NOT WORKING! var d_ang = Math.acos(dot_prod); var side; if (Math.abs(d_ang) < Math.PI/2 ) side = "front"; else side = "back"; console.log("length",len); console.log("pos:",pos.x,pos.y); console.log("offs:",d_cx,d_cy); console.log("body vec",b_vx,b_vy); console.log("body angle:",ang); console.log("dot product",dot_prod); console.log("result:",d_ang); console.log("side",side); console.log("------------------------"); }

    Read the article
