Search Results

Search found 17407 results on 697 pages for 'static constructor'.


  • Data Transformation Pipeline

    - by davenewza
    I have create some kind of data pipeline to transform coordinate data into more useful information. Here is the shell of pipeline: public class PositionPipeline { protected List<IPipelineComponent> components; public PositionPipeline() { components = new List<IPipelineComponent>(); } public PositionPipelineEntity Process(Position position) { foreach (var component in components) { position = component.Execute(position); } return position; } public PositionPipeline RegisterComponent(IPipelineComponent component) { components.Add(component); return this; } } Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity. Code: public interface IPipelineComponent { PositionPipelineEntity Execute(PositionPipelineEntity position); } The PositionPipelineEntity needs to have many properties, many which are unused in certain components and required in others. Some properties will also have become redundant at the end of the pipeline. For example, these components could be executed: TransformCoordinatesComponent: Parse the raw coordinate data into a Coordinate type. DetermineCountryComponent: Determine and stores country code. DetermineOnRoadComponent: Determine and store whether coordinate is on a road. Code: pipeline .RegisterComponent(new TransformCoordinatesComponent()) .RegisterComponent(new DetermineCountryComponent()) .RegisterComponent(new DetermineOnRoadComponent()); pipeline.Process(positionPipelineEntity); The PositionPipelineEntity type: public class PositionPipelineEntity { // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLatitude { get; set; } // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLongitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLatitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLongitude { get; set; } // Set in DetermineCountryComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public string CountryCode { get; set; } // Set in DetermineOnRoadComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public bool OnRoad { get; set; } } Problems: I'm very concerned about the dependency that a component has on properties. The way to solve this would be to create specific types for each component. The problem then is that I cannot chain them together like this. The other problem is the order of components in the pipeline matters. There is some dependency. The current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.
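    One possible direction for the ordering/typing concern raised above (a hedged sketch, not the original design - the generic interface and the per-stage types below are hypothetical): give each component a distinct input and output type and chain the stages as function composition, so that registering components in the wrong order becomes a compile-time error. The trade-off is losing the simple homogeneous List<IPipelineComponent> registration shown in the post.

        // C# sketch only; the single PositionPipelineEntity is split into per-stage types.
        using System;

        public record RawPosition(decimal RawLatitude, decimal RawLongitude);
        public record Coordinates(decimal Latitude, decimal Longitude);
        public record LocatedPosition(Coordinates Coordinates, string CountryCode);

        // A stage maps one concrete input type to one concrete output type,
        // so ordering mistakes surface at compile time rather than at runtime.
        public interface IPipelineComponent<TIn, TOut>
        {
            TOut Execute(TIn input);
        }

        public class TransformCoordinatesComponent : IPipelineComponent<RawPosition, Coordinates>
        {
            public Coordinates Execute(RawPosition input) =>
                new Coordinates(input.RawLatitude, input.RawLongitude);
        }

        public class DetermineCountryComponent : IPipelineComponent<Coordinates, LocatedPosition>
        {
            // Placeholder lookup; a real implementation would resolve the country from the coordinates.
            public LocatedPosition Execute(Coordinates input) =>
                new LocatedPosition(input, "ZA");
        }

        public static class PipelineDemo
        {
            public static void Main()
            {
                var transform = new TransformCoordinatesComponent();
                var country = new DetermineCountryComponent();

                // Chaining is plain composition: swapping the two calls no longer compiles.
                var located = country.Execute(transform.Execute(new RawPosition(-33.9m, 18.4m)));
                Console.WriteLine(located.CountryCode);
            }
        }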


  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.


  • Isometric layer moving inside map

    - by gronzzz
    i'm created isometric map and now trying to limit layer moving. Main idea, that i have left bottom, right bottom, left top, right top points, that camera can not move outside, so player will not see map out of bounds. But i can not understand algorithm of how to do that. It's my layer scale/moving code. - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event { _isTouchBegin = YES; } - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event { NSArray *allTouches = [[event allTouches] allObjects]; UITouch *touchOne = [allTouches objectAtIndex:0]; CGPoint touchLocationOne = [touchOne locationInView: [touchOne view]]; CGPoint previousLocationOne = [touchOne previousLocationInView: [touchOne view]]; // Scaling if ([allTouches count] == 2) { _isDragging = NO; UITouch *touchTwo = [allTouches objectAtIndex:1]; CGPoint touchLocationTwo = [touchTwo locationInView: [touchTwo view]]; CGPoint previousLocationTwo = [touchTwo previousLocationInView: [touchTwo view]]; CGFloat currentDistance = sqrt( pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) + pow(touchLocationOne.y - touchLocationTwo.y, 2.0f)); CGFloat previousDistance = sqrt( pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) + pow(previousLocationOne.y - previousLocationTwo.y, 2.0f)); CGFloat distanceDelta = currentDistance - previousDistance; CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo); pinchCenter = [self convertToNodeSpace:pinchCenter]; CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER); if([self predictionScaleInBounds:predictionScale]) { [self scale:predictionScale scaleCenter:pinchCenter]; } } else { // Dragging _isDragging = YES; CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne]; CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne]; CGPoint delta = ccpSub(current, previous); self.position = ccpAdd(self.position, delta); } } - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event { _isDragging = NO; _isTouchBegin = NO; // Check if i need to bounce _touchLoc = [touch locationInNode:self]; } #pragma mark - Update - (void)update:(CCTime)delta { CGPoint position = self.position; float scale = self.scale; static float friction = 0.92f; //0.96f; if(_isDragging && !_isScaleBounce) { _velocity = ccp((position.x - _lastPos.x)/2, (position.y - _lastPos.y)/2); _lastPos = position; } else { _velocity = ccp(_velocity.x * friction, _velocity.y *friction); position = ccpAdd(position, _velocity); self.position = position; } if (_isScaleBounce && !_isTouchBegin) { float min = fabsf(self.scale - MIN_SCALE); float max = fabsf(self.scale - MAX_SCALE); int dif = max > min ? 1 : -1; if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) || (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) { CGFloat newSscale = scale + dif * (delta * friction); [self scale:newSscale scaleCenter:_touchLoc]; } else { _isScaleBounce = NO; } } }


  • Getting Terms associated to a Specific list item

    - by Gino Abraham
    I had a fancy requirement where i had to get all tags associated to a document set in a document library. The normal tag could webpart was not working when i add it to the document set home page, so planned a custom webpart. Was checking in net to find a straight forward way to achieve this, but was not lucky enough to get something. Since i didnt get any samples in net, i looked into Microsoft.Sharerpoint.Portal.Webcontrols and found a solution.The socialdataframemanager control in 14Hive/Template/layouts/SocialDataFrame.aspx directed me to the solution. You can get the dll from ISAPI folder. Following Code snippet can get all Terms associated to the List Item given that you have list name and id for the list item. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint; using Microsoft.Office.Server.SocialData; namespace TagChecker { class Program { static void Main(string[] args) { // Your site url string siteUrl = http://contoso; // List Name string listName = "DocumentLibrary1"; // List Item Id for which you want to get all terms int listItemId = 35; using (SPSite site = new SPSite(siteUrl)) { using(SPWeb web = site.OpenWeb()) { SPListItem listItem = web.Lists[listName].GetItemById(listItemId); string url = string.Empty; // Based on the list type the url would be formed. Code Sniffed from Micosoft dlls :) if (listItem.ParentList.BaseType == SPBaseType.DocumentLibrary) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Url.TrimStart(new char[] { '/' }); } else if (SPFileSystemObjectType.Folder == listItem.FileSystemObjectType) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Folder.Url.TrimStart(new char[] { '/' }); } else { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.ParentList.Forms[PAGETYPE.PAGE_DISPLAYFORM].Url.TrimStart(new char[] { '/' }) + "?ID=" + listItem.ID.ToString(); } SPServiceContext serviceContext = SPServiceContext.GetContext(site); Uri uri = new Uri(url); SocialTagManager mgr = new SocialTagManager(serviceContext); SocialTerm[] terms = mgr.GetTerms(uri); foreach (SocialTerm term in terms) { Console.WriteLine(term.Term.Labels[0].Value ); } } } Console.Read(); } } } Reference dlls added are Microsoft.Sharepoint , Microsoft.Sharepoint.Taxonomy, Microsoft.office.server, Microsoft.Office.Server.UserProfiles from ISAPI folder. This logic can be used to make a custom tag cloud webpart by taking code from OOB tag cloud, so taht you can have you webpart anywhere in the site and still get Tags added to a specifc libdary/List. Hope this helps some one.


  • Drawing a texture at the end of a trace (crosshair?) UDK

    - by Dave Voyles
    I'm trying to draw a crosshair at the end of my trace. If my crosshair does not hit a pawn or static mesh (ex, just a skybox) then the crosshair stays locked on a certain point at that actor - I want to say its origin. Ex: Run across a pawn, then it turns yellow and stays on that pawn. If it runs across the skybox, then it stays at one point on the box. Weird? How can I get my crosshair to stay consistent? I've included two images for reference, to help illustrate. Note: The wrench is actually my crosshair. The "X" is just a debug crosshair. Ignore that. /// Image 1 /// /// Image 2 /// /*************************************************************************** * Draws the crosshair ***************************************************************************/ function bool CheckCrosshairOnFriendly() { local float CrosshairSize; local vector HitLocation, HitNormal, StartTrace, EndTrace, ScreenPos; local actor HitActor; local MyWeapon W; local Pawn MyPawnOwner; /** Sets the PawnOwner */ MyPawnOwner = Pawn(PlayerOwner.ViewTarget); /** Sets the Weapon */ W = MyWeapon(MyPawnOwner.Weapon); /** If we don't have an owner, then get out of the function */ if ( MyPawnOwner == None ) { return false; } /** If we have a weapon... */ if ( W != None) { /** Values for the trace */ StartTrace = W.InstantFireStartTrace(); EndTrace = StartTrace + W.MaxRange() * vector(PlayerOwner.Rotation); HitActor = MyPawnOwner.Trace(HitLocation, HitNormal, EndTrace, StartTrace, true, vect(0,0,0),, TRACEFLAG_Bullet); DrawDebugLine(StartTrace, EndTrace, 100,100,100,); /** Projection for the crosshair to convert 3d coords into 2d */ ScreenPos = Canvas.Project(HitLocation); /** If we haven't hit any actors... */ if ( Pawn(HitActor) == None ) { HitActor = (HitActor == None) ? None : Pawn(HitActor.Base); } } /** If our trace hits a pawn... */ if ((Pawn(HitActor) == None)) { /** Draws the crosshair for no one - Grey*/ CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX /1024); Canvas.SetDrawColor(100,100,128,255); Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y -(CrosshairSize * 0.5f)); Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27); return false; } /** Draws the crosshair for friendlies - Yellow */ CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX /1024); Canvas.SetDrawColor(255,255,128,255); Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y -(CrosshairSize * 0.5f)); Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27); return true; }


  • Styling Windows Phone Silverlight Applications

    - by Tim Murphy
    If you have not developed with styles in Silverlight/XAML then it can be challenging and resources can be sparse depending on how deep you get.  One thing that you need to understand is what level you can apply styles and how much they can cascade.  What I am finding is that this doesn’t go to the level that we are used to in HTML and CSS. While styles can be defined at a page level if you want to share styles throughout your application they should be defined in the App.xaml file.  This is of course analogous to placing a style in your HTML file versus an external CSS file.  This is the type of style I will concentrate on in this post. The first thing to look it how styles associate to elements.  TargetType defines the object type that your style will apply to.  In the example below the style is targeting the TextBlock object type. <Style x:Key="TextBlockSmallGray" TargetType="TextBlock"> Next we use a Setter which allows you to apply values for specific attributes of the target object type.  The setters can be a simple value or complex.  The first example here is simply applying a color to the background property of the target. <Setter Property="Background" Value="White"/> The second setter example here is for the same property, but we are applying a the definition of a LinearGradientBrush. <Setter Property="Background"> <Setter.Value> <LinearGradientBrush> <GradientStop Offset="0" Color="Black"/> <GradientStop Offset="1" Color="White"/> </LinearGradientBrush> </Setter.Value> </Setter> The last thing I want to cover here is that you can leverage the system styles and then override or extend them.  The BasedOn attribute of the Style tag allows this sort of inheritance.  In the example below I am going to start with the PhoneTextTitleStyle and then override properties as needed. <Style x:Key="TextBlockTitle" BasedOn="{StaticResource PhoneTextTitle1Style}" TargetType="TextBlock"> So now that we have our styles defined applying it is fairly straight forward.  Add the style name as a static resource to the style property of the element in your page and off you go. <Grid x:Name="LayoutRoot" Style="{StaticResource PageGridStyle}"> So this is one step in creating consistency in your application’s look.  In future posts I will dig a little deeper. del.icio.us Tags: windows phone 7,mobile development,windows phone 7 development,.NET,software development,design,UX


  • How to Overcome or Fix the MIXED_DML_OPERATION Error in Salesforce Apex Without a Future Method?

    - by sathya
    How to Overcome or fix MIXED_DML_OPERATION error in Salesforce APEX without future method ?MIXED_DML_OPERATION :-one of the worst issues we have ever faced :)While trying to perform DML operation on a setup object and non-setup object in a single action you will face this error.Following are the solutions I tried and the final one worked out :-1. perform the 1st objects DML on normal apex method. Then Call the 2nd objects DML through a future method.    Drawback :- You cant get a response from the future method as its context is different and because its executing asynchronously and that its static.2. Tried the following option but it didnt work :-    1. perform the dml operation on the normal apex method.    2. tried calling the 2nd dml from trigger thinking that it would be in a different context. But it didnt work.    3. Some suggestions were given in some blogs that we could try System.runas()   Unfortunately that works only for test class.   4. Finally achieved it with response synchronously through the following solution :-    a. Created 2 apex:commandbuttons :-        1. <apex:commandButton value="Save and Send Activation Email" action="{!CreateContact}"  rerender="junkpanel" oncomplete="callSimulateUserSave()">            Note :- Oncomplete will not work if you dont have a rerender attribute. So just try refreshing a junk panel.        2. <apex:commandButton value="SimulateUserSave" id="SimulateUserSave" action="{!SaveUser}"  style="display:none;margin-left:5px;"/>        Have a junk panel as well just for rerendering  :-        <apex:outputPanel id="junkpanel"></apex:outputPanel>    b. Created this javascript function which is called from first button's oncomplete and clicks the second button :-                function callSimulateUserSave()                {                    // Specify the id of the button that needs to be clicked. This id is based on the Apex Component Hierarchy.                    // You will not get this value if you dont have the id attribute in the button which needs to be clicked from javascript                    // If you have any doubt in getting this value. Just hover over the button using Chrome developer tools to get the id.                    // But it will show like theForm:SimulateUserSave but you need to replace the colon with a dot here.                    // Note :- I have given display:none in the style of the second button to make sure that, it is not visible for the user.                    var mybtn=document.getElementById('{!$Component.theForm.SimulateUserSave}');                                    mybtn.click();                }    c. Apex Methods CreateContact and SaveUser are the pagereference methods which contains the code to create contact and user respectively.       After inserting the user inside the second apex method you can just set some public Properties in the page,        for ex:- created userid to get the user details and display in the page to show the acknowledgement to the users that the User is created.


  • Fixing a NoClassDefFoundError

    - by Chris Okyen
    I have some code: package ftc; import java.util.Scanner; public class Fer_To_Cel { public static void main(String[] argv) { // Scanner object to get the temp in degrees Farenheit Scanner keyboard = new Scanner(System.in); boolean isInt = true; // temporarily put as true in case the user enters a valid int the first time int degreesF = 0; // initialy set to 0 do { try { // Input the temperature text. System.out.print("\nPlease enter a temperature (integer number, no fractional part) in degrees Farenheit: "); degreesF = Integer.parseInt(keyboard.next()); // Get user input and Assign the far. temperature variable, which is casted from String to int. } // Let the user know in a user friendly notice that the value entered wasnt an int ( give int value range ) , and then give error log catch(java.lang.Exception e) { System.out.println("Sorry but you entered a non-int value ( needs to be between ( including ) -2,147,483,648 and 2,147,483,647 ).. \n"); e.printStackTrace(); isInt = false; } } while(!isInt); System.out.println(""); // print a new line. final int degreesC = (5*(degreesF-32)/9); // convert the degrees from F to C and store the resulting expression in degreesC // Print out a newline, then print what X degrees F is in Celcius. System.out.println("\n" + degreesF + " degrees Farenheit is " + degreesC + " degrees Celcius"); } } And The following error: C:\Program Files\Java\jdk1.7.0_06\bin>java Fer_To_Cel Exception in thread "main" java.lang.NoClassDefFoundError: Fer_To_Cel (wrong name: ftc/Fer_To_Cel) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:14 at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:480) The code compiled without compile errors, but presented errors during execution. Which leads me to two questions. I know Errors can be termed Compiler, Runtime and Logic Errors, but the NoClassDefFoundError inherits java.lang.LinkageError. Does that make it a Linker error, being niether of the three types of errors I listed, If I am right this is the answer. For someone else who obtains the singular .java file and compiles it, would this be the only way to solve this problem? Or can I (should I ) do/have done something to fix this problem? Basically, based on a basis of programming, is this a fault of me as the writer? Could this be done once on, my half and be distributed and not needed be done again?


  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm, and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue is once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.


  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code but I wanted to add scaling. The code works fine when I leave scaling commented out but when I add it and then do a Matrix.Invert() on the created translation matrix the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...) Can anyone tell me why this is happening please? Here's the code from the sample: // Build the block's transform Matrix blockTransform = Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) * // Matrix.CreateScale(block.Scale) * would go here Matrix.CreateRotationZ(blocks[i].Rotation) * Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f)); public static bool IntersectPixels( Matrix transformA, int widthA, int heightA, Color[] dataA, Matrix transformB, int widthB, int heightB, Color[] dataB) { // Calculate a matrix which transforms from A's local space into // world space and then into B's local space Matrix transformAToB = transformA * Matrix.Invert(transformB); // When a point moves in A's local space, it moves in B's local space with a // fixed direction and distance proportional to the movement in A. // This algorithm steps through A one pixel at a time along A's X and Y axes // Calculate the analogous steps in B: Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB); Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB); // Calculate the top left corner of A in B's local space // This variable will be reused to keep track of the start of each row Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB); // For each row of pixels in A for (int yA = 0; yA < heightA; yA++) { // Start at the beginning of the row Vector2 posInB = yPosInB; // For each pixel in this row for (int xA = 0; xA < widthA; xA++) { // Round to the nearest pixel int xB = (int)Math.Round(posInB.X); int yB = (int)Math.Round(posInB.Y); // If the pixel lies within the bounds of B if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB) { // Get the colors of the overlapping pixels Color colorA = dataA[xA + yA * widthA]; Color colorB = dataB[xB + yB * widthB]; // If both pixels are not completely transparent, if (colorA.A != 0 && colorB.A != 0) { // then an intersection has been found return true; } } // Move to the next pixel in the row posInB += stepX; } // Move to the next row yPosInB += stepY; } // No intersection found return false; }
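    On the NaN question above: Matrix.Invert produces NaN entries when the matrix being inverted is singular (determinant zero), and a scale of 0 on any axis is one frequent way to end up there. A hedged check, assuming a Scale member exists on the block (the original sample only hints at it in a comment):

        // Sketch only: verify the block transform is invertible before using it for collision.
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            Matrix.CreateScale(blocks[i].Scale) *          // hypothetical Scale member
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        if (Math.Abs(blockTransform.Determinant()) < 1e-6f)
        {
            // The scale (or the whole transform) collapsed to zero; Matrix.Invert would return NaN here.
            // Clamp the scale to a small positive value or skip the per-pixel test for this block.
        }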


  • Duplicate ping response when running Ubuntu as virtual machine (VMWare)

    - by Stonerain
    I have the following setup: My router - 192.168.0.1 My host computer (Windows 7) - 192.168.0.3 And Ubuntu is running as virtual machine on the host. VMWare network settings is Bridged mode. I've modified Ubuntu network settings in /etc/netowrk/interfaces, set the following config: iface eth0 inet static address 192.168.0.220 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 gateway 192.168.0.1 Internet works correctly, I can install packages. But it gets weird if I try to ping something I get this: PING belpak.by (193.232.248.80) 56(84) bytes of data. From 192.168.0.1 icmp_seq=1 Time to live exceeded From 192.168.0.1 icmp_seq=1 Time to live exceeded From 192.168.0.1 icmp_seq=1 Time to live exceeded From 192.168.0.1 icmp_seq=1 Time to live exceeded From 192.168.0.1 icmp_seq=1 Time to live exceeded 64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=250 time=17.0 ms 64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=249 time=17.0 ms (DUP! ) 64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=248 time=17.0 ms (DUP! ) 64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=247 time=17.0 ms (DUP! ) 64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=246 time=17.0 ms (DUP! ) ^CFrom 192.168.0.1 icmp_seq=2 Time to live exceeded --- belpak.by ping statistics --- 2 packets transmitted, 1 received, +4 duplicates, +6 errors, 50% packet loss, ti me 999ms rtt min/avg/max/mdev = 17.023/17.041/17.048/0.117 ms I think even more interesting are the results of pinging the router itself: stonerain@ubuntu:~$ ping 192.168.0.1 -c 1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. From 192.168.0.3: icmp_seq=1 Redirect Network(New nexthop: 192.168.0.1) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=6.64 ms --- 192.168.0.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 6.644/6.644/6.644/0.000 ms But if I set -c 2: ... 64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=13.5 ms (DUP!) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=253 time=13.5 ms (DUP!) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!) 64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!) From 192.168.0.3: icmp_seq=2 Redirect Network(New nexthop: 192.168.0.1) 64 bytes from 192.168.0.1: icmp_seq=2 ttl=254 time=7.87 ms --- 192.168.0.1 ping statistics --- 2 packets transmitted, 2 received, +256 duplicates, 0% packet loss, time 1002ms rtt min/avg/max/mdev = 6.666/10.141/13.556/2.410 ms Pinging host machine on the other hand works absolutely correctly: no DUPs, no errors. What seems to be the problem and how can I fix it? Thank you.


  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game were the track is generated from a curve. As said above, the track is generated, but not infinite. The track of one level could fit with no problem in memory and will contain a reasonably small amount of triangles. For collisions, I would like to use Bullet physics engine and know what is the best way to handle collisions with the track efficiently. NOTE : The track will be stored as a static rigid body (mass = 0). The player will be represented by a sphere shape for collisions. Here is some possibilities i have in mind : Create one rigid body, then, put all triangles of the track (except non collidable stuff) into it. Result : 1 body with many triangles (eg : 30000 triangles) Split the track into several sections (eg: 10 sections). Then, for each section, create a rigid body and put corresponding triangles in it. Result : small amount of bodies with relatively small amount of triangles (eg : 1500 triangles per section). Split the track into many sub-sections (eg : 1200 sections). Here one subsection = very small step when generating the curve. Again for each sub-section, create a body and put triangles in it. Result : many bodies with very small amount of triangles (eg : 20 triangles). Advantage : it could be possible to "extra data" to each of the subsection, that could be used when handling collisions. Same as 2, but only put sections N and N+1 in physics engine (where N = current section where the player is). When player reach section N+1, unload section N and load section N+2 and so on... Issue : harder to implement, problems if the player suddenly "jump" from one section to another (eg : player fly away from section N, and fall on section N + 4 that was underneath : no collision handled, player will fall into void ) Same as 4, but with many sub-sections. Issues : since subsections are very small there will be constantly new bodies added and removed to physics engine at runtime. Possibilities for player to accidently skip some sections and fall into the void are higher than 4.


  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to be updated to reflect the output. This results in a significant performance difference - the write works at disk speed. The following is a program to test this:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        static double s_time;

        void starttime() { s_time = 1.0 * gethrtime(); }

        void endtime(long its)
        {
          double e_time = 1.0 * gethrtime();
          printf("Time per iteration %5.2f MB/s\n", (1.0 * its) / (e_time - s_time * 1.0) * 1000);
          s_time = 1.0 * gethrtime();
        }

        #define SIZE 10*1024*1024

        void test_write()
        {
          starttime();
          int file = open("./test.dat", O_WRONLY | O_CREAT, S_IWGRP | S_IWOTH | S_IWUSR);
          for (int i = 0; i < SIZE; i++) { write(file, "a", 1); }
          close(file);
          endtime(SIZE);
        }

        void test_fprintf()
        {
          starttime();
          FILE* file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++) { fprintf(file, "a"); }
          fclose(file);
          endtime(SIZE);
        }

        void test_flush()
        {
          starttime();
          FILE* file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++) { fprintf(file, "a"); fflush(file); }
          fclose(file);
          endtime(SIZE);
        }

        int main()
        {
          test_write();
          test_fprintf();
          test_flush();
        }

    Compiling and running, I get 0.2 MB/s for write() and 6 MB/s for fprintf() - a large difference. There are three tests in this example; the third uses fprintf() and fflush(), which is equivalent to write() both in performance and in functionality. Which leads to the suggestion that fprintf() (and other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.


  • Dynamic Bind9 + DHCP

    - by AcidRod75
    i have been working on setup a server for my internal network, so far i have a working isc-dhcp-server that can upgrade a chrooted BIND9 (on the same machine), i need to add some static entries on the DNS, so users can resolve the websites that resides in our DMZ. What i had tryed all ready was to modify the /etc/bind/named.conf.local with this info: // // Do any local configuration here // // Consider adding the 1918 zones here, if they are not used in your // organization //include "/etc/bind/zones.rfc1918"; key DHCP_UPDATER { algorithm HMAC-MD5.SIG-ALG.REG.INT; secret "MySuperSecretHash"; (this is not the real value BTW) }; zone "quality.internal" IN { type master; file "/var/lib/bind/quality.internal.db"; allow-update { key DHCP_UPDATER; }; }; zone "0.10.10.in-addr.arpa" { type master; file "/var/lib/bind/rev.10.10.0.in-addr.arpa"; allow-update { key DHCP_UPDATER; }; }; logging { channel query.log { file "/var/log/named/query.log"; severity debug 3; }; category queries { query.log; }; }; --- EOF ---- then i added this 2 entries: zone "ourserver.internal" IN { type master; file "/var/lib/bind/ourserver.internal.db"; }; zone "0.16.172.in-addr.arpa" { type master; file "/var/lib/bind/rev.172.16.0.in-addr.arpa"; }; ---- EOF ---- So.. i created the files ourserver.internal.db and rev.172.16.0.in-addr.arpa placed them BOTH in /var/lib/bind/ and changed the permisions so the bind user can access them, restated the service... when i do a NSLOOKUP www.ourserver.internal i get: Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find www.ourserver.internal: NXDOMAIN BUT when i do a reverse lookup.... Server: 127.0.0.1 Address: 127.0.0.1#53 5.0.16.172.in-addr.arpa name = www.ourserver.internal I do not understand what's wrong. Some help with this will save me from installing a new DNS server at the DMZ JUST to host internal site names- TY in advance BTW: the server i'm using has Ubuntu Server 11.10 fully patched.


  • C Minishell Command Expansion Printing Gibberish

    - by Optimus_Pwn
    I'm writing a unix minishell in C, and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example: $> echo hello $(echo world! ... $(echo and stuff)) hello world! ... and stuff I think I have it working mostly, however it isn't marking the end of the expanded string correctly, for example if I do: $> echo a $(echo b $(echo c)) a b c $> echo d $(echo e) d e c See it prints the c, even though I didn't ask it to. Here is my code: msh.c - http://pastebin.com/sd6DZYwB expand.c - http://pastebin.com/uLqvFGPw I have a more code, but there's a lot of it, and these are the parts that I'm having trouble with at the moment. I'll try to tell you the basic way I'm doing this. Main is in msh.c, here it gets a line of input from either the commandline or a shellfile, and then calls processline (char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this: processline (buffer, 1, 1); In processline, we allocate a new line: char expanded_line[EXPANDEDLEN]; We then call expand, in expand.c: expand(line, expanded_line, EXPANDEDLEN); In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls: static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind) orig is line, and new is expanded line. oldl_ind and newl_ind are the current positions in the line and expanded line, respectively. Then we pipe, and recursively call processline, passing it the nested command(for example, if we had "echo a $(echo b)", we would pass processline "echo b"). This is where I get confused, each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, this is bad because I'll run out of stack room really quickly(in the case of a hugely nested commandline input). In expand I insert a null character at the end of the expanded string, so why is it printing past it? If you guys need any more code, or explanations, just ask. Secondly, I put the code in pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code. Thanks.


  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do Ray Picking, that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code: public static Ray getPick(int mouseX, int mouseY){ glPushMatrix(); //Setting up the Mouse Clip Vector4f mouseClip = new Vector4f((float)mouseX * 2 / 960f - 1, 1 - (float)mouseY * 2 / 640f ,0 ,1); //Loading Matrices FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16); FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16); glGetFloat(GL_MODELVIEW_MATRIX, modMatrix); glGetFloat(GL_PROJECTION_MATRIX, projMatrix); //Assigning Matrices Matrix4f proj = new Matrix4f(); Matrix4f model = new Matrix4f(); model.load(modMatrix); proj.load(projMatrix); //Multiplying the Projection Matrix by the Model View Matrix Matrix4f tempView = new Matrix4f(); Matrix4f.mul(proj, model, tempView); tempView.invert(); //Getting the Camera Position in World Space. The 4th Column of the Model View Matrix. model.invert(); Point cameraPos = new Point(model.m30, model.m31, model.m32); //Theoretically getting the vector the Picking Ray goes Vector4f rayVector = new Vector4f(); Matrix4f.transform(tempView, mouseClip, rayVector); rayVector.translate((float)-cameraPos.getX(),(float) -cameraPos.getY(),(float) -cameraPos.getZ(), 0f); rayVector.normalise(); glPopMatrix(); //This Basically Spits out a value that changes as the Camera moves. //When the Mouse moves, the values change around 0.001 points from screen edge to edge. System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z); //return new Ray(cameraPos, rayVector); return null; } I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ, it's the vector the rays going in that I can't seem to get right. Thanks.


  • MarteEngine Tile Collision

    - by opiop65
    I need to add collision to my tile map using MarteEngine. MarteEngine is built of of slick2D. Here's my tile generation code: Code: public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException { for (int x = 0; x < 16; x++) { for (int y = 0; y < 16; y++) { map[x][y] = AIR; air.draw(x * GameWorld.tilesize, y * GameWorld.tilesize); } } for (int x = 0; x < 16; x++) { for (int y = 7; y < 8; y++) { map[x][y] = GRASS; grass.draw(x * tilesize, y * tilesize); } } for (int x = 0; x < 16; x++) { for (int y = 8; y < 10; y++) { map[x][y] = DIRT; dirt.draw(x * tilesize, y * tilesize); } } for (int x = 0; x < 16; x++) { for (int y = 10; y < 16; y++) { map[x][y] = STONE; stone.draw(x * tilesize, y * tilesize); } } super.render(gc, game, g); } And one of my tile classes (they're all the same, the image names are just different): Code: package MarteEngine; import org.newdawn.slick.Image; import org.newdawn.slick.SlickException; import it.randomtower.engine.entity.Entity; public class Grass extends Entity { public static Image grass = null; public Grass(float x, float y) throws SlickException { super(x, y); grass = new Image("res/grass.png"); setHitBox(0, 0, 50, 50); addType(SOLID); } } I tried to do it like this: Code: for (int x = 0; x < 16; x++) { for (int y = 7; y < 8; y++) { map[x][y] = GRASS; Grass.grass.draw(x * tilesize, y * tilesize); } } But it gave me a NullPointerException. No idea why, everything looks initialized right? I would be very grateful for some help!


  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been brothering for a couple of days now. First off, what is the difference between say Body.getWorldCenter() and Body.getPosition(). I heard that WorldCenter might have to do with the center of gravity or something. Second, When I create a Box2D Body for a sprite the Body is always at the lower left corner. I check it by printing a Rectangle of 1 pixel around the box.getWorldCenter(). From what I understand the Body should be in the center of the Sprite and its bounding box should wrap around the Sprite, correct? Here's an image of what I mean (The Sprite is Red, Body Blue): Here's some code: Body Creator: public static Body createBoxBody( final World pPhysicsWorld, final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) { float pRotation = 0; float pCenterX = pSprite.getX() + pSprite.getWidth() / 2; float pCenterY = pSprite.getY() + pSprite.getHeight() / 2; float pWidth = pSprite.getWidth(); float pHeight = pSprite.getHeight(); final BodyDef boxBodyDef = new BodyDef(); boxBodyDef.type = pBodyType; //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO; //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO; boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO; boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO; Vector2 v = new Vector2( boxBodyDef.position.x * Constants.PIXEL_METER_RATIO, boxBodyDef.position.y * Constants.PIXEL_METER_RATIO ); Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v); // Temporary Box shape of the Body final PolygonShape boxPoly = new PolygonShape(); final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO; final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO; boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite pFixtureDef.shape = boxPoly; final Body boxBody = pPhysicsWorld.createBody(boxBodyDef); Gdx.app.log("@Physics", "createBoxBody():: Box Center: " + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO)); boxBody.createFixture(pFixtureDef); boxBody.setTransform( boxBody.getWorldCenter(), MathUtils.degreesToRadians * pRotation ); boxPoly.dispose(); return boxBody; } Making the Sprite: public Car( Texture texture, float pX, float pY, World world ) { super( "Car" ); mSprite = new Sprite( texture ); mSprite.setSize( mSprite.getWidth() / 6, mSprite.getHeight() / 6 ); mSprite.setPosition( pX, pY ); mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2); FixtureDef carFixtureDef = new FixtureDef(); // Set the Fixture's properties, like friction, using the car's shape carFixtureDef.restitution = 1f; carFixtureDef.friction = 1f; carFixtureDef.density = 1f; // needed to rotate body using applyTorque mBody = Physics.createBoxBody( world, BodyDef.BodyType.DynamicBody, carFixtureDef, mSprite ); }


  • Our own Daily WTF

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2014/08/20/our-own-daily-wtf.aspxIf you're a developer, you've probably heard of the website the DailyWTF. If you haven't, head on over to http://www.thedailywtf.com and read. And laugh. I'll wait. Read it? Good. If you're a bit like me probably you've been wondering how on earth some people ever get hired as a software engineer. Most of the stories there seem to weird to be true: no developer would write software like that right? And then you run into a little nugget of code one of your co-workers wrote. And then you realize: "Hey, it happens everywhere!" Look at this piece of art I found in our codebase recently: public static decimal ToDecimal(this string input) {     System.Globalization.CultureInfo cultureInfo = System.Globalization.CultureInfo.InstalledUICulture;     var numberFormatInfo = (System.Globalization.NumberFormatInfo)cultureInfo.NumberFormat.Clone();     int dotIndex = input.IndexOf(".");     int commaIndex = input.IndexOf(",");     if (dotIndex > commaIndex)         numberFormatInfo.NumberDecimalSeparator = ".";     else if (commaIndex > dotIndex)         numberFormatInfo.NumberDecimalSeparator = ",";     decimal result;     if (decimal.TryParse(input, System.Globalization.NumberStyles.Float, numberFormatInfo, out result))         return result;     else         throw new Exception(string.Format("Invalid input for decimal parsing: {0}. Decimal separator: {1}.", input, numberFormatInfo.NumberDecimalSeparator)); }  Me and a collegue have been looking long and hard at this and what we concluded was the following: Apparently, we don't trust our users to be able to correctly set the culture in Windows. Users aren't able to determine if they should tell Windows to use a decimal point or a comma to display numbers. So what we've done here is make sure that whatever the user enters, we'll translate that into whatever the user WANTS to enter instead of what he actually did. So if you set your locale to US, since you're a US citizen, but you want to enter the number 12.34 in the Dutch style (because, you know, the Dutch are way cooler with numbers) so you enter 12,34 we will understand this and respect your wishes! Of course, if you change your mind and in the next input field you decide to use the decimal dot again, that's fine with us as well. We will do the hard work. Now, I am all for smart software. Software that can handle all sorts of input the user can think of. But this feels a little uhm, I don't know.. wrong.. Or am I too old fashioned?
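    For contrast, a minimal sketch of the more conventional approach the post seems to prefer: trust the culture the user has configured instead of guessing the decimal separator from the input. The class name and the choice of CurrentCulture are assumptions for illustration, not the author's code.

        using System;
        using System.Globalization;

        public static class DecimalParsing
        {
            public static decimal ToDecimal(this string input)
            {
                // CurrentCulture reflects the user's regional settings;
                // use CultureInfo.InvariantCulture instead for machine-generated data.
                if (decimal.TryParse(input, NumberStyles.Number,
                                     CultureInfo.CurrentCulture, out var result))
                {
                    return result;
                }
                throw new FormatException($"Invalid decimal input: {input}");
            }
        }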


  • Rendering 2D Tile World (With Player in the Middle)

    - by Mick
    What I have at the moment is a series of data structures I'm using, and I would like to render the world onto the screen (just the visible parts). I've actually already done this several times (lots of rewrites), but it's a bit buggy (rounding seems to make the screen jump ever so slightly every x tiles the player walks past). Basically I've been confusing myself heavily on what I feel should be a pretty simple problem... so here I am asking for some help! OK! So I have a 50x50 array holding the tiles of the world. I have the player position as 2 floats, x ([0, 49]) and y ([0, 49]) in that array. I have the application size exactly in pixels (x and y). I have an arbitrary TILE_SIZE static int (based on screen pixels). What I think is heavily confusing me is using a 2d orthogonal projection in opengl which maps (0,0) to the top left of the screen and (SCREEN_SIZE_X, SCREEN_SIZE_Y) to the bottom right of the screen. gl.glMatrixMode(GL.GL_PROJECTION); gl.glLoadIdentity(); glu.gluOrtho2D(0, getActualWidth(), getActualHeight(), 0); gl.glMatrixMode(GL.GL_MODELVIEW); gl.glLoadIdentity(); The map tiles are set so that the (0,0) in the array is the bottom left. And the player has to be in the middle on the screen (SCREEN_SIZE_X/2, SCREEN_SIZE_Y/2). What I've been doing so far is trying to render 1-2 tiles more all around what would be displayed on the screen so that I don't have to worry about figuring out rendering half a tile from the top left, depending where the player is. It seems like such an easy problem but after spending about 40+hours on it rewriting it many times I think I'm at a point where I just can't think clearly anymore... Any help would be appreciated. It would be great if someone can provide some very basic pseudo code on keeping the player in the middle when your projection is mapped to screen coordinates and only rendering basically the tiles that you would be any be see. Thanks!
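    A hedged sketch of the camera math (all names hypothetical, C# used only for illustration, and the y-axis flip between the array and screen coordinates is left out): keep the camera offset in floats, derive it from the player's position so the player sits at the screen centre, and round - if at all - only when issuing the final draw call. Rounding per tile, or rounding the camera offset itself, is a common cause of the screen jumping slightly every few tiles.

        using System;

        static class TileCamera
        {
            // Draw only the tiles that can be visible, positioned relative to a float camera offset.
            public static void DrawVisibleTiles(float playerX, float playerY,
                                                int screenW, int screenH, int tileSize, int mapSize,
                                                Action<int, int, float, float> drawTile)
            {
                float camX = playerX * tileSize - screenW / 2f;
                float camY = playerY * tileSize - screenH / 2f;

                int firstX = Math.Max(0, (int)Math.Floor(camX / tileSize));
                int firstY = Math.Max(0, (int)Math.Floor(camY / tileSize));
                int lastX = Math.Min(mapSize - 1, (int)Math.Floor((camX + screenW) / tileSize));
                int lastY = Math.Min(mapSize - 1, (int)Math.Floor((camY + screenH) / tileSize));

                for (int tx = firstX; tx <= lastX; tx++)
                    for (int ty = firstY; ty <= lastY; ty++)
                        drawTile(tx, ty, tx * tileSize - camX, ty * tileSize - camY);
            }
        }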


  • Is IE9 a modern browser?

    - by anirudha
    Is IE9 a modern browser? There are posts comparing IE9 with Chrome, and it is worth asking the question again. Do you really think Microsoft has made something better than what Firefox and Chrome already do? The point people keep making against IE is that Microsoft ships it version upon version instead of update upon update: Firefox goes from 3.6.13 to 3.6.14, Chrome delivers updates as soon as possible, but a new IE never comes soon, and Microsoft would rather think about making 10 than about updating 9. And what do they actually tell us is new? The developer tools in IE9 have three new tabs or panels - is that enough? - while Firefox and Chrome have many plugins that make development easier, and the IE developer tool is still nowhere near as powerful as Firebug in Firefox. They show off performance but forget Luna users (Windows XP), and IE never runs on any other platform, while Chrome and Firefox can. There is no customisation in IE, whereas Chrome and Firefox have uncountable plugins and add-ons. Features presented as new in IE9 were implemented in Firefox much earlier. There is no rule that the established choice is right every time, and no one can predict which approach will turn out better in the future. So keep one thing in mind: never waste time, and choose whatever makes the task easier, open source or closed source, inside or outside Microsoft - it does not matter. Everyone tells you much more than they deliver, IE included, and they will never tell you that a thing missing from their software can be found in another. Be more wary of IE because it is made to be commercial rather than really for the public - if it were really for the public, why stop Windows XP users from using it, when Firefox and Chrome never force anyone? They need something that drives more copies of Windows to be sold, which is why the next IE is being planned around Windows 8: always forcing users to purchase this to get that, and that trick is what sells their software. Outside Microsoft, Mozilla and Google behave better with users and their feedback; they respect users, their privacy and their opinions. If you do not believe it, look at how many problems you find in IE and how slowly they get solved, while bugs in Chrome and Firefox are killed as soon as possible. Because IE is not open source, I would rather boycott it; and on top of that there is no customisation, while the others make everyday tasks easier, even with Twitter and Facebook.


  • 3D collision physics. Response when hitting wall, floor or roof

    - by GlamCasvaluir
    I am having problem with the most basic physic response when the player collide with static wall, floor or roof. I have a simple 3D maze, true means solid while false means air: bool bMap[100][100][100]; The player is a sphere. I have keys for moving x++, x--, y++, y-- and diagonal at speed 0.1f (0.1 * ftime). The player can also jump. And there is gravity pulling the player down. Relative movement is saved in: relx, rely and relz. One solid cube on the map is exactly 1.0f width, height and depth. The problem I have is to adjust the player position when colliding with solids, I don't want it to bounce or anything like that, just stop. But if moving diagonal left/up and hitting solid up, the player should continue moving left, sliding along the wall. Before moving the player I save the old player position: oxpos = xpos; oypos = ypos; ozpos = zpos; vec3 direction; direction = vec3(relx, rely, relz); xpos += direction.x*ftime; ypos += direction.y*ftime; zpos += direction.z*ftime; gx = floor(xpos+0.25); gy = floor(ypos+0.25); gz = floor(zpos+0.25); if (bMap[gx][gy][gz] == true) { vec3 normal = vec3(0.0, 0.0, 1.0); // <- Problem. vec3 invNormal = vec3(-normal.x, -normal.y, -normal.z) * length(direction * normal); vec3 wallDir = direction - invNormal; xpos = oxpos + wallDir.x; ypos = oypos + wallDir.y; zpos = ozpos + wallDir.z; } The problem with my version is that I do not know how to chose the correct normal for the cube side. I only have the bool array to look at, nothing else. One theory I have is to use old values of gx, gy and gz, but I do not know have to use them to calculate the correct cube side normal.
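    One way to pick the face normal from nothing but the bool map (a hedged sketch, not the original code, with hypothetical names and System.Numerics standing in for the original vec3): compare which grid cell index changed between the old and new positions, take the negated movement direction on that axis as the normal, and then remove the normal component from the movement so the sphere stops against the surface but keeps sliding along the other axes. A full solution would re-test the slid position against bMap before accepting it.

        using System;
        using System.Numerics;

        static class WallSlide
        {
            public static Vector3 SlideAgainstWall(Vector3 oldPos, Vector3 newPos)
            {
                Vector3 delta = newPos - oldPos;
                Vector3 normal = Vector3.Zero;

                // The +0.25f mirrors the cell lookup in the post (floor(pos + 0.25)).
                if (MathF.Floor(newPos.X + 0.25f) != MathF.Floor(oldPos.X + 0.25f))
                    normal.X = -MathF.Sign(delta.X);
                else if (MathF.Floor(newPos.Y + 0.25f) != MathF.Floor(oldPos.Y + 0.25f))
                    normal.Y = -MathF.Sign(delta.Y);
                else
                    normal.Z = -MathF.Sign(delta.Z);

                // Keep everything except the component pointing into the wall.
                Vector3 slide = delta - Vector3.Dot(delta, normal) * normal;
                return oldPos + slide;
            }
        }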


  • Collision detection doesn't work for automated elements in XNA 4.0

    - by NDraskovic
    I have a really weird problem. I made a 3D simulator of an "assembly line" as a part of a college project. Among other things it needs to be able to detect when a box object passes in front of sensor. I tried to solve this by making a model of a laser and checking if the box collides with it. I had some problems with BoundingSpheres of models meshes so I simply create a BoundingSphere and place it in the same place as the model. I organized them into a list of BoundingSpheres called "spheres" and for each model I create one BoundingSphere. All models except the box are static, so the box object has its own BoundingSphere (not a member of the "spheres" list). I also implemented a picking algorithm that I use to start the movement. This is the code that checks for collision: if (spheres.Count != 0) { for (int i = 1; i < spheres.Count; i++) { if (spheres[i].Intersects(PickingRay) != null && Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton) { start = true; break; } if (BoxSphere.Intersects(spheres[i]) && start) { MoveBox(0, false);//The MoveBox function receives the direction (0) and a bool value that dictates whether the box should move or not (false means stop) start = false; break; } if (start /*&& Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton*/ && !BoxSphere.Intersects(spheres[i])) { MoveBox(0, true); break; } } The problem is this: When I use the mouse to move the box (the commented part in the third if condition) the collision works fine (I have another part of code that I removed to simplify my question - it calculates the "address" of the box, and by that number I know that the collision is correct). But when I comment it (like in this example) the box just passes trough the lasers and does not detect the collision (the idea is that the box stops at each laser and the user passes it forth by clicking on the appropriate "switch"). Can you see the problem? Please help, and if you need more informations I will try to give them. Thanks
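    A hedged observation on the loop above (this may not be the only issue): because every branch ends in a break, the loop exits on the first sphere that either does or does not intersect the box, so lasers later in the list are never tested once start is true. One way to restructure it - a sketch only, reusing the post's names - is to decide whether any laser is currently hit before deciding whether the box should keep moving:

        // Sketch: separate "did the player click a switch" and "is any laser hit" from the movement decision.
        bool laserHit = false;
        if (spheres.Count != 0)
        {
            for (int i = 1; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(PickingRay) != null &&
                    Mouse.GetState().LeftButton == Microsoft.Xna.Framework.Input.ButtonState.Pressed)
                {
                    start = true;
                }
                if (BoxSphere.Intersects(spheres[i]))
                {
                    laserHit = true;
                }
            }
        }

        if (start)
        {
            MoveBox(0, !laserHit);   // keep moving only while no laser sphere is intersected
            if (laserHit)
            {
                start = false;       // stop at the sensor until the user clicks the next switch
            }
        }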


  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code: static volatile StringBuffer command = new StringBuffer(); @Override public void chain(InputPoller poller) { this.chain = poller; } @Override public synchronized void poll() { //basic testing for modifier keys, to be used later on boolean shift = false, alt = false, control = false, superkey = false; if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT)) shift = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU)) alt = true; if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL)) control = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA)) superkey = true; while(Keyboard.next()) if(Keyboard.getEventKeyState()) { command.append(Keyboard.getEventCharacter()); } if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { System.out.println("Escape down"); System.out.println(command.length() + " characters polled"); //works System.out.println(command.toString().length()); //works System.out.println(command.toString().charAt(4)); //works System.out.println(command.toString().toCharArray()); //blank line! System.out.println(command.toString()); //blank line! Framework.disableConsole(); } //TODO: Add command construction and console management after that } } Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.


  • OAuth with RestSharp in Windows Phone

    - by midoBB
    Nearly every major API provider uses OAuth for the user authentication and while it is easy to understand the concept, using it in a Windows Phone app isn’t pretty straightforward. So for this quick tutorial we will be using RestSharp for WP7 and the API from getglue.com (an entertainment site) to authorize the user. So the first step is to get the OAuth request token and then we redirect our browserControl to the authorization URL private void StartLogin() {   var client = new RestClient("https://api.getglue.com/"); client.Authenticator = OAuth1Authenticator.ForRequestToken("ConsumerKey", "ConsumerSecret"); var request = new RestRequest("oauth/request_token"); client.ExecuteAsync(request, response => { _oAuthToken = GetQueryParameter(response.Content, "oauth_token"); _oAuthTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret"); string authorizeUrl = "http://getglue.com/oauth/authorize" + "?oauth_token=" + _oAuthToken + "&style=mobile"; Dispatcher.BeginInvoke(() => { browserControl.Navigate(new Uri(authorizeUrl)); }); }); } private static string GetQueryParameter(string input, string parameterName) { foreach (string item in input.Split('&')) { var parts = item.Split('='); if (parts[0] == parameterName) { return parts[1]; } } return String.Empty; } Then we listen to the browser’s Navigating Event private void Navigating(Microsoft.Phone.Controls.NavigatingEventArgs e) { if (e.Uri.AbsoluteUri.Contains("oauth_callback")) { var arguments = e.Uri.AbsoluteUri.Split('?'); if (arguments.Length < 1) return; GetAccessToken(arguments[1]); } } private void GetAccessToken(string uri) { var requestToken = GetQueryParameter(uri, "oauth_token"); var client = new RestClient("https://api.getglue.com/"); client.Authenticator = OAuth1Authenticator.ForAccessToken(ConsumerKey, ConsumerSecret, _oAuthToken, _oAuthTokenSecret); var request = new RestRequest("oauth/access_token"); client.ExecuteAsync(request, response => { AccessToken = GetQueryParameter(response.Content, "oauth_token"); AccessTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret"); UserId = GetQueryParameter(response.Content, "glue_userId"); }); } Now to test it we can access the user’s friends list var client = new RestClient("http://api.getglue.com/v2"); client.Authenticator = OAuth1Authenticator.ForProtectedResource(ConsumerKey, ConsumerSecret, GAccessToken, AccessTokenSecret); var request = new RestRequest("/user/friends"); request.AddParameter("userId", UserId,ParameterType.GetOrPost); // request.AddParameter("category", "all",ParameterType.GetOrPost); client.ExecuteAsync(request, response => { TreatFreindsList(); }); And that’s it now we can access all OAuth methods using RestSharp.

