Search Results

Search found 1863 results on 75 pages for 'delay'.


  • Tips for XNA WP7 Developers

    - by Michael B. McLaughlin
    There are several things any XNA developer should know/consider when coming to the Windows Phone 7 platform. This post assumes you are familiar with the XNA Framework and with the changes between XNA 3.1 and XNA 4.0. It’s not exhaustive; it’s simply a list of things I’ve gathered over time. I may come back and add to it over time, and I’m happy to add anything anyone else has experienced or learned as well. Display · The screen is either 800x480 or 480x800. · But you aren’t required to use only those resolutions. · The hardware scaler on the phone will scale up from 240x240. · One dimension will be capped at 800 and the other at 480; which depends on your code, but you cannot have, e.g., an 800x600 back buffer – that will be created as 800x480. · The hardware scaler will not normally change aspect ratio, though, so no unintended stretching. · Any dimension (width, height, or both) below 240 will be adjusted to 240 (without any aspect ratio adjustment such that, e.g. 200x240 will be treated as 240x240). · Dimensions below 240 will be honored in terms of calculating whether to use portrait or landscape. · If dimensions are exactly equal or if height is greater than width then game will be in portrait. · If width is greater than height, the game will be in landscape. · Landscape games will automatically flip if the user turns the phone 180°; no code required. · Default landscape is top = left. In other words a user holding a phone who starts a landscape game will see the first image presented so that the “top” of the screen is along the right edge of his/her phone, such that the natural behavior would be to turn the phone 90° so that the top of the phone will be held in the user’s left hand and the bottom would be held in the user’s right hand. · The status bar (where the clock, battery power, etc., are found) is hidden when the Game-derived class sets GraphicsDeviceManager.IsFullScreen = true. It is shown when IsFullScreen = false. The default value is false (i.e. the status bar is shown). · You should have a good reason for hiding the status bar. Users find it helpful to know what time it is, how much charge their battery has left, and whether or not their phone is in service range. This is especially true for casual games that you expect someone to play for a few minutes at a time, e.g. while waiting for some event to start, for a phone call to come in, or for a train, bus, or subway to arrive. · In portrait mode, the status bar occupies 32 pixels of space. This means that a game with a back buffer of 480x800 will be scaled down to occupy approximately 461x768 screen pixels. Setting the back buffer to 480x768 (or some resolution with the same 0.625 aspect ratio) will avoid this scaling. · In landscape mode, the status bar occupies 72 pixels of space. This means that a game with a back buffer of 800x480 will be scaled down to occupy approximately 728x437 screen pixels. Setting the back buffer to 728x480 (or some resolution with the same 1.51666667 aspect ratio) will avoid this scaling. Input · Touch input is scaled with screen size. · So if your back buffer is 600x360, a tap in the bottom right corner will come in as (599,359). You don’t need to do anything special to get this automatic scaling of touch behavior. · If you do not use full area of the screen, any touch input outside the area you use will still register as a touch input. For example, if you set a portrait resolution of 240x240, it would be scaled up to occupy a 480x480 area, centered in the screen. 
If you touch anywhere above this area, you will get a touch input of (X,0) where X is a number from 0 to 239 (in accordance with your 240 pixel wide back buffer). Any touch below this area will give a touch input of (X,239). · If you keep the status bar visible, touches within its area will not be passed to your game. · In general, a screen measurement is the diagonal. So a 3.5” screen is 3.5” long from the bottom right corner to the top left corner. With an aspect ratio of 0.6 (480/800 = 0.6), this means that a phone with a 3.5” screen is only approximately 1.8” wide by 3” tall. So there are approximately 267 pixels in an inch on a 3.5” screen. · Again, this time in metric! 3.5 inches is approximately 8.89 cm. So an 8.89 cm screen is 8.89 cm long from the bottom right corner to the top left corner. With an aspect ratio of 0.6, this means that a phone with an 8.89 cm screen is only approximately 4.57 cm wide by 7.62 cm tall. So there are approximately 105 pixels in a centimeter on an 8.89 cm screen. · Think about the size of your finger tip. If you do not have large hands, think about the size of the fingertip of someone with large hands. Consider that when you are sizing your touch input. Especially consider that when you are spacing two touch targets near one another. You need to judge it for yourself, but items that are next to each other and are each 100x100 should be fine when it comes to selecting items individually. Smaller targets than that are ok provided that you leave space between them. · You want your users to have a pleasant experience. Making touch controls too small or too close to one another will make them nervous about whether they will touch the right target. Take this into account when you plan out your game initially. If possible, do some quick size mockups on an actual phone using colored rectangles that you position and size where you plan to have your game controls. Adjust as necessary. · People do not have transparent hands! Nor are their hands the size of a mouse pointer icon. Consider leaving a dedicated space for input rather than forcing the user to cover up to one-third of the screen with a finger just to play the game. · Another benefit of designing your controls to use a dedicated area is that you’re less likely to have players moving their finger(s) so frantically that they accidentally hit the back button, start button, or search button (many phones have one or more of these on the screen itself – it’s easy to hit one by accident and really annoying if you hit, e.g., the search button and then quickly tap back only to find out that the game didn’t save your progress such that you just wasted all the time you spent playing). · People do not like doing somersaults in order to move something forward with accelerometer-based controls. Test your accelerometer-based controls extensively and get a lot of feedback. Very well-known games from noted publishers have created really bad accelerometer controls and been virtually unplayable as a result. Also be wary of exceptions and other possible failures that the documentation warns about. · When done properly, the accelerometer can add a nice touch to your game (see, e.g. ilomilo where the accelerometer was used to move the background; it added a nice touch without frustrating the user; I also think CarniVale does direct accelerometer controls very well). However, if done poorly, it will make your game an abomination unto the Marketplace. 
Days, weeks, perhaps even months of development time that you will never get back. I won’t name names; you can search the marketplace for games with terrible reviews and you’ll find them. Graphics · The maximum frame rate is 30 frames per second. This was set as a compromise between battery life and quality. · At least one model of phone is known to have a screen refresh rate that is between 59 and 60 hertz. Because of this, using a fixed time step with a target frame rate of 30 will cause a slight internal delay to build up as the framework is forced to wait slightly for the next refresh. Eventually the delay will get to the point where a draw is skipped in order to recover from the delay. (See Nick's comment below for clarification.) · To deal with that delay, you can either stay with a fixed time step and set the frame rate slightly lower or else you can go to a variable time step and make sure to adjust all of your update data (e.g. player movement distance) to take into account the elapsed time from the last update. A variable time step makes your update logic slightly more complicated but will avoid frame skips entirely. · Currently there are no custom shaders. This might change in the future (there is no hardware limitation preventing it; it simply wasn’t a feature that could be implemented in the time available before launch). · There are five built-in shaders. You can create a lot of nice effects with the built-in shaders. · There is more power on the CPU than there is on the GPU so things you might typically off-load to the GPU will instead make sense to do on the CPU side. · This is a phone. It is not a PC. It is not an Xbox 360. The emulator runs on a PC and uses the full power of your PC. It is very good for testing your code for bugs and doing early prototyping and layout. You should not use it to measure performance. Use actual phone hardware instead. · There are many phone models, each of which has slightly different performance levels for I/O, screen blitting, CPU performance, etc. Do not take your game right to the performance limit on your phone since for some other phones you might be crossing their limits and leaving players with a bad experience. Leave a cushion to account for hardware differences. · Smaller screened phones will have slightly more dots per inch (dpi). Larger screened phones will have slightly less. Either way, the dpi will be much higher than the typical 96 found on most computer screens. Make sure that whoever is doing art for your game takes this into account. · Screens are only required to have 16 bit color (65,536 colors). This is common among smart phones. Using gradients on a 16 bit display can produce an ugly artifact known as banding. Banding is when, rather than a smooth transition from one color to another, you instead see distinct lines. Be careful to avoid this when possible. Banding can be avoided through careful art creation. Its effects can be minimized and even unnoticeable when the texture in question is always moving. You should be careful not to rely on “looks good on my phone” since some phones do have 32-bit displays and thus you’ll find yourself wondering why you’re getting bad reviews that complain about the graphics. Avoid gradients; if you can’t, make sure they are 16-bit safe. Audio · Never rely on sounds as your sole signal to the player that something is happening in the game. They might have the sound off. They might be playing somewhere loud. Etc. · You have to provide controls to disable sound & music. 
These should be separate. · On at least one model of phone, the volume control API currently has no effect. Players can adjust sound with their hardware volume buttons, but in game selectors simply won’t work. As such, it may not be worth the effort of providing anything beyond on/off switches for sound and music. · MediaPlayer.GameHasControl will return true when a game is hooked up to a PC running Zune. When Zune is running, any attempts to do anything (beyond check GameHasControl) with MediaPlayer will cause an exception to be thrown. If this exception is thrown, catch it and disable music. Exceptions take time to propagate; you don’t want one popping up in every single run of your game’s Update method. · Remember that players can already be listening to music or using the FM radio. In this case GameHasControl will be false and you should handle this appropriately. You can, alternately, ask the player for permission to stop their current music and play your music instead, but the (current) requirement that you restore their music when done is very hard (if not impossible) to deal with. · You can still play sound effects even when the game doesn’t have control of the music, but don’t think this is a backdoor to playing music. Your game will fail certification if your “sound effect” seems to be more like music in scope and length.
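    For reference, a minimal sketch of two of the tips above (sizing the back buffer so a visible status bar doesn't force scaling, and switching to a variable time step with time-scaled movement), assuming a standard XNA 4.0 Game class; the class and field names are made up:

        using Microsoft.Xna.Framework;

        public class SampleGame : Game
        {
            GraphicsDeviceManager graphics;
            Vector2 playerPosition;
            Vector2 playerVelocity = new Vector2(100f, 0f);   // pixels per second

            public SampleGame()
            {
                graphics = new GraphicsDeviceManager(this);
                graphics.PreferredBackBufferWidth = 480;      // portrait with the status bar visible:
                graphics.PreferredBackBufferHeight = 768;     // 480x768 avoids the hardware scaler
                graphics.IsFullScreen = false;                // keep the status bar shown
                IsFixedTimeStep = false;                      // variable time step, no skipped draws
            }

            protected override void Update(GameTime gameTime)
            {
                float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                playerPosition += playerVelocity * elapsed;   // scale movement by elapsed time
                base.Update(gameTime);
            }
        }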

    Read the article

  • Camera doesn't move

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but I couldn't make it move. #define PI_OVER_180 0.0174532925f #define GL_CLAMP_TO_EDGE 0x812F #include "metinalifeyyaz.h" #include <GL/glu.h> #include <GL/glut.h> #include <QTimer> #include <cmath> #include <QKeyEvent> #include <QWidget> #include <QDebug> metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 
0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near 
goal post front face glVertex3f(5,0.5,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }
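    A note on the code above: paintGL() calls glLoadIdentity() a second time right after applying the camera rotation and translation, which appears to discard the camera transform entirely; that would explain why nothing moves. A rough sketch of a paintGL() that keeps the camera transform, assuming the rest of the class is unchanged:

        void metinalifeyyaz::paintGL() {
            movePlayer();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();                                  // reset once, not twice
            glRotatef(lookupdown, 1.0f, 0.0f, 0.0f);           // look up/down
            glRotatef(360.0f - yrot, 0.0f, 1.0f, 0.0f);        // turn left/right
            glTranslatef(-xpos, -walkbias - 0.5f, -zpos);      // move to the player position
            glTranslatef(0.0f, 0.0f, -18.0f);                  // keep the original scene offset
            drawScene();
            // restart the timer exactly as in the original paintGL()
        }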

    Read the article

  • Silverlight SOS (Son of Strike) documentation

    - by Kris Erickson
    Is there any Microsoft or even non-official documentation for SOS for Silverlight? Other than a few web posts I have seen zero documentation for it on MSDN. Even official documentation for the CLR version of SOS seems hard to find; this ancient article mentions a sos.htm file included in the Windows SDK, but it doesn't appear to be there any more. Any pointers on debugging Silverlight with SOS? I have found the following blog posts but am looking for more information: http://davybrion.com/blog/2009/08/finding-memory-leaks-in-silverlight-with-windbg/ http://www.ningzhang.org/2008/12/19/silverlight-debugging-with-windbg-and-sos/ http://debuggingblog.com/wp/2009/07/07/windbg-extension-sos-in-clr-40net-framework-40-ctp-net-runtime-dll-renamed-and-sos-commands-just-got-richer/ http://www.netfxharmonics.com/label/debugging http://blogs.msdn.com/b/tess/archive/2008/08/21/debugging-silverlight-applications-with-windbg-and-sos-dll.aspx http://blogs.msdn.com/b/delay/archive/2009/03/11/where-s-your-leak-at-using-windbg-sos-and-gcroot-to-diagnose-a-net-memory-leak.aspx http://blogs.msdn.com/b/delay/archive/2009/03/09/controls-are-like-diapers-you-don-t-want-a-leaky-one-implementing-the-weakevent-pattern-on-silverlight-with-the-weakeventlistener-class.aspx

    Read the article

  • Spring: PropertyPlaceholderConfigurer to set values for non-string/integer properties

    - by babyangel86
    Hi, all the examples I have seen where the PropertyPlaceholderConfigurer is used seem to set simple values like Strings and ints. How do you use the PPC to set the values of class-typed properties? E.g. if I had a class with the signature Source(String name, DistributionSample batch, DistributionSample delay), how would I go about setting the batch and delay properties? There is also a small catch: DistributionSample is an abstract class. On the bright side, the class that is using the property placeholder knows the bean name of the "solid" (concrete) class that needs to be instantiated. Any help would be much appreciated.
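    Since the bean name of the concrete DistributionSample is known, one approach (a sketch; the property keys, file name and class names are invented, and it assumes your Spring version resolves placeholders inside ref attributes) is to keep that bean name in the properties file and reference it:

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="location" value="classpath:simulation.properties"/>
        </bean>

        <!-- simulation.properties might contain, for example:
             source.name=generator1
             batch.bean=normalDistribution
             delay.bean=exponentialDistribution          -->
        <bean id="source" class="com.example.Source">
            <constructor-arg value="${source.name}"/>
            <constructor-arg><ref bean="${batch.bean}"/></constructor-arg>
            <constructor-arg><ref bean="${delay.bean}"/></constructor-arg>
        </bean>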

    Read the article

  • Detect HTTPHandler Start

    - by Joe Coder Guy
    Is there a way to detect if an HttpHandler has started transmitting? I'm trying to do large dynamic Excel exports (in HTML table format). I can do this, but there's a long delay between the HttpHandler receiving the request and the download starting. I have turned off the output buffer, so the delay seems to be the wait for the SQL server to dump the data into the dataset. Anyway, I'd like to send the user to a new page, have the page display a message, and automatically close once the HttpHandler has started sending the file. Is there a way to detect when the first file headers have been sent? Many thanks in advance!
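    One commonly suggested workaround (a sketch, not the only option): have the handler stamp a cookie and flush the response as soon as it starts streaming, then let the waiting page poll document.cookie for that token and close itself. The handler below is hypothetical:

        using System.Web;

        // Hypothetical handler: stamps a cookie and flushes headers as soon as
        // streaming begins, so a waiting page can detect the start of the download.
        public class ExcelExportHandler : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                string token = context.Request.QueryString["downloadToken"]; // supplied by the waiting page
                context.Response.BufferOutput = false;
                context.Response.ContentType = "application/vnd.ms-excel";
                context.Response.AppendCookie(new HttpCookie("downloadToken", token));
                context.Response.Flush();   // the first headers (and the cookie) go out here

                // ... query SQL Server and stream the HTML table ...
            }
        }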

    Read the article

  • Workflow foundation 4.0 message correlation and error reporting

    - by Lygpt
    I have a workflow service that runs and performs a number of different operations (such as web service calls). If one of these operations fails, I call an error-reporting web service to notify a separate system that one of my workflow operations has failed. As the error could be something like the web service being down, I loop and retry this operation until it works. There can be times, though, when the data I'm passing to this web service is faulty and needs changing. So I need to be able to hook into this running (but delayed) workflow, change local workflow variables, and then re-run the operation. I've looked at message correlation in Workflow Foundation 4.0 to achieve this, but because the delay activity is active in my running workflow instance, any second service call doesn't do anything (it's as if the delay activity is blocking any other requests). I've tried setting 'CanCreateInstance' to both true and false but it doesn't help. Thanks!

    Read the article

  • jQuery: clear/reset .queue()?

    - by Wes
    I have the following code that triggers two functions whenever a button is clicked. However, it only fires the functions once. If I click it again, it does nothing. I need to reset this queue so it fires the functions every time I click the button. Also, I'm only doing this so I can delay the functions from being fired by 1000 ms - is there another way to do this? $('#play').click(function() { // other code.. $(this).delay(1000).queue(function(){ countHourly(); countFlights() }); });
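    A likely reason the queue fires only once is that the queued function never advances the fx queue, so later items stall behind it. A sketch of both options - letting the queue advance with next(), or skipping the effects queue entirely with setTimeout:

        // Option 1: advance the fx queue so subsequent clicks run too
        $('#play').click(function () {
            // other code..
            $(this).delay(1000).queue(function (next) {
                countHourly();
                countFlights();
                next();                    // let the queue move on
            });
        });

        // Option 2: a plain delay without touching the queue at all
        $('#play').click(function () {
            setTimeout(function () {
                countHourly();
                countFlights();
            }, 1000);
        });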

    Read the article

  • Only perform jQuery effects/operations on certain pages

    - by Galen
    Up until now I've been dropping all my jQuery code right inside the document.ready function. I'm thinking that for certain situations this isn't the best way to go. For example, if I want an animation to run when a certain page loads, what is the best way to go about that? $(document).ready(function() { $("#element_1").fadeIn(); $("#element_2").delay('100').fadeIn(); $("#element_3").delay('200').fadeIn(); }); If this is right inside of document.ready then every time ANY page loads it's going to check each line and look for that element. What is the best way to tell jQuery to only run a chunk of code on a certain page to avoid this issue?
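    A common pattern (a sketch) is to guard page-specific code with a check for the markup it needs, or to key off a class the server places on the body element:

        $(document).ready(function () {
            if ($('#element_1').length) {             // this page has the animation markup
                $("#element_1").fadeIn();
                $("#element_2").delay(100).fadeIn();
                $("#element_3").delay(200).fadeIn();
            }

            if ($('body').hasClass('home')) {         // or: a per-page hook via a body class
                // home-page-only code here
            }
        });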

    Read the article

  • Phonegap: Slow response with vibrate notification

    - by jjei
    I created a simple Android application with jQuery and PhoneGap. When testing the app on a phone, I noticed that the vibration effect, which I use to indicate that the user has touched a button, comes after a delay of maybe 0.5 seconds. This delay is far too long and just confuses the user. Is this just a downside of using PhoneGap? Or is there any configuration or additional framework that could be used to make the app respond and produce the vibration more quickly? I installed the vibration plugin like this: phonegap local plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-vibration.git I use the code below to create the vibration effect. navigator.notification.vibrate(200); My PhoneGap version is 3.0.0-0.14.3
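    For what it's worth, a large part of that delay is likely the roughly 300 ms the WebView adds to click events while it waits for a possible double-tap. A sketch of firing the vibration on touchstart instead (the button id is a made-up example):

        // Vibrate on touchstart rather than click to avoid the WebView's tap delay
        $('#myButton').on('touchstart', function () {
            navigator.notification.vibrate(200);
        });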

    Read the article

  • Reused UIWebView showing previous loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view, I reload cached data into the webview using the method - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL and I wait for the callback - (void) webViewDidFinishLoad:(UIWebView *)webView. In the meantime I hide the webview and show a 'loading' label. Only when I receive webViewDidFinishLoad do I show the webview. Many times I see the previous data that was loaded into the webview for a brief second before the new data kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and allocating it every time?
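    One approach that is often suggested (a sketch; you may need to ignore the webViewDidFinishLoad: callback that fires for the blank load) is to blank the webview before handing it the new data, so the stale page cannot flash while the new content is parsed:

        [self.webView loadHTMLString:@"" baseURL:nil];    // clear the previous content
        self.webView.hidden = YES;                         // keep showing the 'loading' label
        [self.webView loadData:cachedData
                      MIMEType:@"text/html"
              textEncodingName:@"utf-8"
                       baseURL:baseURL];
        // un-hide the webview in webViewDidFinishLoad: as before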

    Read the article

  • Java: updating a text area

    - by n0ob
    For my application I have to build a little customized time ticker which ticks over after whatever delay I tell it to and writes the new value into my text area. The problem is that the ticker runs all the way to the termination time and only then prints all the values. How can I make the text area change while the code is running? while(tick<terminationTime){ if ((System.currentTimeMillis()) > (msNow + delay)){ msNow = System.currentTimeMillis(); tick = tick + 1; currentTime.setText(""+tick); sourceTextArea.append(""+tick+" " + System.currentTimeMillis() +" \n"); } } currentTime and sourceTextArea are both text areas and both are only getting updated after the while loop ends.
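    The values appear only at the end because the while loop runs on the Swing event thread and blocks repaints until it finishes. A sketch of the usual alternative, a javax.swing.Timer; it assumes tick, delay, terminationTime, currentTime and sourceTextArea are fields of the enclosing class:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.Timer;

        // The timer fires on the event thread between repaints, so the text
        // areas update live instead of all at once when the loop ends.
        Timer ticker = new Timer((int) delay, new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                tick = tick + 1;
                currentTime.setText("" + tick);
                sourceTextArea.append(tick + "  " + System.currentTimeMillis() + " \n");
                if (tick >= terminationTime) {
                    ((Timer) e.getSource()).stop();
                }
            }
        });
        ticker.start();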

    Read the article

  • Ajax doesn't trigger a change event on a WebKit-based browser

    - by user319464
    I have adapted a Jquery plugin to for-fill my needs to send GET requests to servers as a way of "pinging" them. I've also added some javascript code to add some fancy features like: depending on the changed value in a that the Jquery plugin changes, it changes the Icon accordingly. To make it all work essentially, I made so that when Ajax gets a "complete" event, it forces a "onChange" event to the span, triggering the javascript validation function to change the status icons. Here is the code of my slightly modified jQuery Plugin: /** * ping for jQuery * * Adapted by Carroarmato0 (to actually work instead of randomly "pinging" nowhere instead of faking * * @auth Jessica * @link http://www.skiyo.cn/demo/jquery.ping/ * */ (function($) { $.fn.ping = function(options) { var opts = $.extend({}, $.fn.ping.defaults, options); return this.each(function() { var ping, requestTime, responseTime ; var target = $(this); var server = target.html(); target.html('<img src="img/loading.gif" alt="loading" />'); function ping() { $.ajax({url: 'http://' + server, type: 'GET', dataType: 'html', timeout: 30000, beforeSend : function() { requestTime = new Date().getTime(); }, complete : function() { responseTime = new Date().getTime(); ping = Math.abs(requestTime - responseTime); if (ping > 2000) { target.text('niet bereikbaar'); } else { target.text(ping + opts.unit); } target.change(); } }); } ping(); opts.interval != 0 && setInterval(ping,opts.interval * 1000); }); }; $.fn.ping.defaults = { interval: 3, unit: 'ms' }; })(jQuery); target.change(); is the code that triggers the "onchange" event in the span: echo " <td class=\"center\"><span id=\"ping$pingNb\" onChange=\"checkServerIcon(this)\" >" .$server['IP'] . "</span></td>"; In Firefox this works, checkServerIcon(this) gets executed and passes the span object to the function. function checkServerIcon(object) { var delayText = object.innerHTML; var delay = delayText.substring(0, delayText.length - 2); if ( isInteger(delay) ) { object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/enable_server.png'; } else { if (delay == "bezig.") { object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/search_server.png'; } else { object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/desable_server.png'; } } } My guess would be that there's something different in WebKit browsers in the way object.parentNode.previousSibling.parentNode. .... works...

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking. Several seconds, up to about 10 seconds typically. I find that the app is stuck in the close() method, I'm assuming waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.
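    One way to keep the kernel's dirty cache small, so a cancel takes effect almost immediately, is to open the target with O_SYNC and write block by block. A sketch; the paths and the cancel_requested() check are made up, and synchronous writes trade throughput for control:

        import os

        BLOCK = 4096
        # O_SYNC makes each write hit the device before returning, so close() has
        # nothing left to flush when the user cancels.
        fd = os.open("/media/usbstick/outfile.bin", os.O_WRONLY | os.O_CREAT | os.O_SYNC)
        try:
            with open("source.bin", "rb") as src:
                while not cancel_requested():        # hypothetical cancel flag
                    block = src.read(BLOCK)
                    if not block:
                        break
                    os.write(fd, block)
        finally:
            os.close(fd)                             # returns quickly: nothing is buffered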

    Read the article

  • Using Unix Process Control Methods in Ruby

    - by John F. Miller
    Ryan Tomayko touched off quite a firestorm with this post about using Unix process control commands. We should be doing more of this. A lot more of this. I'm talking about fork(2), execve(2), pipe(2), socketpair(2), select(2), kill(2), sigaction(2), and so on and so forth. These are our friends. They want so badly just to help us. I have a bit of code (a delayed_job clone for DataMapper) that I think would fit right in with this, but I'm not clear on how to take advantage of the listed commands. Any ideas on how to improve this code? def start say "*** Starting job worker #{@name}" t = Thread.new do loop do delay = Update.work_off(self) break if $exit sleep delay break if $exit end clear_locks end trap('TERM') { terminate_with t } trap('INT') { terminate_with t } trap('USR1') do say "Wakeup Signal Caught" t.run end end

    Read the article

  • Reading Metadata property of GifBitmapDecoder...why is it null?

    - by David
    How can I read the delay, left and top offset data for each frame of a gif? I've gotten this far. Load the Gif var myGif = new GifBitmapDecoder(uri, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad); Get a frame var frame = myGif.Frames[i]; From MSDN: Native Image Format Metadata Queries read (ushort)Metadata.GetQuery("/grctlext/Delay"), (ushort)Metadata.GetQuery("/imgdesc/Left"), (ushort)Metadata.GetQuery("/imgdesc/Top") But two things don't work. First the Metadata property of both the gif and the frame are always null, even if I try different animated gif files. Second, the Metadata property of the frame doesn't seem to have a GetQuery method. How do I run these queries, what did I miss? Edit: Here is sample code that gives me null metadata. Using a fresh install of VS2010 Premium, on a fresh WPF application. The image file is the one in the comments. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; namespace WpfApplication1 { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); var uri = new Uri(@"c:\b-414328-animated_gif_.gif"); var myGif = new GifBitmapDecoder(uri, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad); var frame = myGif.Frames[0]; Title = ""; Title += "Global Metadata is null: " + (myGif.Metadata == null).ToString(); Title += "; Frame Metadata is null: " + (frame.Metadata == null).ToString(); // Crash due to null metadata //var frameData = (BitmapMetadata)frame.Metadata; //var rate = (ushort)frameData.GetQuery("/grctlext/Delay"); } } }

    Read the article

  • Add multiple PictureBoxes using ThreadPool

    - by Pmdusso
    Hi! I'm doing a Naval Battle game for university. I decided to do it in C#. My board is 20 x 20 mini (20x20) PictureBoxes. The problem is that when I load the board I get a huge delay while all of them are drawn in the panel which contains them. So I thought of using the ThreadPool in my method to make the PictureBox creation and drawing faster. Is this the correct approach? I'm wondering whether, even if I launch 20 threads to create and set the PictureBoxes together, I will still have the graphic delay. (I won't paste code right now because maybe the answer doesn't depend on it... if not, I'll paste it next :) Sorry for the bad English, thanks folks!
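    For what it's worth, the slowdown is usually layout and repaint churn rather than CPU work, and WinForms controls have to be created and added on the UI thread anyway, so the ThreadPool is unlikely to help. Batching the adds often does; a sketch (panelBoard and the sizes are assumptions):

        panelBoard.SuspendLayout();
        var boxes = new System.Windows.Forms.PictureBox[20 * 20];
        for (int row = 0; row < 20; row++)
        {
            for (int col = 0; col < 20; col++)
            {
                boxes[row * 20 + col] = new System.Windows.Forms.PictureBox
                {
                    Width = 20, Height = 20, Left = col * 20, Top = row * 20
                };
            }
        }
        panelBoard.Controls.AddRange(boxes);    // one layout pass instead of 400
        panelBoard.ResumeLayout();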

    Read the article

  • Preferred method for looping sound in Flash AS3

    - by Brian Heylin
    Hi there, I'm having some issues with looping a sound in Flash AS3, in that when I tell the sound to loop I get a slight delay at the end/beginning of the audio. The audio is clipped correctly and will play without a gap in GarageBand. I know that there are issues with sound in general in Flash - bugs with encodings and the inaccuracies of the SOUND_COMPLETE event (and Adobe should be embarrassed by their handling of these issues). I have tried using the built-in loop argument of the play method on the Sound class and also reacting to the SOUND_COMPLETE event, but both cause a delay. Has anyone come up with a technique for looping a sound without any noticeable gap?

    Read the article

  • Problem with jQuery Notify Bar on form submit using PHP or ZF

    - by user1400
    Hi guys, in my application on Zend Framework I use the 'jQuery Notify Bar' plugin to display messages. I'd like to show a message when my form is submitted: the other page opens and the notify bar stays open until the other page has completely loaded, and even a few seconds more. But the problem is that the notify bar shows only for a short moment, and as soon as the other page begins to load the notify bar is closed; the delay property has no effect on that. $('#myForm').submit(function(){ $.notifyBar({ html: "Thank you, your settings were updated!", delay: 20000, animationSpeed: "normal" }); }); How can I show the notify bar on the other page too? Thanks

    Read the article

  • Service Broker message processing order

    - by Blootac
    Everywhere I read says that messages handled by the service broker are processed in the order that they arrive, and yet if you create a table, message type, contract, service etc , and on activation have a stored proc that waits for 2 seconds and inserts the msg into a table, set the max queue readers to 5 or 10, and send 20 odd messages I can see in the table that they are inserted out of order even though when I insert them into the queue and look at the contents of the queue I can see that the messages are all in the right order. Is it due to the delay waitfor waiting for the nearest second and each thread having different subsecond times and then fighting for a lock or something? The reason i've got a delay in there is to simulate delays with joins etc Thanks demo code: --create the table and service broker CREATE TABLE test ( id int identity(1,1), contents varchar(100) ) CREATE MESSAGE TYPE test CREATE CONTRACT mycontract ( test sent by initiator ) GO CREATE PROCEDURE dostuff AS BEGIN DECLARE @msg varchar(100); RECEIVE TOP (1) @msg = message_body FROM myQueue IF @msg IS NOT NULL BEGIN WAITFOR DELAY '00:00:02' INSERT INTO test(contents)values(@msg) END END GO ALTER QUEUE myQueue WITH STATUS = ON, ACTIVATION ( STATUS = ON, PROCEDURE_NAME = dostuff, MAX_QUEUE_READERS = 10, EXECUTE AS SELF ) create service senderService on queue myQueue ( mycontract ) create service receiverService on queue myQueue ( mycontract ) GO --********************************************************** --now insert lots of messages to the queue DECLARE @dialog_handle uniqueidentifier BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>'); BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')
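    A note on the demo above: Service Broker only guarantees ordering within a single conversation, and the demo both opens a new dialog per message and allows up to 10 parallel queue readers, so interleaved inserts are expected. A sketch of sending everything on one conversation instead (alternatively, set MAX_QUEUE_READERS = 1):

        -- Messages sent on one conversation belong to one conversation group, which
        -- is locked and processed by a single activation instance in send order.
        DECLARE @dialog_handle uniqueidentifier;

        BEGIN DIALOG @dialog_handle
            FROM SERVICE senderService
            TO SERVICE 'receiverService'
            ON CONTRACT mycontract;

        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>');
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>');
        -- ...and so on for the remaining messages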

    Read the article

  • ItemsControl that loads items one by one asynchronously.

    - by Vinit Sankhe
    Hey guys, I am creating a custom DataGrid by deriving from the traditional toolkit-based WPF DataGrid. I want the grid to load items one by one asynchronously: as soon as the ItemsSource is changed, i.e. a new collection is set on the ItemsSource property or the bound collection changes due to items that are added, moved or removed (where the notifications come to the data grid when the underlying source implements INotifyCollectionChanged, such as ObservableCollection). This is because, even with a virtualizing stack panel underneath, the DataGrid takes time (a 2-3 second delay) to load the data rows when it has several columns and some are template-based. With the above behavior that delay would "appear" to be reduced, giving the DataGrid the feel that it has the data and is responsive enough to load it. How can I achieve this? Thx Vinit.
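    One way to get that one-by-one feel (a sketch; the type and member names are invented, and it assumes the method lives on the derived grid or another DispatcherObject) is to add rows to the bound ObservableCollection at Background dispatcher priority, so each row renders before the next is queued:

        private void LoadItemsIncrementally(IEnumerable<RowData> rows,
                                            ObservableCollection<RowData> target)
        {
            foreach (var row in rows)
            {
                var item = row;   // capture for the closure
                Dispatcher.BeginInvoke(
                    DispatcherPriority.Background,     // runs after pending render work
                    new Action(() => target.Add(item)));
            }
        }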

    Read the article

  • How to build a video broadcaster that can handle more than 20,000 viewers

    - by Gaurav Srivastava
    I want to broadcast video from a webcam over the Internet. The problem is that the video will be viewed live by more than 20,000 people (expected). I have very little experience with Red5 broadcasting. I did some broadcasting using Red5 and Flash. It works fine for 1 or 2 viewers, i.e. it is great for personal chat / video conferencing applications. But when the number of viewers increases, the delay in broadcasting also increases. I am experiencing an additional delay of about 0.5 seconds for every new user who joins the broadcast. Can anyone suggest some better technologies with which I can work out this live broadcasting? I don't want to use http://www.ustream.com; I want to create such a tool of my own. But that's always the last resort.

    Read the article

  • Flash CS, reference root from external class

    - by Lotus
    Hi, I made this class and I put it in the same package as Timeline.as (the Document Class): package { import flash.utils.Timer; import flash.events.TimerEvent; public class Counter2 extends Timer { public function Counter2(delay:Number, repeatCount:int=0) { super(delay, repeatCount); super.addEventListener(TimerEvent.TIMER, timerHandler); } public override function start():void { super.start(); } public override function stop():void { super.stop(); } public function timerHandler(evt:TimerEvent) { trace(evt.target.currentCount); } } } This class is instantiated in the Timeline.as constructor. Is there any way to reference Timeline (root) from this class? And, if so, how? Thanks!

    Read the article

  • Get the Lua command when a C function is called

    - by gamernb
    Suppose I register many different function names in Lua to the same function in C. Now, every time my C function is called, is there a way to determine which function name was invoked? For example: int runCommand(lua_State *lua) { const char *name = // getFunctionName(lua) ? how would I do this part for(int i = 0; i < functions.size; i++) if(strcmp(functions[i].name, name) == 0) functions[i].Call(); } int main() { ... lua_register(lua, "delay", runCommand); lua_register(lua, "execute", runCommand); lua_register(lua, "loadPlugin", runCommand); lua_register(lua, "loadModule", runCommand); lua_register(lua, "delay", runCommand); } So, how do I get the name of whatever function called it?
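    One reliable way to do this is to register each name as a C closure that carries its own name as an upvalue, then read it back inside the shared function. A sketch for Lua 5.1:

        #include <lua.h>
        #include <lauxlib.h>
        #include <string.h>

        static int runCommand(lua_State *L)
        {
            /* the name this alias was registered under */
            const char *name = lua_tostring(L, lua_upvalueindex(1));
            /* dispatch on name, e.g. if (strcmp(name, "delay") == 0) ... */
            return 0;
        }

        static void registerCommand(lua_State *L, const char *name)
        {
            lua_pushstring(L, name);              /* upvalue #1: the registered name */
            lua_pushcclosure(L, runCommand, 1);
            lua_setglobal(L, name);
        }

        /* usage: registerCommand(L, "delay"); registerCommand(L, "execute"); ... */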

    Read the article

  • Thread local storage with __declspec(thread) fails in C++/CLI

    - by EFrank
    I'm working on a project where we mix .NET code and native C++ code via a C++/CLI layer. In this solution I want to use thread local storage via the __declspec(thread) declaration: __declspec(thread) int lastId = 0; However, at the first access of the variable I get a NullReferenceException. To be more precise, the declaration is done within a ref class (a .NET class implemented in C++/CLI). I have already read that __declspec(thread) does not work with delay-loaded DLLs. Am I using delay-loaded DLLs automatically if I use .NET?
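    For reference, one managed alternative (a sketch) is per-thread storage on a static field of a ref class via ThreadStaticAttribute, which sidesteps __declspec(thread) and the delay-load limitation mentioned above:

        using namespace System;

        public ref class IdTracker
        {
            [ThreadStatic]
            static int lastId;            // each managed thread sees its own copy

        public:
            static int Next()
            {
                return ++lastId;          // note: a field initializer would run on one thread only
            }
        };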

    Read the article

  • How to stream semi-live audio over the internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or the like, no slow connections over phone and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like to get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second of delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission. What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I plan to implement this first on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article
