Search Results

Search found 1471 results on 59 pages for 'surface rt'.

Page 26/59 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Enjoy a Dazzling Desktop with the Brazil Theme for Windows 7

    - by Asian Angel
    Do you love a combination of nature and night-time city photography for your desktop? Then you will definitely want to download a copy of the Brazil Theme for Windows 7. The theme comes with six images featuring the colorful and unique beauty of Brazil. Download the Brazil Theme for Windows 7 [Windows 7 Personalization Gallery]

    Read the article

  • Windows 8: how much do the different editions and upgrades cost, and what are the hardware requirements?

    Windows 8: how much do the different editions and the upgrades cost? What are the hardware requirements? Windows 8 - how much does it cost? The proverbial noncommittal answer: it depends. The editions: more seriously, Windows 8 (excluding the RT edition, which cannot be bought on its own anyway) is sold in two forms: the upgrade, and the full boxed copy. Each comes in two flavors for the general public (developers included): the "regular" edition and the "Pro" edition. To these is added an "Enterprise" edition, available only through volume licensing. Like any operating system, ea...

    Read the article

  • Ray-box Intersection Theory

    - by Myx
    Hello: I wish to determine the intersection point between a ray and a box. The box is defined by its min and max 3D coordinates, and the ray is defined by its origin and the direction in which it points. Currently, I form a plane for each face of the box and intersect the ray with each plane. If the ray intersects a plane, I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray and return the closest one. The way I check whether the plane-intersection point is on the box surface itself is through the function:

        bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2)
        {
            double min_x = min(corner1.X(), corner2.X());
            double max_x = max(corner1.X(), corner2.X());
            double min_y = min(corner1.Y(), corner2.Y());
            double max_y = max(corner1.Y(), corner2.Y());
            double min_z = min(corner1.Z(), corner2.Z());
            double max_z = max(corner1.Z(), corner2.Z());

            if(point.X() >= min_x && point.X() <= max_x &&
               point.Y() >= min_y && point.Y() <= max_y &&
               point.Z() >= min_z && point.Z() <= max_z)
                return true;

            return false;
        }

    where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner. My implementation works most of the time, but sometimes it gives me the wrong intersection. I was wondering whether the way I check that the intersection point is on the box is correct, or if I should use some other algorithm. Thanks.
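    A more robust alternative worth knowing here is the classic slab method, which clips the ray against the three pairs of axis-aligned planes directly and never produces a candidate point that must then be re-tested against face bounds (floating-point error in that re-test is a common source of exactly these occasional wrong hits). A sketch, using plain double[3] in place of R3Point, so adapt the accessors as needed:

        #include <algorithm>
        #include <limits>

        // Slab method: intersect the ray org + t*dir, t >= 0, with the box [bmin, bmax].
        // Division by a zero direction component yields IEEE infinities, which the
        // min/max logic handles (the usual 0 * inf caveat applies when the origin
        // lies exactly on a slab plane).
        bool RayBoxIntersect(const double org[3], const double dir[3],
                             const double bmin[3], const double bmax[3],
                             double& tHit)
        {
            double tNear = 0.0;  // intersections behind the origin don't count
            double tFar  = std::numeric_limits<double>::infinity();
            for (int i = 0; i < 3; ++i)
            {
                double inv = 1.0 / dir[i];
                double t0  = (bmin[i] - org[i]) * inv;
                double t1  = (bmax[i] - org[i]) * inv;
                if (t0 > t1) std::swap(t0, t1);
                tNear = std::max(tNear, t0);     // latest entry across all slabs
                tFar  = std::min(tFar, t1);      // earliest exit across all slabs
                if (tNear > tFar) return false;  // entry after exit: no overlap, miss
            }
            tHit = tNear;  // nearest hit; the point itself is org + tHit * dir
            return true;
        }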

    Read the article

  • SDL doesn't detect Arrow Keys

    - by Scott
    I am working through the SDL tutorials over at http://lazyfoo.net/SDL_tutorials/index.php and I'm stuck on tutorial 8, where I'm working with key presses. I'm using the following code:

        //Our main application loop
        while(!quit)
        {
            if(SDL_PollEvent(&curEvents))
            {
                if(curEvents.type == SDL_QUIT)
                {
                    quit = true;
                }

                //If a key was pressed
                if( curEvents.type == SDL_KEYDOWN )
                {
                    //Set the proper message surface
                    switch( curEvents.key.keysym.sym )
                    {
                        case SDLK_UP:    message = upMessage;    break;
                        case SDLK_DOWN:  message = downMessage;  break;
                        case SDLK_LEFT:  message = leftMessage;  break;
                        case SDLK_RIGHT: message = rightMessage; break;
                        default:
                            message = TTF_RenderText_Solid(font, "Unknown Key", textColor);
                            break;
                    }
                }
            }

            if( message != NULL )
            {
                //Apply the background to the screen
                applySurface( 0, 0, background, screen );

                //Apply the message centered on the screen
                applySurface( ( SCREEN_WIDTH - message->w ) / 2,
                              ( SCREEN_HEIGHT - message->h ) / 2,
                              message, screen );

                //Null the surface pointer
                message = NULL;
            }

            //Update the screen
            if( SDL_Flip( screen ) == -1 )
            {
                return 1;
            }
        }

    This works fine - the default case is reached - for everything BUT the arrow keys, which do nothing. I was wondering if someone could spot what I'm doing wrong.
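    The excerpt never shows how upMessage and friends are created, so one hedged guess: if those pre-rendered surfaces are NULL (say, a failed TTF_RenderText_Solid at startup), the arrow cases silently assign NULL while every other key still hits the default case and renders fresh text. A sketch that rules this out by rendering on demand, and that also drains the whole event queue each frame (a single if() handles at most one event per iteration):

        // Drain every pending event; render the text inside each case so a failed
        // pre-render can't silently leave 'message' NULL. (Freeing the rendered
        // surfaces is omitted here for brevity, as in the original.)
        while (SDL_PollEvent(&curEvents))
        {
            if (curEvents.type == SDL_QUIT)
                quit = true;

            if (curEvents.type == SDL_KEYDOWN)
            {
                switch (curEvents.key.keysym.sym)
                {
                    case SDLK_UP:    message = TTF_RenderText_Solid(font, "Up",    textColor); break;
                    case SDLK_DOWN:  message = TTF_RenderText_Solid(font, "Down",  textColor); break;
                    case SDLK_LEFT:  message = TTF_RenderText_Solid(font, "Left",  textColor); break;
                    case SDLK_RIGHT: message = TTF_RenderText_Solid(font, "Right", textColor); break;
                    default:         message = TTF_RenderText_Solid(font, "Unknown Key", textColor); break;
                }
            }
        }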

    Read the article

  • SDL_image/C++ OpenGL Program: IMG_Load() produces fuzzy images

    - by Kami
    I'm trying to load an image file and use it as a texture for a cube, using SDL_image to do the loading. I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp). The code:

        SDL_Surface* texture;

        //load an image to an SDL surface (i.e. a buffer)
        texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
        if(texture == NULL)
        {
            printf("bad image\n");
            exit(1);
        }

        //create an OpenGL texture object
        glGenTextures(1, &textureObjOpenGLlogo);

        //select the texture object you need
        glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);

        //define the parameters of that texture object
        //how the texture should wrap in the s direction
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        //how the texture should wrap in the t direction
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        //how the texture lookup should be interpolated when the face is smaller than the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        //how the texture lookup should be interpolated when the face is bigger than the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        //send the texture image to the graphics card
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);

        //clean the SDL surface
        SDL_FreeSurface(texture);

    The code compiles without errors or warnings! I've tried all the file formats, but this always produces that ugly result. I'm using SDL_image 1.2.9 and SDL 1.2.14 with Xcode 3.2 under 10.6.2. Does anyone know how to fix this?
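    The upload line is the usual suspect here (an educated guess, not a confirmed diagnosis): a 24-bit BMP loaded by SDL is typically BGR-ordered in memory, and a row pitch that isn't a multiple of 4 shears the image under OpenGL's default unpack alignment; both produce exactly this kind of garbled result. A sketch that matches the client format to what SDL actually loaded:

        // Relax row alignment (the default is 4; 24-bit rows are often not padded).
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        // Pick the client format from the surface instead of hard-coding GL_RGB.
        GLenum fmt;
        if (texture->format->BytesPerPixel == 4)
            fmt = (texture->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
        else
            fmt = (texture->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0,
                     fmt, GL_UNSIGNED_BYTE, texture->pixels);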

    Read the article

  • Windows Programming: ID2D1Bitmap Interface - Getting the Bitmap Data

    - by LostInBrackets
    I've been writing my own library of functions to access some of the new Direct2D Windows libraries. In particular, I've been working with the ID2D1Bitmap interface. I wanted to write a function that returns a pointer to the start of the bitmap data (for editing particular pixels, custom encoding, or whatever else I might wish for in the future). Unfortunately - problem ahead - I can't seem to find a way to get access to the raw pixel data from the ID2D1Bitmap interface. Does anyone have an idea how to access this? One of my friends suggested drawing the bitmap to a surface and extracting the bitmap data from there. I don't know if this would work; it definitely seems inefficient, and I wouldn't know which kind of surface to use. Any help is appreciated. (C++ in particular, but I assume the code won't be too different between languages.) (I know I could just read the data directly from the file, but I'm using the WIC decoders, which means it could be in any number of indecipherable formats.)
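    For what it's worth, this became directly possible in Direct2D 1.1, which postdates this question: ID2D1Bitmap1 adds a Map method, so you can create a CPU-readable staging bitmap, copy the GPU bitmap into it, and read the bytes. A sketch assuming a Direct2D 1.1 ID2D1DeviceContext is available (HRESULT checks omitted):

        #include <d2d1_1.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void ReadPixels(ID2D1DeviceContext* deviceContext, ID2D1Bitmap* sourceBitmap)
        {
            // CPU-readable staging bitmap; CANNOT_DRAW is required alongside CPU_READ.
            D2D1_BITMAP_PROPERTIES1 props = {};
            props.pixelFormat.format    = DXGI_FORMAT_B8G8R8A8_UNORM;
            props.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
            props.bitmapOptions = D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;

            ComPtr<ID2D1Bitmap1> staging;
            deviceContext->CreateBitmap(sourceBitmap->GetPixelSize(), nullptr, 0, &props, &staging);
            staging->CopyFromBitmap(nullptr, sourceBitmap, nullptr);  // GPU bitmap -> staging

            D2D1_MAPPED_RECT mapped = {};
            staging->Map(D2D1_MAP_OPTIONS_READ, &mapped);
            // mapped.bits points at the pixel rows; mapped.pitch is the row stride.
            staging->Unmap();
        }

    On the original (1.0) API, the usual workaround was essentially what the friend suggests: render through a WIC bitmap render target and read the pixels back from the IWICBitmap.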

    Read the article

  • Use Math class to calculate

    - by LC
    Write a Java program that calculates the surface area of a cylinder, A = 2πr² + 2πrh, where r is the radius and h is the height of the cylinder. Allow the user to enter a radius and a height. Format the output to three decimal places. Use the constant PI and the method pow() from the Math class. This is what I've done so far, but it cannot run. Can anyone help me?

        import Java.util Scanner;

        public class Ex5
        {
            public static void main (String[] args)
            {
                Scanner input.new Scanner(System.in);
                double radius, height, area;

                System.out.print("Please enter the radius of the Cylinder");
                radius = input.nextDouble();

                System.out.print("Please enter the height of the Cylinder");
                height = input.nextDouble();

                area = (4*Math.PI*radius)+2*Math.PIMath.POW(radius,2);
                System.out.println("The surface of the cylinder" + area);
            }
        }

    Read the article

  • Designer serialization persistence problem in .NET, Windows Forms

    - by Jules
    ETA: I have a similar, smaller problem here which, I suspect, is related to this problem. I have a class with a readonly property that holds a collection of components (* not quite, see below). At design time, it's possible to select from the components on the design surface to add to the collection. (Think ImageList, but instead of selecting one, you can select as many as you want.) As a test, I inherit from Button and attach my class to it as a property. The persistence problem occurs when I add a component to the collection from the design surface after I have added my button to the form. The best way to demonstrate this is to show you the designer-generated code:

        Private Sub InitializeComponent()
            Dim Provider1 As WindowsApplication1.Provider = New WindowsApplication1.Provider
            Me.MyComponent2 = New WindowsApplication1.MyComponent
            Me.MyComponent1 = New WindowsApplication1.MyComponent
            Me.MyButton1 = New WindowsApplication1.MyButton
            Me.MyComponent3 = New WindowsApplication1.MyComponent
            Me.SuspendLayout()
            '
            'MyButton1
            '
            Me.MyButton1.ProviderCollection.Add(Me.MyButton1.InternalProvider)
            Me.MyButton1.ProviderCollection.Add(Me.MyComponent1.Provider)
            Me.MyButton1.ProviderCollection.Add(Me.MyComponent2.Provider)
            Me.MyButton1.ProviderCollection.Add(Provider1) 'Wrong: should be Me.MyComponent3.Provider
            '
            'Form1
            '
            Me.Controls.Add(Me.MyButton1)
        End Sub

        Friend WithEvents MyComponent1 As WindowsApplication1.MyComponent
        Friend WithEvents MyComponent2 As WindowsApplication1.MyComponent
        Friend WithEvents MyButton1 As WindowsApplication1.MyButton
        Friend WithEvents MyComponent3 As WindowsApplication1.MyComponent
        End Class

    As you can see from the code, the collection is not actually a collection of the components, but a collection of a property, 'Provider', from the components. It looks like the problem occurs because MyComponent3 is created after MyButton. However, in my opinion, this should not make any difference: by the time the serializer comes to add the Provider property of MyComponent3, it has already been created. Note: you may wonder why I'm not using AddRange to persist the collection. The reason is that if I do, the behaviour changes and none of the items persist correctly; the designer will create local fields - like Provider1 - for each item in the collection. However, if I add another collection to the class which holds the actual MyComponents and persist that, then, somehow, the AddRange method persists correctly in ProviderCollection! There seems to be some kind of quantum double-slit experiment going down in CodeDom. How can I solve this problem?

    Read the article

  • Custom Modal Window in WPF?

    - by Dan Bryant
    I have a WPF application where I'd like to create a custom pop-up that has modal behavior. I've been able to hack up a solution using an equivalent to 'DoEvents', but is there a better way to do this? Here is what I have currently:

        private void ShowModalHost(FrameworkElement element)
        {
            //Create new modal host
            var host = new ModalHost(element);

            //Lock out UI with blur
            WindowSurface.Effect = new BlurEffect();
            ModalSurface.IsHitTestVisible = true;

            //Display control in modal surface
            ModalSurface.Children.Add(host);

            //Block until ModalHost is done
            while (ModalSurface.IsHitTestVisible)
            {
                DoEvents();
            }
        }

        private void DoEvents()
        {
            var frame = new DispatcherFrame();
            Dispatcher.BeginInvoke(DispatcherPriority.Background,
                                   new DispatcherOperationCallback(ExitFrame), frame);
            Dispatcher.PushFrame(frame);
        }

        private object ExitFrame(object f)
        {
            ((DispatcherFrame)f).Continue = false;
            return null;
        }

        public void CloseModal()
        {
            //Remove any controls from the modal surface and make UI available again
            ModalSurface.Children.Clear();
            ModalSurface.IsHitTestVisible = false;
            WindowSurface.Effect = null;
        }

    where my ModalHost is a user control designed to host another element with animation and other support.

    Read the article

  • GDI+ Rotated sub-image

    - by Andrew Robinson
    I have a rather large (30MB) image from which I would like to take a small "slice". The slice needs to represent a rotated portion of the original image. The following works, but the corners are empty: it appears that I am taking a rectangular area of the original image, rotating that, and drawing it on an unrotated surface, resulting in the missing corners. What I want is a rotated selection on the original image that is then drawn on an unrotated surface. I know I could first rotate the whole original image to accomplish this, but that seems inefficient given its size. Any suggestions? Thanks.

        public Image SubImage(Image image, int x, int y, int width, int height, float angle)
        {
            var bitmap = new Bitmap(width, height);
            using (Graphics graphics = Graphics.FromImage(bitmap))
            {
                graphics.TranslateTransform(bitmap.Width / 2.0f, bitmap.Height / 2.0f);
                graphics.RotateTransform(angle);
                graphics.TranslateTransform(-bitmap.Width / 2.0f, -bitmap.Height / 2.0f);
                graphics.DrawImage(image, new Rectangle(0, 0, width, height),
                                   x, y, width, height, GraphicsUnit.Pixel);
            }
            return bitmap;
        }
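    A sketch of the inverse approach: leave the big image untouched and give the destination Graphics a transform that maps a rotated source region onto the unrotated output bitmap, so GDI+ only produces the pixels that land in the slice. This is written against the C++ GDI+ class API (the same TranslateTransform/RotateTransform/DrawImage calls exist on System.Drawing.Graphics in C#); the function name and center-based parameters are my own, not from the question:

        #include <windows.h>
        #include <gdiplus.h>
        using namespace Gdiplus;

        // Returns a width x height bitmap containing the region of 'image'
        // centered at (cx, cy), rotated by 'angle' degrees.
        // Sketch only: error handling and GDI+ startup/shutdown omitted.
        Bitmap* SubImageRotated(Image* image, float cx, float cy,
                                int width, int height, float angle)
        {
            Bitmap* bitmap = new Bitmap(width, height);
            Graphics graphics(bitmap);
            graphics.TranslateTransform(width / 2.0f, height / 2.0f); // output center
            graphics.RotateTransform(-angle);      // inverse rotation selects the rotated region
            graphics.TranslateTransform(-cx, -cy); // bring the region's center to the origin
            graphics.DrawImage(image, 0.0f, 0.0f); // clipped to 'bitmap': only the slice is drawn
            return bitmap;
        }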

    Read the article

  • How to calculate the normal of points on a 3D cubic Bézier curve given normals for its start and end points?

    - by Robert
    I'm trying to render a "3D ribbon" using a single 3D cubic Bézier curve to describe it (the width of the ribbon is some constant). The first and last control points have a normal vector associated with them (always perpendicular to the tangents at those points, describing the surface normal of the ribbon there), and I'm trying to smoothly interpolate the normal vector over the course of the curve. For example, given a curve which forms the letter 'C', with the first and last control points both having surface normals pointing upwards, the ribbon should start flat, parallel to the ground, slowly turn, and then end flat again, facing the same way as the first control point. To do this "smoothly", it would have to face outwards half-way through the curve. At the moment (for this case), I've only been able to get all the surfaces facing upwards (and not outwards in the middle), which creates an ugly transition in the middle. It's quite hard to explain; the images I attached (silver faces represent the front, black faces the back) contrast what it currently looks like - all surfaces facing upwards, with a sharp flip in the middle - against what it should look like: a smooth transition, with the surfaces slowly rotating round. All I really need is to be able to calculate this "hybrid normal vector" for any point on the 3D cubic Bézier curve; I can generate the polygons no problem, but I can't work out how to get them to smoothly rotate round as depicted. Completely stuck as to how to proceed!
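    One workable scheme (a sketch of a standard approach, not the only one): blend the two end normals by t, then make the result perpendicular to the curve's tangent by projecting the tangent component out. It degrades when the blended normal becomes (anti)parallel to the tangent; rotation-minimizing frames (e.g. the double-reflection method) are the robust upgrade. The Vec3 helpers here are minimal assumed utility code:

        #include <cmath>

        struct Vec3 { float x, y, z; };
        static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
        static float dot(Vec3 a, Vec3 b)      { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 normalize(Vec3 v)         { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

        // Normal at parameter t on a cubic Bezier P[0..3] with unit end normals n0, n1:
        // blend the end normals, then project out the tangent component so the
        // result stays perpendicular to the curve.
        Vec3 NormalOnCurve(const Vec3 P[4], Vec3 n0, Vec3 n1, float t)
        {
            float u = 1.0f - t;
            Vec3 tangent = normalize(3*u*u*(P[1] - P[0])   // derivative of the cubic
                                   + 6*u*t*(P[2] - P[1])
                                   + 3*t*t*(P[3] - P[2]));
            Vec3 n = normalize(u*n0 + t*n1);               // blended end normals
            n = normalize(n - dot(n, tangent)*tangent);    // re-orthogonalize against the tangent
            return n;
        }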

    Read the article

  • How to avoid mouse move on Touch

    - by VirtualBlackFox
    I have a WPF application that can be used both with a mouse and with touch. I disable all the Windows "enhancements" so that I just get touch events:

        Stylus.IsPressAndHoldEnabled="False"
        Stylus.IsTapFeedbackEnabled="False"
        Stylus.IsTouchFeedbackEnabled="False"
        Stylus.IsFlicksEnabled="False"

    The result is that a tap behaves the way I want, except on two points. First, the small "touch" cursor (the little white star) appears where I tapped and while dragging. It's completely useless: the user's finger is already at that location, so no feedback is required (except my element potentially changing color if it is actionable). Second, elements stay in the "Hover" state after the movement/tap ends. Both are consequences of the fact that while Windows transmits touch events correctly, it still moves the mouse to the last main touch event. I don't want Windows to move the mouse at all when I use touch inside my application. Is there a way to completely avoid that? Notes: Handling the touch events changes nothing. Using SetCursorPos to move the mouse away makes the cursor blink and isn't really user-friendly. Disabling the touch panel as an input device disables all events entirely (and I'd also prefer an application-local solution, not a system-wide one). I don't care if the solution involves COM/PInvoke or is provided in C/C++; I'll translate. If it is necessary to patch/hook some Windows DLLs, so be it; the software will run on a dedicated device anyway. I'm investigating the Surface SDK, but I doubt it'll show a solution: since a Surface is a pure-touch device, there is no risk of bad interaction with the mouse.

    Read the article

  • Blit SDL_Surface onto another SDL_Surface and apply a colorkey

    - by NordCoder
    I want to load an SDL_Surface into an OpenGL texture with padding (so that an NPOT image becomes POT) and apply a color key to the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work. Here's the working snippet of my code. I use a custom color class for the colorkey (range [0-1]):

        // Create an empty surface with the same settings as the original image
        SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
                                                        image->format->BitsPerPixel,
        #if SDL_BYTEORDER == SDL_BIG_ENDIAN
                                                        0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff
        #else
                                                        0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000
        #endif
                                                        );

        // Map RGBA color to pixel format value
        Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
                                                 static_cast<Uint8>(colorKey.R * 255),
                                                 static_cast<Uint8>(colorKey.G * 255),
                                                 static_cast<Uint8>(colorKey.B * 255),
                                                 static_cast<Uint8>(colorKey.A * 255));
        SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);

        // Blit the image onto the padded image
        SDL_BlitSurface(image, NULL, paddedImage, NULL);
        SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);

    Afterwards, I generate an OpenGL texture from paddedImage using code similar to the SDL+OpenGL texture loading code found online (I'll post it if necessary). This code works if I just want the texture with or without padding, so it is likely not the problem. I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to do this properly. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
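    One SDL 1.2 detail worth knowing (offered as a hedged diagnosis): SDL_SetColorKey doesn't modify pixels; it only affects later SDL blits, so glTexImage2D sees the raw pixel data and ignores the key entirely. Also, SDL_BlitSurface alpha-blends when the source carries SDL_SRCALPHA; calling SDL_SetAlpha(image, 0, 255) first makes it a straight pixel copy. A sketch that bakes the key into the alpha channel of the 32-bit padded surface from above, before uploading:

        SDL_SetAlpha(image, 0, 255);  // straight copy, no blending
        SDL_BlitSurface(image, NULL, paddedImage, NULL);

        // Turn every key-colored pixel transparent so OpenGL can use the alpha.
        SDL_LockSurface(paddedImage);
        Uint32* px = static_cast<Uint32*>(paddedImage->pixels);
        const Uint32 amask = paddedImage->format->Amask;
        const int count = (paddedImage->pitch / 4) * paddedImage->h;
        for (int i = 0; i < count; ++i)
        {
            if ((px[i] | amask) == (colorKeyPixelFormat | amask)) // compare RGB only
                px[i] &= ~amask;                                  // zero the alpha bits
        }
        SDL_UnlockSurface(paddedImage);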

    Read the article

  • Designing for the future

    - by Dennis Vroegop
    User interfaces and user experience design is a fast-moving field. It's something that changes pretty quickly: what feels fresh today will look outdated tomorrow. I remember the day I first got a beta version of Windows 95; I felt swept away by the user interface of the OS. It felt so modern! If I look back now, it feels old. Well, it should: the design is 17 years old, which is an eternity in our field.

    Of course, this is not limited to UI. The same goes for many industries. I want you to think back to the cars that amazed you when you were in your teens (if you are in your teens, then this may not apply to you). Didn't they feel like part of the future? Didn't you think that this was the ultimate in design? And aren't those designs hopelessly outdated today (again, depending on your age, it may just be me)?

    Let's review the Win95 design and compare it to Windows 7. There are so many differences, I wouldn't even know where to start explaining them. The general feeling, however, is one of more usability: studies have shown Windows 7 is much easier to understand for new users than the older versions of Windows were. Of course, experienced Windows users didn't like it: people are usually afraid of change and like to stick to what they know. But for new users this was a huge improvement. And that is what UX design is all about: making a product easier to use, with less training required, and making users feel more productive.

    Still, there are areas where this doesn't hold up. There are plenty of examples of designs from the past that are still fresh today. But if you look closely at them, you'll notice some subtle differences. These differences are what keep the designs fresh. A good example is the signs you'll find on the road. They haven't changed much over the years (otherwise people wouldn't recognize them anymore), but they have been changing gradually to reflect changes in traffic.

    The same goes for computer interfaces. With each new product or version of a product, the UI and UX change gradually. Every now and then, however, a bigger change is needed. Just think about the introduction of the Ribbon in Microsoft Office 2007: the whole UI was redesigned. A lot of old users (not in age, but in time spent using older versions) didn't like it one bit, but new or casual users seem to be more efficient using the product, which, of course, is exactly the reason behind the changes.

    I believe a big engine behind the changes in user experience design has been the web. In the old days (i.e. before the explosion of the internet), user interface design in Windows applications was limited to choosing the margins between your battleship-gray buttons. When the web came along, and especially web 2.0, where browsers started to act more and more as application platforms, designers stepped in and made a huge impact. In the browser, they could do whatever they wanted. In the beginning this was limited to the darn blink tag, but gradually people really started to think about UX. Even more so: the design of the UI and the whole experience was taken away from the developers and put into the hands of people who knew what they were doing: UX designers.

    This caused some problems. Everyone who did a web project in the early 2000s must have had the same experience: the designers give you a set of Photoshop files and tell you to translate them to HTML, which, of course, is very hard to do. However, with new tooling and new standards this became much easier. The latest versions of HTML and CSS have taken the responsibility for the design away from the developers and placed it in the capable hands of the designers. And that's where that responsibility belongs; after all, I don't want a designer mucking around in my C# code any more than he or she wants me poking at the site's style definitions.

    This change in responsibilities resulted in good-looking but, more importantly, better-thought-out user interfaces on websites. And when websites became more and more interactive, people started to expect the same sort of look and feel from their desktop applications. But that didn't really happen. Most business applications still have that battleship-gray look and feel. OK, they may use a different color, but we're not talking colors here; we're talking usability.

    Now, you may not be able to read the Dutch captions in my example screenshot, but even if you could, you wouldn't understand what was going on. At least, not when you first see it. You have to scan the screen, read all the labels, see how they relate to the other elements on the screen, and then figure out what they do. If you're an experienced user of this application, however, this might be a good thing: you know what to do, and you get all the information you need in one single screen. But for most applications this isn't the case. A lot of people only use their computer for a limited time a day (a weird concept for me, but it happens) and need it to get something done and then get on with their lives. For them, a user interface experience like the above isn't working. (Disclaimer: I just picked a screenshot; I am not saying this is bad software, but it is an example of about 95% of the Windows applications out there.)

    For the knowledge worker, this isn't a problem. They use one or two systems and know exactly what they need to do to achieve their goal. They don't want any clutter on their screen that distracts them from their task; they just want to be as efficient as possible. When they know the systems, they are very productive. The point is: how long does it take to become productive? And could they be even more productive if the UX were better? Are there things missing that they don't know about? Are there better ways to achieve what they want to achieve? Also: could a system be designed in such a way that it is not only much easier to work with but also less tiring? In the example above you need to switch between the keyboard and mouse a lot, something we now know can be very tiring.

    The goal of most applications (whether client apps or websites, on any kind of device) is to provide information. Information is data that, when given to the right people, at the right time, in the right place, and when it is correct, adds value for that person. (Please remember that definition: I still hear the statement "the information was wrong", which doesn't make sense. Data can be wrong; information cannot.) So if a system provides data, how can we make sure the chance of it becoming information is as high as possible?

    A good example of a well-thought-out system that attempts this is the Zune client. It is a very good application, and I think the UX is much better than that of its main competitor, iTunes. Put the two screenshots side by side and you'll notice the Zune screen has more images but less chrome (chrome being visuals that are not part of the data you want to show, i.e. edges around buttons). The whole thing is text- or image-oriented, where that text or image is part of the information you need. What is important is big; what's less important is smaller. Yet everything you need to know at that point is present, and your attention is drawn immediately to what you're trying to achieve: information about music. You can easily switch between the content on your machine and the content on your Zune player by clicking on the image of the player. And if you didn't know that, you'd find out soon enough: the whole UX is designed in such a way that it invites you to play around. So sooner or later (probably sooner) you'd click on that image and see what it does. In iTunes it's harder to find: the discoverability is a lot lower. For inexperienced people the Zune player feels much more natural than the iTunes player, and they get up to speed a lot faster.

    How does this all work? Why is this UX better? The answer lies in a project from Microsoft with the codename (it seems to be becoming the official name, though) "Metro". Metro is a design language based on certain principles. When they thought about UX, the team took a good long look around them and went out in search of metaphors. And they found them. They noticed that signage in streets, airports, roads, buildings and so on is usually very clear and very precise. These signs give you the information you need and nothing more. They're simple, clearly understood, and fast to read.

    A good example is airport signs. Airports can be intimidating places, especially for the inexperienced traveler. In the early 1990s, Amsterdam Airport Schiphol decided to redesign all its signage to make the traveller feel less disoriented. They developed a set of guidelines for signs and implemented them. Soon, most airports around the world adopted these ideas, and you now see variations of the Dutch signs everywhere on the globe. The signs are text-oriented. Yes, there are icons explaining what it all means for people who can't read or don't understand the language, but the basic sign language is text. It's clear, it's high-contrast, and it's easy to understand. One look at the sign and you know where to go. (The only thing I don't like is the green sign pointing to the emergency exit, but since this is the default style for emergency exits, I understand why they did it.)

    If you look at the Zune UI again, you'll notice the similarities: text-oriented, few or no icons, clear use of fonts, and all the information you need. This design language has a set of principles:

    Clean, light, open and fast
    Content, not chrome
    Soulful and alive

    These are just a couple of the principles; you can read the whole philosophy behind Metro for Windows Phone 7 here. These ideas seem to work. I love my Windows Phone 7. It's easy to use, it's clear, and there's no clutter I don't need. It works for me. And I've noticed it works for a lot of other people as well, especially people who aren't as proficient with computers as I am.

    You see these ideas in a lot of other places. Corning, a manufacturer of glass, has made a video of possible uses of their products: their glimpse into the future. You'll notice that a lot of the UI on the screens looks like what Microsoft is doing with Metro (not coincidentally, Corning is the supplier of the Gorilla Glass display surface on the new SUR40 device, or Surface v2.0 as a lot of people call it). The idea behind this vision is that data should be available everywhere you need it. Systems should be available at all times, and data is presented in a clear and light manner so that you can turn that data into information. You don't need a lot of fancy animations that only distract from the data. You want the data, and you want it fast. Have a look at that truly inspiring video Corning made: this is what I believe the future will look like. Of course, not everything is possible, or even desirable, but it is a nice way to think about the future.

    I feel very strongly about designing applications in such a way that they add value to the user: designing applications that turn data into information, applications that make the user feel happy to use them. So... when are you going to drop the battleship-gray designs?

    Technorati tags: surface, design, windows phone 7, wp7, metro

    Read the article

  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    Things I initialize at the beginning of the program (I realize not all of these pertain to my issue; I just copied and pasted what I had):

        //overall initialization
        //things openGL related I initialize earlier on in the project
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClearDepth( 1.0f );
        glEnable(GL_ALPHA_TEST);
        glEnable( GL_STENCIL_TEST );
        glEnable(GL_DEPTH_TEST);
        glDepthFunc( GL_LEQUAL );
        glEnable(GL_CULL_FACE);
        glFrontFace( GL_CCW );
        glEnable(GL_COLOR_MATERIAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );

    We also initialize our shader programs (I added some shader program function definitions below). This enum list is elsewhere in the code; I figured it would help explain the shader compile/creation function right under it:

        enum eSHADER_ATTRIB_LOCATION
        {
            VERTEX_ATTRIB = 0,
            NORMAL_ATTRIB = 2,
            COLOR_ATTRIB,
            COLOR2_ATTRIB,
            FOG_COORD,
            TEXTURE_COORD_ATTRIB0 = 8,
            TEXTURE_COORD_ATTRIB1,
            TEXTURE_COORD_ATTRIB2,
            TEXTURE_COORD_ATTRIB3,
            TEXTURE_COORD_ATTRIB4,
            TEXTURE_COORD_ATTRIB5,
            TEXTURE_COORD_ATTRIB6,
            TEXTURE_COORD_ATTRIB7
        };

        //if we fail making our shader, leave
        if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3,
                                      VERTEX_ATTRIB, "vVertexPos",
                                      NORMAL_ATTRIB, "vNormal",
                                      TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

        if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3,
                                            VERTEX_ATTRIB, "vVertexPos",
                                            NORMAL_ATTRIB, "vNormal",
                                            TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

    Shader program functions:

        bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... )
        {
            //here are our handles for the openGL shaders
            int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1;

            //get our shader data
            char *vData = 0, *fData = 0;
            int vLength = 0, fLength = 0;
            LoadShaderFile( szVertexShaderName, &vData, &vLength );
            LoadShaderFile( szFragmentShaderName, &fData, &fLength );

            //data
            if( !vData )
                return false;

            //data
            if( !fData )
            {
                delete[] vData;
                return false;
            }

            //create both our shader objects
            iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER );
            iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER );

            //well we got this far so we have dynamic data to clean up
            //load vertex shader
            glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength );
            //load fragment shader
            glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength );

            //we are done with our data, delete it
            delete[] vData;
            delete[] fData;

            //compile them both
            glCompileShader( iGLVertexShaderHandle );

            //get shader status
            int iShaderOk;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLVertexShaderHandle);
                return false;
            }

            glCompileShader( iGLFragmentShaderHandle );
            //get shader status
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //lets check to see if the vertex shader compiled
            int iCompiled = 0;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                //this shader did not compile, leave
                return false;
            }

            //lets check to see if the fragment shader compiled
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //make our new shader program
            m_iShaderProgramHandle = glCreateProgram();
            glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle );
            glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle );
            glLinkProgram( m_iShaderProgramHandle );

            int iLinked = 0;
            glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked );
            if( !iLinked )
            {
                //we didn't link
                return false;
            }

            //NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING
            //start from this parameter
            va_list parseList;
            va_start( parseList, szFragmentShaderName );

            //read in number of variables, if any
            unsigned uiNum = 0;
            uiNum = va_arg( parseList, unsigned );

            //loop through our attribute pairs
            int enumType = 0;
            for( unsigned x = 0; x < uiNum; ++x )
            {
                //specify our attribute locations
                enumType = va_arg( parseList, int );
                char* name = va_arg( parseList, char* );
                glBindAttribLocation( m_iShaderProgramHandle, enumType, name );
            }

            //end our list parsing
            va_end( parseList );

            //relink: we have custom specified our attribute locations
            glLinkProgram( m_iShaderProgramHandle );

            //fill our handles
            InitializeHandles( );

            //everything went great
            return true;
        }

        void CShaderProgram::InitializeHandles( void )
        {
            m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" );
            m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" );
            m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" );
            m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" );

            /////////////////////////////////////////////////////////////////////////////////
            //texture handles
            m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
            if( m_uihDiffuseMap != -1 )
            {
                //store what texture index this handle will be in the shader
                glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
            }

            m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
            if( m_uihNormalMap != -1 )
            {
                //store what texture index this handle will be in the shader
                glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); // (1)+
            }
        }

        void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap )
        {
            glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
            glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
        }

        void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap )
        {
            glActiveTexture( RM_NORMAL+GL_TEXTURE0 ); // (1)+
            glBindTexture( GL_TEXTURE_2D, uihNormalMap );
        }

    My two test shaders. (My math order is correct; it pertains to the matrix ordering in my math library. Once again, I've tested basic rendering, and rendering to the screen works fine.)

    ---------------------------------------- SIMPLE SHADER ----------------------------------------

        //vertex shader
        #version 330
        in vec3 vVertexPos;
        in vec3 vNormal;
        in vec2 vTexCoord;

        uniform mat4 mWorld;      // Model Matrix
        uniform mat4 mView;       // Camera View Matrix
        uniform mat4 mProjection; // Camera Projection Matrix

        out vec2 vTexCoordVary;   // Texture coord to the fragment program
        out vec3 vNormalColor;

        void main( void )
        {
            //pass the texture coordinate
            vTexCoordVary = vTexCoord;
            vNormalColor = vNormal;

            //calculate our model view projection matrix
            mat4 mMVP = (( mWorld * mView ) * mProjection );

            //result our position
            gl_Position = vec4( vVertexPos, 1 ) * mMVP;
        }

        //fragment shader
        #version 330
        in vec2 vTexCoordVary;
        in vec3 vNormalColor;

        uniform sampler2D diffuseMap;
        uniform sampler2D normalMap;

        out vec4 fragColor[2];

        void main( void )
        {
            //CORRECT
            fragColor[0] = texture( normalMap, vTexCoordVary );
            fragColor[1] = vec4( vNormalColor, 1.0 );
        };

    ---------------------------------------- SCREEN SHADER ----------------------------------------

        //vertex shader
        #version 330
        in vec3 vVertexPos; // This is the position of the vertex coming in
        in vec2 vTexCoord;  // This is the texture coordinate....

        out vec2 vTexCoordVary; // Texture coord to the fragment program

        void main( void )
        {
            vTexCoordVary = vTexCoord;

            //set our position
            gl_Position = vec4( vVertexPos.xyz, 1.0f );
        }

        //fragment shader
        #version 330
        in vec2 vTexCoordVary; // Incoming "varying" texture coordinate

        uniform sampler2D diffuseMap; //the tile detail texture
        uniform sampler2D normalMap;  //the normal map from earlier

        out vec4 vTheColorOfThePixel;

        void main( void )
        {
            //CORRECT
            vTheColorOfThePixel = texture( normalMap, vTexCoordVary );
        };

    The CRenderTarget class's main functions:

        //here is my render target's create function
        bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight,
                                    int iInternalFormat, bool bDepthWanted )
        {
            if( uiNumTextures <= 0 )
                return false;

            //generate our variables
            glGenFramebuffers(1, &m_uifboHandle);

            // Initialize FBO
            glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);

            m_uiNumTextures = uiNumTextures;
            if( bDepthWanted )
                m_uiNumTextures += 1;

            m_uiTextureHandle = new unsigned int[uiNumTextures];
            glGenTextures( uiNumTextures, m_uiTextureHandle );

            for( unsigned x = 0; x < uiNumTextures-1; ++x )
            {
                glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]);

                // Reserve space for our 2D render target
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0,
                             GL_RGB, GL_UNSIGNED_BYTE, NULL);
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x,
                                       GL_TEXTURE_2D, m_uiTextureHandle[x], 0);
            }

            //if we need one for depth testing
            if( bDepthWanted )
            {
                /*
                glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT,
                                       GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
                glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT,
                                       GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
                */
                // Must attach texture to framebuffer. Has stencil and depth
                glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8,
                                      TEXTURE_WIDTH, TEXTURE_HEIGHT );
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                          GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                                          GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            }

            glBindFramebuffer(GL_FRAMEBUFFER, 0);

            //everything went fine
            return true;
        }

        void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture,
                                  const bool bBindFrameBuffer )
        {
            if( bBindFrameBuffer )
                glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );

            if( uiWhichTexture < m_uiNumTextures )
                glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc,
                                     m_uiTextureHandle[uiWhichTexture], 0);
        }

        void CRenderTarget::UnBind( void )
        {
            //default our binding
            glBindFramebuffer( GL_FRAMEBUFFER, 0 );
        }

    This is all in a test project, so here's my straightforward rendering function; it does the basic rendering steps. Keep in mind I have already tested my textures, and I have already tested the box being rendered: all basic rendering works fine. It's only when I try to render to a texture and then display it on a render surface that it does not work. I have also tested my render surface; it is bound exactly to screen coordinate space.

        void TestRenderSteps( void )
        {
            //Clear the color and the depth
            glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //bind the shader program
            glUseProgram( testShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() );

            //2) how our stream will be split here ( 4 bytes position, ..ext )
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride();

            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //send the needed information into the shader
            testShader.SetWorldMatrix( boxPosition );
            testShader.SetViewMatrix( Static_Camera.GetView( ) );
            testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) );
            testShader.SetDiffuseMap( iTextureID );
            testShader.SetNormalMap( iTextureID2 );

            GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
            glDrawBuffers(2, buffers);

            //bind to our render target
            //RM_DIFFUSE, RM_NORMAL are enums (0 && 1)
            renderTarget.Bind( RM_DIFFUSE, 1, true );
            renderTarget.Bind( RM_NORMAL, 1, false ); //false because buffer is already bound

            //i clear here just to clear the texture to a default value of white;
            //by doing this i can see whether what i'm rendering to my screen is just drawn
            //to the screen or whether it's my render target defaulted
            glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //i have this box object which i draw
            testBox.Draw();

            //the draw call looks like this
            //my normal rendering works just fine so i know this draw is fine
            // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(),
            //                           m_sides[x].GetPrimitiveCount() * 3,
            //                           GL_UNSIGNED_INT,
            //                           BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()),
            //                           m_sides[x].GetStartVertex( ) );

            //we unbind the target back to default
            renderTarget.UnBind();

            //i stop mapping my vertex format
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride();

            //i go back to default, using no shader program
            glUseProgram( 0 );

            //now that everything is drawn to the textures,
            //lets draw our screen surface and pass it our 2 filled-out textures
            //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD

            //bind the shader program
            glUseProgram( testScreenShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() );

            //2) how our stream will be split here
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride();

            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //pass our 2 filled-out textures (in the shader i'm just using the diffuse;
            //i wanted to see if i was rendering anything before getting into other techniques)
            testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definition in shader program class
            testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) );  //SetNormalMap definition in shader program class

            //DO the draw call drawing our screen rectangle
            glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(),
                                      m_ScreenRect.GetPrimitiveCount() * 3,
                                      GL_UNSIGNED_INT,
                                      BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()),
                                      m_ScreenRect.GetStartVertex( ) );

            //unbind our vertex mapping
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride();

            //default to no shader program
            glUseProgram( 0 );
        }

    Last words:
    1) I can render my box just fine.
    2) I can render my screen rect just fine.
    3) I cannot render my box into a texture and then display it on my screen rect.
    4) This entire project is just a test project I made to try different rendering practices, so excuse any "ugly-ish" unclean code; it was thrown together on the fly while trying new test cases.
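    Two observations on the posted code (offered as likely suspects, not a certain diagnosis). First, sampler uniforms take the raw texture unit index (0, 1, ...), not the GL_TEXTUREi enum, so RM_DIFFUSE+GL_TEXTURE0 stores a huge bogus unit number; GL_TEXTURE0 belongs only in glActiveTexture. Second, glUniform1i writes to the currently bound program, and InitializeHandles runs right after linking without a glUseProgram. A minimal corrected sketch:

        // Sampler uniforms want the unit index; glUniform* affects the bound program.
        glUseProgram( m_iShaderProgramHandle );
        glUniform1i( m_uihDiffuseMap, RM_DIFFUSE ); // unit 0
        glUniform1i( m_uihNormalMap,  RM_NORMAL );  // unit 1
        glUseProgram( 0 );

        // When binding the actual textures (this part the posted code already does right):
        glActiveTexture( GL_TEXTURE0 + RM_DIFFUSE );
        glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
        glActiveTexture( GL_TEXTURE0 + RM_NORMAL );
        glBindTexture( GL_TEXTURE_2D, uihNormalMap );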

    Read the article

  • Run VISTA disk check without reboot

    - by Chau
    I want to perform a surface scan on my hard disks (S-ATA, P-ATA, USB and e-SATA) in Windows Vista. Is it possible to do this without scheduling the scan for the next reboot? It takes a lot of time, and I would like to be able to use the computer during the scan. I can accept that this might not be possible on the Windows partition's disk, but I cannot see why it shouldn't be possible on the other disks.
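    For what it's worth, a sketch of the usual route (behavior varies with the volume's state): chkdsk can run its surface scan online against a non-system volume, so from an elevated command prompt something like

        chkdsk E: /r

    scans E: without a reboot (/r implies /f and reads every sector looking for bad ones). If the volume is in use, chkdsk offers to force a dismount rather than scheduling a boot-time scan; only the Windows volume itself always requires the reboot pass.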

    Read the article

  • Overhead video camera to capture live video to a PC

    - by BrianLy
    Can anyone recommend a manufacturer or starting point for overhead video camera units that can be used to capture a Microsoft Surface application in use? I'd like to stream this from another PC but I'm not sure what the best camera to look at would be. It seems like some of the overhead security cameras would fit the requirements but I don't know if they will interface easily with a PC.

    Read the article

  • What is the maximum size limit for a touchpad?

    - by RCIX
    This is more of a hardware question, but I wanted to know: what's the maximum feasible size at which a touchpad can be made? I am wondering because someone remarked to me the other day that the surface on an iPod Touch is basically a touchpad, so how big can they be?

    Read the article

  • Windows server administration over web browser

    - by Andras Sebestyen
    I wonder if there is software that can control a Windows server from a browser. I know it sounds strange, but since you can script everything on a *nix system, I think it could be a good option. Functions that I am after:

    User management
    Printer installation
    MSI assignment

    I know there are many programmes that include a Windows server, but I would like to do it all from one surface. Has anyone come across such a thing?

    Read the article

  • How can I enable Javascript in Firefox, or what is stopping it from being enabled?

    - by Xavierjazz
    XP SP3, Firefox 3.6.9. Super User and other sites state that I don't have JavaScript enabled. I am also unable to view certain videos on the web, which say that I must have JavaScript enabled. I have searched the net and followed instructions on how to enable it, but still no joy. Can anyone point me to a solution? Thanks. Regards. EDIT: If it is important, here is the site I am trying: http://www.thestar.com/news/torontog20summit/article/922039--siu-reopens-g20-case-after-photos-surface

    Read the article

  • How can I create a Computer icon on Windows 8 Server Beta's desktop?

    - by bernd_k
    For some bad reason, Microsoft decided to remove all the standard desktop icons besides the Recycle Bin. It is easy to create a shortcut to each single disk using the Create Shortcut method, but I would like to have the Computer icon too. I guess there will be increased demand for useful shortcuts on the Windows 8 Server desktop, because the Metro surface 'on the backside of your screen' is just impractical.
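    One long-standing shell trick may still apply here (an assumption for the Windows 8 Server beta; it has worked on earlier Windows versions): create a new folder on the desktop and give it a name ending in the Computer folder's well-known CLSID, and the shell renders it as the Computer icon:

        Computer.{20D04FE0-3AEA-1069-A2D8-08002B30309D}

    Everything before the dot is free text; the GUID suffix is what the shell keys on.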

    Read the article

  • Android camera AVD error 100

    - by Cristian Voina
    I am trying to learn how to make a simple app in Android. As background information, I have only programmed in the C language, no OOP. Currently I am trying to turn on the camera, following the instructions on the Android Developer site but with some minor changes: no button for capturing an image, and no new activity. All I am trying to do is preview the camera. I will post the code I am using, the manifest, and the LogCat output.

    Main Activity:

        package com.example.camera_display;

        import android.app.Activity;
        import android.hardware.Camera;
        import android.os.Bundle;
        import android.view.Menu;
        import android.widget.FrameLayout;
        import android.widget.TextView;

        public class MainActivity extends Activity {
            private Camera mcamera;
            private CameraPreview mCameraPreview;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                mcamera = getCameraInstance();
                mCameraPreview = new CameraPreview(this, mcamera);
                FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview);
                preview.addView(mCameraPreview);
            }

            public static Camera getCameraInstance(){
                Camera c = null;
                try {
                    c = Camera.open(); // attempt to get a Camera instance
                } catch (Exception e){
                    // Camera is not available (in use or does not exist)
                }
                return c; // returns null if camera is unavailable
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }
        }

    CameraPreview class:

        package com.example.camera_display;

        import java.io.IOException;

        import android.content.Context;
        import android.hardware.Camera;
        import android.util.Log;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;

        /** A basic Camera preview class */
        public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
            private static final String TAG = "MyActivity";
            private SurfaceHolder mHolder;
            private Camera mCamera;

            public CameraPreview(Context context, Camera camera) {
                super(context);
                mCamera = camera;

                // Install a SurfaceHolder.Callback so we get notified when the
                // underlying surface is created and destroyed.
                mHolder = getHolder();
                mHolder.addCallback(this);
                // deprecated setting, but required on Android versions prior to 3.0
                mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
            }

            public void surfaceCreated(SurfaceHolder holder) {
                // The Surface has been created, now tell the camera where to draw the preview.
                try {
                    mCamera.setPreviewDisplay(holder);
                    mCamera.startPreview();
                } catch (IOException e) {
                    Log.d(TAG, "Error setting camera preview: " + e.getMessage());
                }
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // empty. Take care of releasing the Camera preview in your activity.
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                // If your preview can change or rotate, take care of those events here.
                // Make sure to stop the preview before resizing or reformatting it.
                if (mHolder.getSurface() == null){
                    // preview surface does not exist
                    return;
                }

                // stop preview before making changes
                try {
                    mCamera.stopPreview();
                } catch (Exception e){
                    // ignore: tried to stop a non-existent preview
                }

                // set preview size and make any resize, rotate or
                // reformatting changes here

                // start preview with new settings
                try {
                    mCamera.setPreviewDisplay(mHolder);
                    mCamera.startPreview();
                } catch (Exception e){
                    Log.d(TAG, "Error starting camera preview: " + e.getMessage());
                }
            }
        }

    Manifest:

        <uses-permission android:name="android.permission.CAMERA"/>
        <uses-feature android:name="android.hardware.camera"/>

    LogCat:

        06-30 15:58:35.075: D/libEGL(1153): loaded /system/lib/egl/libEGL_emulation.so
        06-30 15:58:35.147: D/(1153): HostConnection::get() New Host Connection established 0x2a156060, tid 1153
        06-30 15:58:35.478: D/libEGL(1153): loaded /system/lib/egl/libGLESv1_CM_emulation.so
        06-30 15:58:35.515: D/libEGL(1153): loaded /system/lib/egl/libGLESv2_emulation.so
        06-30 15:58:36.334: W/EGL_emulation(1153): eglSurfaceAttrib not implemented
        06-30 15:58:36.685: D/OpenGLRenderer(1153): Enabling debug mode 0
        06-30 15:58:36.935: D/MyActivity(1153): Error starting camera preview: startPreview failed
        06-30 15:58:36.965: I/Choreographer(1153): Skipped 125 frames! The application may be doing too much work on its main thread.
        06-30 15:58:38.455: W/Camera(1153): Camera server died!
        06-30 15:58:38.455: W/Camera(1153): ICamera died
        06-30 15:58:38.476: E/Camera(1153): Error 100

    So if anyone could tell me what I am doing wrong (and explain why it should be done differently), that would be great :) Thanks!

    Read the article

  • JavaBeans and DSLs

    - by Aaron Digulla
    It's 2009 and we all still hold on to JavaBeans despite all their flaws, mostly because of the tooling support, which we wrote in our own blood. But now we have method chaining and internal DSLs, and some pressure to replace or extend JavaBeans with DSL classes. Does anyone have an implementation of PropertyDescriptor for a DSL (where the getters and setters use the exact same name as the property), and a way to hook that into the Java runtime so I don't need to create them all myself?

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >