Search Results

Search found 2333 results on 94 pages for 'mr pixel'.


  • How to convert Vector Layer coordinates into Map Latitude and Longitude in OpenLayers

    - by Jenny
    I'm pretty confused. I have a point: x = -12669114.702301, y = 5561132.6760608, that I got from drawing a square on a vector layer with the DrawFeature control. The numbers seem...erm...awfully large, but they seem to work, because if I later draw a square with all the same points, it's in the same position, so I figure they have to be right. The problem is when I try to convert this point to latitude and longitude. I'm using: map.getLonLatFromPixel(pointToPixel(points[0])); where points[0] is a geometry Point, and the pointToPixel function takes any point and turns it into a pixel (since getLonLatFromPixel needs a pixel). It does this by simply taking the point's x and making it the pixel's x, and so on. The latitude and longitude I get are on the order of: lat: -54402718463.864, lng: -18771380.353223. This is very clearly wrong. I'm left really confused. I tried projecting this object, using: .transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject()); but I don't really get it and am pretty sure I did it incorrectly anyway. My code is here: http://pastie.org/909644 I'm sort of at a loss. The coordinates seem consistent, because I can reuse them to get the same result...but they seem way larger than any of the examples I'm seeing on the OpenLayers website...
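    A note on those numbers: coordinates of that magnitude are what you would expect if the map is in the EPSG:900913 spherical-mercator projection (meters, not degrees), which is the usual OpenLayers setup over commercial base layers -- an assumption here, since the pastie is not reproduced. Under that assumption, the inverse-mercator math looks like this (a sketch in C for illustration; OpenLayers' transform() does the same job on a Point geometry):

        #include <math.h>

        static const double R  = 6378137.0;              /* spherical-mercator earth radius, meters */
        static const double PI = 3.14159265358979323846;

        void mercatorToLonLat(double x, double y, double *lon, double *lat)
        {
            *lon = (x / R) * 180.0 / PI;
            *lat = (2.0 * atan(exp(y / R)) - PI / 2.0) * 180.0 / PI;
        }

    For x = -12669114.7, y = 5561132.68 this gives roughly lon = -113.8, lat = 44.6 -- sane values, unlike the results of feeding map coordinates to getLonLatFromPixel, which expects screen pixels rather than map units.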

    Read the article

  • PNG composition using GD and PHP

    - by Dominic
    I am trying to take a rectangular PNG and add depth using GD by duplicating the background and moving it down 1 pixel and right 1 pixel. I am trying to preserve a transparent background as well. I am having a bunch of trouble with preserving the transparency. Any help would be greatly appreciated. Thanks!

        $obj = imagecreatefrompng('rectangle.png');
        $depth = 5;
        $obj_width = imagesx($obj);
        $obj_height = imagesy($obj);
        imagesavealpha($obj, true);
        for ($i = 1; $i <= $depth; $i++) {
            $layer = imagecreatefrompng('rectangle.png');
            imagealphablending($layer, false);
            imagesavealpha($layer, true);
            $new_obj = imagecreatetruecolor($obj_width + $i, $obj_height + $i);
            $new_obj_width = imagesx($new_obj);
            $new_obj_height = imagesy($new_obj);
            imagealphablending($new_obj, false);
            imagesavealpha($new_obj, true);
            $trans_color = imagecolorallocatealpha($new_obj, 0, 0, 0, 127);
            imagefill($new_obj, 0, 0, $trans_color);
            imagecopyresampled($new_obj, $layer, $i, $i, 0, 0, $obj_width, $obj_height, $obj_width, $obj_height);
            //imagesavealpha($new_obj, true);
            //imagesavealpha($obj, true);
        }
        header("Content-type: image/png");
        imagepng($new_obj);
        imagedestroy($new_obj);

    Read the article

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen:

    "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into a 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP."

    I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 pixels. Thanks in advance.
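    For what it's worth, here is a minimal sketch in C of the unpacking step described above (after the RLE decoding), expanding each byte into four pixels and flipping rows so the result is BMP-ordered. The MSB-first bit packing and the four-level grayscale palette are assumptions; the instrument manual would be the authority on both:

        #define W 320
        #define H 240

        /* src: 19200 bytes (W*H/4); dst: W*H bytes, one grayscale value per pixel,
           rows flipped so the top display line lands on the last BMP row. */
        void unpackFramebuffer(const unsigned char *src, unsigned char *dst)
        {
            static const unsigned char level[4] = { 0x00, 0x55, 0xAA, 0xFF };
            for (int y = 0; y < H; y++) {
                for (int x = 0; x < W; x++) {
                    int shift = (3 - (x & 3)) * 2;          /* assumed MSB-first packing */
                    int v = (src[(y * W + x) / 4] >> shift) & 0x3;
                    dst[(H - 1 - y) * W + x] = level[v];
                }
            }
        }

    From there, a 320x240 8-bit (or 32-bit) bitmap can be created from dst with whatever imaging API is at hand.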

    Read the article

  • XAML PixelGrid to Prevent Blurry Text

    - by Bodekaer
    Hi, just wanted to share a small Grid I created, which can help prevent blurry text etc., as it adjusts the margin of the Grid to ensure a pixel-perfect position and size of the grid. Works great e.g. inside StackPanels with auto-height Labels/TextBlocks. Here is the code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        namespace Controls
        {
            class PixelGrid : Grid
            {
                protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
                {
                    // POSITION
                    Vector position = VisualTreeHelper.GetOffset(this);
                    double targetX = Math.Round(position.X, MidpointRounding.ToEven);
                    double targetY = Math.Round(position.Y, MidpointRounding.ToEven);
                    double marginLeft = targetX - position.X;
                    double marginTop = targetY - position.Y;

                    // SIZE
                    double targetHeight = Math.Round(sizeInfo.NewSize.Height, MidpointRounding.ToEven);
                    double targetWidth = Math.Round(sizeInfo.NewSize.Width, MidpointRounding.ToEven);
                    double marginBottom = targetHeight - sizeInfo.NewSize.Height;
                    double marginRight = targetWidth - sizeInfo.NewSize.Width;

                    // Adjust margin to ensure pixel width
                    this.Margin = new Thickness(marginLeft, marginTop, marginRight, marginBottom);
                    base.OnRenderSizeChanged(sizeInfo);
                }
            }
        }

    Read the article

  • HLSL - Combining textures

    - by b34r
    Hi all, I'm trying to combine two textures in HLSL - specifically, I want to take the alpha values from a base image, and the color data from an overlay image. My pixel shader for this looks like this:

        float4 PixelShaderFunction(VertexOut input) : COLOR0
        {
            float4 baseColor = tex2D(BaseSampler, input.baseCoords.xy).rgba;
            float4 overlayColor = tex2D(OverlaySampler, input.overlayCoords.xy).rgba;
            float4 color;
            color.r = overlayColor.r;
            color.g = overlayColor.g;
            color.b = overlayColor.b;
            color.a = baseColor.a;
            return color.rgba;
        }

    and my blend state looks like this:

        BlendState bs = new BlendState();
        bs.AlphaSourceBlend = Blend.SourceAlpha;
        bs.AlphaDestinationBlend = Blend.DestinationAlpha;
        bs.ColorSourceBlend = Blend.SourceColor;
        bs.ColorDestinationBlend = Blend.DestinationColor;

    What this leaves me with is a washed-out version of what should be the overlay color. I've tried numerous permutations of the BlendState settings, and played with the pixel shader math quite a bit, but to no avail. Can anyone point me in the right direction? Thanks in advance =)

    Read the article

  • Locking a GDI+ Bitmap in Native C++?

    - by user146780
    I can find many examples on how to do this in managed C++ but none for unmanaged. I want to get all the pixel data as efficiently as possible, but I need more info about the Scan0 stuff so I can properly iterate through the pixel data and get each RGBA value from it. Right now I have this:

        Bitmap *b = new Bitmap(filename);
        if (b == NULL) {
            return 0;
        }
        UINT w, h;
        w = b->GetWidth();
        h = b->GetHeight();
        Rect *r = new Rect(0, 0, w, h);
        BitmapData *lockdat;
        b->LockBits(r, ImageLockModeRead, PixelFormatDontCare, lockdat);
        delete(r);
        if (w == 0 && h == 0) {
            return 0;
        }
        Color c;
        std::vector<GLubyte> pdata(w * h * 4, 0.0);
        for (unsigned int i = 0; i < h; i++) {
            for (unsigned int j = 0; j < w; j++) {
                b->GetPixel(j, i, &c);
                pdata[i * 4 * w + j * 4 + 0] = (GLubyte) c.GetR();
                pdata[i * 4 * w + j * 4 + 1] = (GLubyte) c.GetG();
                pdata[i * 4 * w + j * 4 + 2] = (GLubyte) c.GetB();
                pdata[i * 4 * w + j * 4 + 3] = (GLubyte) c.GetA();
            }
        }
        delete(b);
        return CreateTexture(pdata, w, h);

    How do I use lockdat to do the equivalent of GetPixel? Thanks
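    Since the question is specifically about Scan0: the usual pattern is to pass LockBits a BitmapData struct (not an uninitialized pointer) and then walk Scan0 row by row using Stride. A hedged sketch, assuming a 32bpp ARGB request so each pixel is 4 bytes in BGRA memory order:

        Gdiplus::BitmapData lockdat;
        Gdiplus::Rect rect(0, 0, w, h);
        if (b->LockBits(&rect, Gdiplus::ImageLockModeRead,
                        PixelFormat32bppARGB, &lockdat) == Gdiplus::Ok)
        {
            BYTE *scan0 = static_cast<BYTE*>(lockdat.Scan0);
            for (UINT y = 0; y < h; ++y) {
                // Stride is the byte width of one row, padding included,
                // and may be negative for bottom-up bitmaps.
                BYTE *row = scan0 + (INT_PTR)y * lockdat.Stride;
                for (UINT x = 0; x < w; ++x) {
                    BYTE *px = row + x * 4;   // px[0]=B, px[1]=G, px[2]=R, px[3]=A
                    pdata[y * 4 * w + x * 4 + 0] = px[2];
                    pdata[y * 4 * w + x * 4 + 1] = px[1];
                    pdata[y * 4 * w + x * 4 + 2] = px[0];
                    pdata[y * 4 * w + x * 4 + 3] = px[3];
                }
            }
            b->UnlockBits(&lockdat);
        }

    This avoids the per-pixel GetPixel call entirely, which is where the efficiency win comes from.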

    Read the article

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in GLSL, I know what to do, but I find getting there is the tricky bit. I am currently able to pass in video pixels which I manipulate and present. I have then been trying to add a still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and to pass on two pixel buffers. I am currently passing the video pixels like this:

        glGenTextures(1, &textures[0]);
        // target, texture
        glBindTexture(GL_TEXTURE_2D, textures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer);

    Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? ...and would my shader look something like this once I am in the shader?

        uniform sampler2D videoData;
        uniform sampler2D imageData;

    It seems no matter what combination I try, image and video always end up being just video data in both of these. Sorry for the many questions merged in here; I just want to clear up my many assumptions and move on. To clarify the question a bit: what do I need to do to add pixels from a still image to the process described? ("Easy to understand" sample code or any type of hint would be appreciated.)
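    As a point of reference, the standard recipe for feeding two samplers (a sketch only -- "program", "videoTexture" and "imageTexture" are assumed names) is to bind each texture to its own texture unit and then tell each sampler uniform which unit to read:

        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);               /* unit 0: video */
        glBindTexture(GL_TEXTURE_2D, videoTexture);
        glActiveTexture(GL_TEXTURE1);               /* unit 1: still image */
        glBindTexture(GL_TEXTURE_2D, imageTexture);

        glUniform1i(glGetUniformLocation(program, "videoData"), 0);
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);

    If the glActiveTexture calls are missing, both bind calls land on the same unit, which would produce exactly the "video in both samplers" symptom described above.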

    Read the article

  • How to calculate the y-pixels of someone's weight on a graph? (math + programming question)

    - by RexOnRoids
    I'm not that smart like some of you geniuses. I need some help from a math whiz. My app draws a graph of the user's weight over time. I need a surefire way to always get the right pixel position to draw the weight point at for a given weight. For example, say I want to plot the weight 80.0 (kg) on the graph when the range of weights is 80.0 to 40.0 kg. I want to be able to plug in the weight (given I also know the highest and lowest weights in the range) and get the pixel result 400 (y) (for the top of the graph). The graph is 300 pixels high (starts at 100 and ends at 400). The highest weight, 80 kg, would be plotted at 400, while the lowest weight, 40 kg, would be plotted at 100. And the intermediate weights should be plotted appropriately. I tried this but it does not work:

        -(float)weightToPixel:(float)theWeight
        {
            float graphMaxY = 400;  //The TOP of the graph
            float graphMinY = 100;  //The BOTTOM of the graph
            float yOffset = 100;    //Graph itself is offset 100 pixels in the Y direction
            float coordDiff = graphMaxY - graphMinY;                    //The size in pixels of the graph
            float weightDiff = self.highestWeight - self.lowestWeight;  //The weight gap
            float pixelIncrement = coordDiff / weightDiff;
            float weightY = (theWeight * pixelIncrement) - (coordDiff - yOffset);  //The return value
            return weightYpixel;
        }
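    For reference, the standard linear interpolation for this kind of mapping needs no offset juggling: scale the weight into a 0..1 fraction of the weight range, then scale that fraction into the pixel span. A sketch in plain C, with the graph bounds hard-coded as in the post:

        float weightToPixel(float theWeight, float lowestWeight, float highestWeight)
        {
            const float graphMinY = 100.0f;  /* bottom of the graph */
            const float graphMaxY = 400.0f;  /* top of the graph */
            float t = (theWeight - lowestWeight) / (highestWeight - lowestWeight);
            return graphMinY + t * (graphMaxY - graphMinY);
        }

    Plugging in the example range: 40 kg gives t = 0 and lands on 100, 80 kg gives t = 1 and lands on 400, and 60 kg lands halfway at 250, which matches the behavior the post asks for.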

    Read the article

  • OpenGL GL_LINES endpoints not joining

    - by old-school rules
    I'm having problems with the GL_LINES block... the lines in the sample below do not connect at the ends (although sometimes it randomly decides to connect a corner or two). Instead, the endpoints come within 1 pixel of one another (leaving a corner that is not fully squared, if that makes sense). It is a simple block to draw a solid 1-pixel rectangle.

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->top, 0);
        glEnd();

    The sample below seems to correct the problem, giving me sharp, square corners; but I can't accept it because I don't know why it's acting this way...

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left, pRect->top, 0);
        glVertex3i(pRect->right + 1, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->bottom + 1, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left - 1, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->top - 1, 0);
        glEnd();

    Any OpenGL programmers out there that can help, I would appreciate it :)
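    The 1-pixel gaps are consistent with OpenGL's diamond-exit rasterization rule, under which a line segment's final fragment is not drawn -- which is also why nudging each segment's second endpoint by one pixel appears to fix it. A less fragile sketch (one common approach, not the only one) is to let GL close the loop itself:

        glBegin(GL_LINE_LOOP);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0);
        glEnd();

    With GL_LINE_LOOP each segment's endpoint is the next segment's start, so the corners meet by construction.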

    Read the article

  • Generating a twitter OAuth access key - the semi-manual way

    - by Piet
    [UPDATE] Apparently someone at Twitter was listening, or I'm going senile/blind. Let's call it a combination of both. Instead of following all the steps below, you could just log in with the Twitter account you want to use on http://dev.twitter.com, register your application and then click 'Edit Details' on the application overview page at http://dev.twitter.com/apps. Next click the 'Application detail' button on the right, followed by the 'My Access Token' button in order to get your Access Token and Access Token Secret. This makes the old post below rather obsolete. Clearly a case of me thinking everything is a nail and Ruby is a hammer (don't they usually say this about Java coders?)

    [ORIGINAL POST] OAuth is great! OAuth allows your application to use your user's data without the need to ask for their password. So Twitter made the API much safer for their and your users. Hurray! Free pizza for everyone! Unless of course you're using the Twitter API for your own needs, like running your own bot, and don't need access to other users' data. In such cases a simple username/password combination is more than enough. I can understand however that the Twitter guys don't really care that much about these exceptions(?). Most such uses of the API are probably rather spammy in nature.

    !!! If you have a Twitter app that uses the API to access external users' data: look for another solution. This solution is ONLY meant when you ONLY need access to your own account(s) through the API.

    Other Solutions: Mr. Dallas Devries posted a solution here which involves requesting and scraping a one-time PIN. But: I like to minimize the number of calls I make to Twitter's API or pages to lessen my chances of meeting the fail whale. Also, as soon as the PIN isn't included in a div called 'oauth_pin' anymore, this will fail. However, Mr. Devries' post was a starting point for my solution, so I'm much obliged to him for posting his findings.

    Authenticating with the Twitter API: old vs new. Accessing the Twitter API the old way:

        require 'twitter'
        httpauth = Twitter::HTTPAuth.new('my_account', 'my_secret_password')
        client = Twitter::Base.new(httpauth)
        client.update('Hurray!')

    The OAuth way:

        require 'twitter'
        oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
        oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
        client = Twitter::Base.new(oauth)
        client.update('Hurray!')

    In the above case, ve4whatafuzzksaMQKjoI is the 'consumer key' (sometimes also referred to as 'consumer token') and KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY is the 'consumer secret'. You'll get these from Twitter when you register your app. 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the 'access token' and fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the 'access secret'. This combination gives the registered application access to your account. I'll show you how to obtain these by following the steps below. (Basically you'll need a bunch of keys and you'll have to jump a bit through hoops to obtain them for your server/bot.)

    How to get these keys:

    1. Surf to the Twitter apps registration page: go to http://dev.twitter.com/apps to register your app. Login with your Twitter account.

    2. Register your application. Enter something for Application name, Description, website... as I said: they make you jump through hoops. If you plan on using the API to post tweets, your application name and website will be used in the '5 minutes ago via...' line below your tweet. You could use this to point to a page with info about your bot, or maybe it's useful for SEO purposes. For application type I chose 'browser' and entered http://www.hadermann.be/callback as a 'Callback URL'. This URL returns a 404 error, which is ideal, because after giving our account access to our 'application' (step 6), it will redirect to this URL with an 'oauth_token' and 'oauth_verifier' in the URL. We need to get these from the URL. It doesn't really matter what you enter here though; you could leave it blank, because you need to explicitly specify it when generating a request token. You probably want read & write access, so set this as 'Default Access type'.

    3. Get your consumer key and consumer secret. On the next page, copy/paste your 'consumer key' and 'consumer secret'. You'll need these later on. You also need these as part of the authentication in your script later on: oauth = Twitter::OAuth.new([consumer key], [consumer secret])

    4. Obtain your request token. Run the following in IRB to obtain your 'request token'. Replace my fake consumer key and consumer secret with the ones you obtained in step 3. And use something else instead of http://www.hadermann.be/callback: although this will only give a 404, you shouldn't trust me.

        irb(main):001:0> require 'oauth'
        irb(main):002:0> c = OAuth::Consumer.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY', {:site => 'http://twitter.com'})
        irb(main):003:0> request_token = c.get_request_token(:oauth_callback => 'http://www.hadermann.be/callback')
        irb(main):004:0> request_token.token
        => "UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1"

    This (UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1) is the request token: copy/paste it, you will need it next.

    5. Authorize your application. Surf to https://api.twitter.com/oauth/authorize?oauth_token=[the above token], for example: https://api.twitter.com/oauth/authorize?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1 This will bring you to the 'An application would like to connect to your account' screen on Twitter where you can grant access to the app you just registered. If you aren't logged in, you need to login first. Click 'Allow'. Unless you don't trust yourself.

    6. Get your oauth_verifier from the redirected URL. Your browser will be redirected to your callback URL, with an oauth_token and oauth_verifier parameter appended. You'll need the oauth_verifier. In my case the browser redirected to: http://www.hadermann.be/callback?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1&oauth_verifier=waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag which returned a 404, giving me the chance to copy/paste my oauth_verifier: waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag

    7. Request an access token. Back in IRB, use the oauth_verifier to request an access token, as follows:

        irb(main):005:0> at = request_token.get_access_token(:oauth_verifier => 'waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag')
        irb(main):006:0> at.params[:oauth_token]
        => "123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis"
        irb(main):007:0> at.params[:oauth_token_secret]
        => "fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh"

    We're there! 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the access token. fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the access secret.

    Try it! Try the following to post an update:

        require 'twitter'
        oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
        oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
        client = Twitter::Base.new(oauth)
        client.update('Cowabunga!')

    Now you can go to your Twitter page and delete the tweet if you want to.

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyway? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce: Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics: This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?); I'm after something more interesting that you can only do after collecting large volumes of specific data.

    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples are emerging, however, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System: Well, that is going to be interesting. I have not seen much going on in that space, and if I had to offer some criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really about big data; it used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem: most of the inputs live in a data warehouse, so why move them into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts on some of these thoughts...

    Read the article

  • Silverlight for Windows Embedded Tutorial (step 5 and a bit of Windows Phone 7)

    - by Valter Minute
    If you haven't spent the last week in the middle of the Sahara desert or traveling on a sled in the North Pole area, you should have heard something about the launch of Windows Phone 7 Series (or Windows Phone Series 7, or Windows Series Phone 7, or something like that). Even if you are in the middle of the desert or somewhere around the North Pole, you may have been reached by the news, since it seems that WP7S (using the full name will kill my available bandwidth!) is generating a lot of buzz in the development and IT communities. One of the most important aspects of this new platform is that it will be programmed using a new set of tools and frameworks, completely different from the ones used on older releases of Windows Mobile (or SmartPhone, or PocketPC, or whatever...). WP7S applications can be developed using Silverlight or XNA. If you want to learn something more about WP7S development you can download the preview of Charles Petzold's book about it: http://www.charlespetzold.com/phone/index.html

    Charles Petzold is also the author of "Programming Windows", the first book I ever read about programming on Windows (it was Windows 3.0 at that time!). The fact that even I was able to learn how to develop Windows applications is a proof of the quality of Petzold's work. This book is up to his standards and the 150-page preview is already rich in technical content without being boring or complicated to understand. I may be able to become a Windows Phone developer thanks to Mr. Petzold.

    Mr. Petzold uses some nice samples to introduce the basic concepts of Silverlight development on WP7S. On this new platform you'll use managed code to develop your application, so those samples can't be ported to Windows CE R3 as they are, but I would like to take one of the first samples (called "SilverlightTapHello1") and adapt it to Silverlight for Windows Embedded to show that even plain old native code can be used to develop "cool" user interfaces! The sample shows the standard WP7S title header and a textbox with a hello world message inside it. When the user touches the textbox, it will change its color. When the user touches the background (Grid) behind it, its default color (plain old white) will be restored. Let's see how we can implement the same features on our embedded device!

    I took the XAML code of the sample (you can download the book samples here: http://download.microsoft.com/download/1/D/B/1DB49641-3956-41F1-BAFA-A021673C709E/CodeSamples_DRAFTPreview_ProgrammingWindowsPhone7Series.zip) and changed it a little bit to remove references to WP7S or the managed runtime. If you compare the resulting files you will see that I was able to keep all the resources inside the App.xaml files and the structure of MainPage.XAML almost intact. This is the Silverlight for Windows Embedded version of MainPage.XAML:

        <UserControl x:Class="SilverlightTapHello1.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:phoneNavigation="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Navigation"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="800"
            FontFamily="{StaticResource PhoneFontFamilyNormal}"
            FontSize="{StaticResource PhoneFontSizeNormal}"
            Foreground="{StaticResource PhoneForegroundBrush}"
            Width="640" Height="480">

          <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>

            <!--TitleGrid is the name of the application and page title-->
            <Grid x:Name="TitleGrid" Grid.Row="0">
              <TextBlock Text="SILVERLIGHT TAP HELLO #1" x:Name="textBlockPageTitle" Style="{StaticResource PhoneTextPageTitle1Style}"/>
              <TextBlock Text="main page" x:Name="textBlockListTitle" Style="{StaticResource PhoneTextPageTitle2Style}"/>
            </Grid>

            <!--ContentGrid is empty. Place new content here-->
            <Grid x:Name="ContentGrid" Grid.Row="1" MouseLeftButtonDown="ContentGrid_MouseButtonDown" Background="{StaticResource PhoneBackgroundBrush}">
              <TextBlock x:Name="TextBlock" Text="Hello, Silverlight for Windows Embedded!" HorizontalAlignment="Center" VerticalAlignment="Center" />
            </Grid>
          </Grid>
        </UserControl>

    If you compare it to the WP7S sample (not reported here to avoid any copyright issue) you'll notice that I had to replace the original phoneNavigation:PhoneApplicationPage with UserControl as the root node. This makes sense because there is no support for phone applications on CE 6. I also had to specify the width and height of my main page (on the WP7S device this will be adjusted by the OS) and I had to replace the multi-touch event handler with the MouseLeftButtonDown event (no multitouch support for Windows CE R3, still). I also changed the hello message, of course. I used XAML2CPP to generate the boring part of our application and then added the initialization code to WinMain:

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
        {
            if (!XamlRuntimeInitialize())
                return -1;

            HRESULT retcode;
            IXRApplicationPtr app;
            if (FAILED(retcode=GetXRApplicationInstance(&app)))
                return -1;

            XRXamlSource dictsrc;
            dictsrc.SetResource(hInstance, TEXT("XAML"), IDR_XAML_App);
            if (FAILED(retcode=app->LoadResourceDictionary(&dictsrc, NULL)))
                return -1;

            MainPage page;
            if (FAILED(page.Init(hInstance, app)))
                return -1;

            UINT exitcode;
            if (FAILED(page.GetVisualHost()->StartDialog(&exitcode)))
                return -1;

            return exitcode;
        }

    You may have noticed that there is something different from the previous samples: I added the code to load a resource dictionary. Resources are an important feature of XAML that allows you to define some values that can be replaced inside any XAML file loaded by the runtime. You can use resources to define custom styles for your fonts, backgrounds, controls etc., and to support internationalization, by providing different strings for different languages. The rest of our WinMain isn't that different. It creates an instance of our MainPage object and displays it. The MainPage class implements an event handler for the MouseLeftButtonDown event of the ContentGrid:

        class MainPage : public TMainPage<MainPage>
        {
        public:
            HRESULT ContentGrid_MouseButtonDown(IXRDependencyObject* source, XRMouseButtonEventArgs* args)
            {
                HRESULT retcode;
                IXRSolidColorBrushPtr brush;
                IXRApplicationPtr app;

                if (FAILED(retcode=GetXRApplicationInstance(&app)))
                    return retcode;
                if (FAILED(retcode=app->CreateObject(IID_IXRSolidColorBrush, &brush)))
                    return retcode;

                COLORREF color = RGBA(0xff, 0xff, 0xff, 0xff);
                if (args->pOriginalSource == TextBlock)
                    color = RGBA(rand() & 0xFF, rand() & 0xFF, rand() & 0xFF, 0xFF);

                if (FAILED(retcode=brush->SetColor(color)))
                    return retcode;
                if (FAILED(retcode=TextBlock->SetForeground(brush)))
                    return retcode;
                return S_OK;
            }
        };

    As you can see, this event is generated when a user clicks inside the grid or inside one of the objects it contains. Since our TextBlock is inside the grid, we don't need to provide an event handler for its MouseLeftButtonDown event. We can just use the pOriginalSource member of the event arguments to check if the event was generated inside the textblock. If the event was generated inside the grid, we create a white brush; if it's inside the textblock, we create a randomly colored brush. Notice that we need to use the RGBA macro to create colors, specifying also a transparency value for them. If we use the RGB macro, the resulting color will have its alpha channel set to zero and will be transparent. Using the SetForeground method we can change the color of our control. You can compare this to the managed code that you can find at pages 40-41 of Petzold's preview book, and you'll see that the native version isn't much more complex than the managed one. As usual, you can download the full code of the sample here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/SilverlightTapHello1.zip

    And remember to pre-order Charles Petzold's "Programming Windows Phone 7 Series"; I bet it will be a best-seller! Technorati Tags: Silverlight for Windows Embedded, Windows CE

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in full-screen mode, any applications on my second monitor (such as YouTube videos or local video; it's not app specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped. If I cap it at 60 FPS using the in-game option, it has no effect on the performance of the background app.

        Summary
        Operating System: Windows 8 Pro 64-bit
        Computer type: Desktop
        CPU: Intel Core i7 3820 @ 3.60GHz, 42 °C, Sandy Bridge-E, 32nm Technology
        RAM: 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20)
        Motherboard: Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0), 37 °C
        Graphics: DELL U2713HM (2560x1440@59Hz), DELL U2713HM (2560x1440@59Hz), 1280MB NVIDIA GeForce GTX 570 (Gigabyte), 58 °C
        Hard Drives: 212GB Volume0 (RAID); 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 36 °C; 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 34 °C
        Optical Drives: No optical disk drives detected
        Audio: ASUS Xonar Essence STX Audio Device

        Monitor 1: DELL U2713HM on NVIDIA GeForce GTX 570
        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled, Output devices support; Multiple displays: Extended, Secondary, Enabled
        Monitor Width x Height: 2560x1440; Monitor BPP: 32 bits per pixel; Monitor Frequency: 59 Hz
        Device: \\.\DISPLAY4\Monitor0

        Monitor 2: DELL U2713HM on NVIDIA GeForce GTX 570
        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled, Output devices support; Multiple displays: Extended, Primary, Enabled
        Monitor Width x Height: 2560x1440; Monitor BPP: 32 bits per pixel; Monitor Frequency: 59 Hz
        Device: \\.\DISPLAY5\Monitor0

        NVIDIA GeForce GTX 570
        Manufacturer: NVIDIA; Model: GeForce GTX 570; GPU: GF110; Device ID: 10DE-1086; Revision: A2
        Subvendor: Gigabyte (1458); Series: GeForce GTX 500
        Current Performance Level: Level 3
        Current GPU Clock: 845 MHz; Current Memory Clock: 1900 MHz; Current Shader Clock: 1690 MHz
        Voltage: 0.988 V; Technology: 40 nm; Die Size: 520 mm²; Release Date: Dec 07, 2010
        DirectX Support: 11.0; OpenGL Support: 5.0; Bus Interface: PCI Express x16; Temperature: 57 °C
        Driver version: 9.18.13.2018; BIOS Version: 70.10.55.00.01
        ROPs: 40; Shaders: 512 unified; Memory Type: GDDR5; Memory: 1280 MB; Bus Width: 64x5 (320 bit)
        Filtering Modes: 16x Anisotropic; Noise Level: Moderate; Max Power Draw: 219 Watts
        Count of performance levels: 3
        Level 1 - "Default": GPU Clock 50 MHz, Memory Clock 135 MHz, Shader Clock 101 MHz
        Level 2 - "2D Desktop": GPU Clock 405 MHz, Memory Clock 324 MHz, Shader Clock 810 MHz
        Level 3 - "3D Applications": GPU Clock 845 MHz, Memory Clock 1900 MHz, Shader Clock 1690 MHz

    Things I've tried:
    1) Updating the graphics driver
    2) Setting the Windows power mode to High Performance
    3) Resetting the Nvidia global performance settings to default

    Read the article

  • Kill overscan for ATI drivers?

    - by joeforker
    I have a dual-boot Windows 7 64-bit/Linux 64-bit machine that uses ATI's Catalyst drivers. Sometimes I attach it to a 1080p LCD TV over HDMI. ATI is daft enough to provide a border to account for overscan, but I'm using an LCD TV: there is no overscan, and the border makes it look like crap because the pixel mapping is no longer 1:1. How do I disable this driver "feature" in Windows? In Linux?

    Read the article

  • Insert PDF image in MS Word

    - by serhio
    Hello. I have a .doc which I will convert to PDF. In this .doc I have an image. When I convert the doc to PDF and then zoom in, the image becomes ugly and pixelized. I found a tool that converted my bitmap .png image to a vector .pdf image. Now how can I import the PDF image into MS Word (which I will finally convert to PDF once again)?

    Read the article

  • Tiff not displaying correctly on Mac

    - by user348935
    Hi, I have a collection of .tif files but when I open them on my Mac (10.5) they show up as solid black and I don't know why. Thanks. Upon further inspection, at really high brightness there are some out-of-focus objects viewable. It looks as if I am getting the first couple of bits of each pixel but not the entire range of values.

    Read the article

  • Automator for Vista

    - by allindal
    Is there a program for Vista similar to the Mac application Automator? Specifically, I'm looking for a Vista app that can control timed clicks. For example, in Automator I can specify which pixel to click and how often, or a series of clicks in different places. I'm not looking for an "intelligent" clicker, just a purely GUI-programmed clicker. Also, I need it to work with and record the keyboard. From reading other Super User posts I can see that the command prompt doesn't have an easy way to do this.

    Read the article

  • Tiff not displaying correctly on Mac OS X

    - by user348935
    I have a collection of .tif files but when I open them on Mac OS X 10.5 they show up as solid black and I don't know why. Thanks. Upon further inspection, at really high brightness there are some out-of-focus objects viewable. It looks as if I am getting the first couple of bits of each pixel but not the entire range of values.

    Read the article

  • Gimp: Color to alpha

    - by MTilsted
    I have an image where I want all the pixels with a specific color converted to transparent pixels. The operation should not change the color/alpha value of any pixel that doesn't match the color exactly. How do I do that? At first I thought I could use Colors -> "Color to Alpha", but that doesn't work because it changes the color of all pixels (it adds an alpha value to all pixels). Using GIMP 2.6.11 on Linux.
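    The operation being asked for is simple enough to state as a per-pixel rule; here is a sketch over a generic RGBA8 buffer (illustration only, not GIMP's internal API):

        #include <stddef.h>

        typedef struct { unsigned char r, g, b, a; } RGBA8;

        /* Exact color matches become fully transparent; every other pixel,
           including near matches, keeps its color and alpha untouched. */
        void colorToTransparent(RGBA8 *px, size_t n, RGBA8 key)
        {
            for (size_t i = 0; i < n; ++i)
                if (px[i].r == key.r && px[i].g == key.g && px[i].b == key.b)
                    px[i].a = 0;
        }

    In GIMP itself, the closest built-in match for this rule is Select -> By Color with a threshold of 0, then clearing the selection on a layer that has an alpha channel, rather than Colors -> Color to Alpha.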

    Read the article

  • Why is changing displays slow?

    - by Josh Bronson
    I've had many laptops over the course of many years, and while many things have sped up, one thing remains as slow today as it was years ago: (dis)connecting an external display. What's taking it so long to detect the new display and update the pixel buffers? I use Macs primarily, but I think this is equally slow on other platforms.

    Read the article

  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitor goes black

    - by David
    I have successfully installed the proprietary drivers for my Nvidia (GeForce 7300 GT) graphics card on Debian/Lenny. I know it's not the best way to choose for driver installation (see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ), but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitor goes black and the power light starts blinking after I launch the X server. Have a short look at the logs (output truncated from /var/log/Xorg.0.log):

        (II) Setting vga for screen 0.
        (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
        (==) NVIDIA(0): RGB weight 888
        (==) NVIDIA(0): Default visual is TrueColor
        (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration
        (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
        (II) Jul 28 17:10:11 NVIDIA(0): enabled.
        (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes
        (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00
        (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X
        (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU
        (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0:
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0)
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock
        (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS
        (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0
        (==) Jul 28 17:10:11 NVIDIA(0):
        (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select"
        (==) Jul 28 17:10:11 NVIDIA(0): will be used as the requested mode.
        (==) Jul 28 17:10:11 NVIDIA(0):
        (II) Jul 28 17:10:11 NVIDIA(0): Validated modes:
        (II) Jul 28 17:10:11 NVIDIA(0): "nvidia-auto-select"
        (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024
        (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config
        (--) Jul 28 17:10:11 NVIDIA(0): option
        (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals.
        (--) Depth 24 pixmap format is 32 bpp

    Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig (truncated at the end in the original post):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "Module"
            Load "dbe"
            Load "extmod"
            Load "type1"
            Load "freetype"
            Load "glx"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Unknown"
            Hor

    Read the article

  • Upgrading PS1 Light Gun [on hold]

    - by Nathan Taylor
    Is there any possible way to upgrade the retro G-Con light gun for the PS1 to allow it to interact with HDTVs? I am aware that they were designed purely for tube TVs, but I would be happy to know of any hardware that would maybe convert the light to hit the pixels on an LCD TV. If not, is there any other light gun that would work with PS1 games but has newer light-gun hardware that can interact with a higher-resolution LCD TV?

    Read the article

  • Changing the size of the Windows 7 taskbar

    - by dertoni
    Is there a way to change the size of the Windows 7 taskbar? Internal or with the help of outside programs, both welcome. Something like the Mac OS X Dock zooming effect would be OK/nice, too. Edit: I'm essentially looking for a way to shrink it, because my laptop does not have a big screen, so every pixel is valuable.

    Read the article

  • Virtual PC on Windows 7 doesn't have adjustment for video card size?

    - by Jian Lin
    The current VirtualBox has a place where the video memory size can be set by the user. It seems that Windows 7's Virtual PC doesn't have one? Will it auto-adjust? But what if the screen size is 800 x 600 and the user resizes it to 1600 x 1200 -- then the original video memory may not be enough, and will that cause any issues? I do sometimes see a blinking region of random pixels showing on the VPC's screen... maybe it is caused by insufficient video RAM?

    Read the article
