Search Results

Search found 189 results on 8 pages for 'zed 0xff'.

Page 2 of 8

  • How can I create a rectangular SDL surface filled with a particular color?

    - by user3689
    I'm learning SDL. I'd like to create a rectangular surface filled with a flat color (not an image). Below is my code -- it compiles fine, but doesn't work. I'm passing the function these parameters:

        SDL_Surface* m_screen = SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BPP, SDL_SWSURFACE);
        SDL_FillRect(m_screen, &m_screen->clip_rect, SDL_MapRGB(m_screen->format, 0xFF, 0xFF, 0xFF));
        ...
        Button button(m_screen, 0, 0, 50, 50, 255, 0, 0);
        ...

        Button::Button(SDL_Surface* screen, int x, int y, int w, int h, int R, int G, int B)
        {
            SDL_Rect box;
            SDL_Surface* ButtonSurface = NULL;
            Uint32 rmask, gmask, bmask, amask;

        #if SDL_BYTEORDER == SDL_BIG_ENDIAN
            rmask = 0xff000000; gmask = 0x00ff0000; bmask = 0x0000ff00; amask = 0x000000ff;
        #else
            rmask = 0x000000ff; gmask = 0x0000ff00; bmask = 0x00ff0000; amask = 0xff000000;
        #endif

            box.x = x;
            box.y = y;
            box.w = w;
            box.h = h;

            ButtonSurface = SDL_CreateRGBSurface(SDL_SWSURFACE, box.w, box.h, 32, rmask, gmask, bmask, amask);
            if (ButtonSurface == NULL)
            {
                LOG_MSG("Button::Button Button failed");
            }
            SDL_FillRect(screen, &box, SDL_MapRGB(ButtonSurface->format, R, G, B));
            //ut.ApplySurface(0, 0, ButtonSurface, screen);
            SDL_BlitSurface(ButtonSurface, NULL, screen, &box);
        }

    What am I doing wrong here?

    Read the article

  • How to detect the character encoding of a text file?

    - by Cédric Boivin
    I'm trying to detect which character encoding is used in my file. I'm using this code to get the standard encodings:

        public static Encoding GetFileEncoding(string srcFile)
        {
            // *** Use Default of Encoding.Default (Ansi CodePage)
            Encoding enc = Encoding.Default;

            // *** Detect byte order mark if any - otherwise assume default
            byte[] buffer = new byte[5];
            FileStream file = new FileStream(srcFile, FileMode.Open);
            file.Read(buffer, 0, 5);
            file.Close();

            if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
                enc = Encoding.UTF8;
            else if (buffer[0] == 0xfe && buffer[1] == 0xff)
                enc = Encoding.Unicode;
            else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
                enc = Encoding.UTF32;
            else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
                enc = Encoding.UTF7;
            else if (buffer[0] == 0xFE && buffer[1] == 0xFF) // 1201 unicodeFFFE Unicode (Big-Endian)
                enc = Encoding.GetEncoding(1201);
            else if (buffer[0] == 0xFF && buffer[1] == 0xFE) // 1200 utf-16 Unicode
                enc = Encoding.GetEncoding(1200);

            return enc;
        }

    My first five bytes are 60, 118, 56, 46 and 49. Is there a chart that shows which encoding matches those first five bytes?
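    A note on the excerpt above: the 0xFE 0xFF test appears twice, so the Encoding.GetEncoding(1201) branch can never be reached, and the branch that does match 0xFE 0xFF returns Encoding.Unicode (UTF-16 little-endian) even though 0xFE 0xFF is the big-endian BOM (0xFF 0xFE is the little-endian one). A corrected BOM check might look like the following sketch (class and method names are illustrative). A file with no BOM at all, like the 60, 118, 56, 46, 49 sample (plain ASCII "<v8.1"), cannot be identified this way; that takes heuristics or a charset-detection library:

        using System.IO;
        using System.Text;

        static class BomSniffer
        {
            // Returns the encoding implied by a leading byte order mark,
            // or null when no BOM is present (ANSI/ASCII, or UTF-8 without BOM).
            public static Encoding DetectBom(string path)
            {
                byte[] b = new byte[4];
                using (FileStream fs = File.OpenRead(path))
                    fs.Read(b, 0, 4);

                if (b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) return Encoding.UTF8;
                if (b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00)
                    return Encoding.UTF32;                // UTF-32 little-endian
                if (b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF)
                    return new UTF32Encoding(true, true); // UTF-32 big-endian
                if (b[0] == 0xFF && b[1] == 0xFE) return Encoding.Unicode;          // UTF-16 LE
                if (b[0] == 0xFE && b[1] == 0xFF) return Encoding.BigEndianUnicode; // UTF-16 BE
                return null;
            }
        }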

    Read the article

  • Simplifying loop in Objective-C

    - by Joe Habadas
    I have this enormous loop in my code (not by choice), because I can't seem to make it work any other way. If there's some way to make this simple, as opposed to me repeating it 20+ times, that would be great. Thanks.

        for (NSUInteger i = 0; i < 20; i++) {
            if (a[0] == 0xFF || b[i] == a[0]) {
                c[0] = b[i];
                if (d[0] == 0xFF) {
                    d[0] = c[0];
                }
                // ... the above repeats 18 more times with a[1]/b[i+1], a[2]/b[i+2], etc. ...
                if (a[1] == 0xFF || b[i + 1] == a[1]) {
                    c[1] = b[i + 1];
                    if (d[1] == 0xFF) {
                        d[1] = c[1];
                    }
                    // ... when it reaches the last one it calls a method ...
                    [self doSomething];
                    continue;
                    i += 19;
                    // ... then the closing braces repeat 19 more times ...
                }
            }
        }

    I've tried almost every possible combination of things I know of to make this smaller and more efficient. Take a look at my flow chart -- pretty, huh? I'm not a madman, honest.

    Read the article

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks whether the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, it drops both bytes, and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see any.

        parseECS = f [] False
          where
            f acc ff = do
                b <- getWord8
                if ff
                    then if b == 0x00
                        then f (0xff:acc) False
                        else return $ L.pack (reverse acc)
                    else if b == 0xff
                        then f acc True
                        else f (b:acc) False

    Read the article

  • How can I convert a string into a byte[] in C#?

    - by Miroo
    I have a string like "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF" and I want to convert it into byte[] key = new byte[] { 0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF };. I thought about splitting the string on ',', then looping over the parts and setting each value into another byte[] at index i:

        string Key = "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF";
        string[] arr = Key.Split(',');
        byte[] keybyte = new byte[8];
        for (int i = 0; i < arr.Length; i++)
        {
            keybyte.SetValue(Int32.Parse(arr[i].ToString()), i);
        }

    But it doesn't seem to work: I get an error converting the string into an unsigned Int32 right at the beginning. Any help would be appreciated.
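    For what it's worth, Int32.Parse fails here because each split token still carries a leading space and the "0x" prefix, which Int32.Parse does not accept. One possible fix (a sketch, not the only approach) is to trim each token and parse it as base-16; Convert.ToByte tolerates a leading "0x" when the base is 16:

        using System;
        using System.Linq;

        class HexKeyParser
        {
            static void Main()
            {
                string key = "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF";

                // Convert.ToByte(token, 16) parses hex and accepts the "0x" prefix.
                byte[] bytes = key.Split(',')
                                  .Select(tok => Convert.ToByte(tok.Trim(), 16))
                                  .ToArray();

                Console.WriteLine(BitConverter.ToString(bytes)); // 5D-50-68-BE-C9-B3-84-FF
            }
        }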

    Read the article

  • Search in big text log files

    - by 0xFF
    Let's say you have a game server that creates text log files of gamers' actions, and from time to time you need to look something up in those log files (like investigating a scam or a lost item). Just for example, you have 100 files, each between 20MB and 50MB in size: how would you search them quickly? What I have already tried is to create several threads, each of which maps its own file into memory (memory should not be a problem as long as it doesn't exceed 500MB of RAM) and performs the search there. The result was around 1 second per file:

        File: a26.log - read in: 0.891, lines: 625282, matches: 78848

    Is there a better way to do this? It seems kind of slow to me. Thanks. (Java was used in this case.)

    Read the article

  • Using an int as the numerical representation of a string in C#

    - by bluewall21
    I'm trying to use an integer as the numerical representation of a string, for example, storing "ABCD" as 0x41424344. However, when it comes to output, I've got to convert the integer back into 4 ASCII characters. Right now, I'm using bit shifts and masking, as follows:

        int value = 0x41424344;
        string s = new string(new char[] {
            (char)(value >> 24),
            (char)(value >> 16 & 0xFF),
            (char)(value >> 8 & 0xFF),
            (char)(value & 0xFF)
        });

    Is there a cleaner way to do this? I've tried various casts, but the compiler, as expected, complained about it.
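    One cleaner-looking alternative, offered as a sketch rather than the definitive answer: let BitConverter split the int into bytes and Encoding.ASCII build the string. It assumes the big-endian "ABCD" ordering from the question, hence the Reverse on little-endian machines:

        using System;
        using System.Text;

        class IntToAscii
        {
            static void Main()
            {
                int value = 0x41424344;

                // BitConverter emits bytes in machine order (little-endian on x86),
                // so reverse them to recover the big-endian "ABCD" ordering.
                byte[] bytes = BitConverter.GetBytes(value);
                if (BitConverter.IsLittleEndian)
                    Array.Reverse(bytes);

                Console.WriteLine(Encoding.ASCII.GetString(bytes)); // ABCD
            }
        }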

    Read the article

  • How can I give a color to imagecolorallocate?

    - by Roman
    I have a PHP variable that contains color information, for example $text_color = "ff90f3". Now I want to pass this color to imagecolorallocate, which is called like this:

        imagecolorallocate($im, 0xFF, 0xFF, 0xFF);

    So I am trying the following:

        $r_bg = bin2hex("0x".substr($text_color,0,2));
        $g_bg = bin2hex("0x".substr($text_color,2,2));
        $b_bg = bin2hex("0x".substr($text_color,4,2));
        $bg_col = imagecolorallocate($image, $r_bg, $g_bg, $b_bg);

    It does not work. Why? I also tried it without bin2hex, and that did not work either. Can anybody help me with this?

    Read the article

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type:

        memset(byte_array, 0xFF, sizeof(byte_array));

    and get a byte array filled with 'FF' characters. So, I have been looking for a replacement for this:

        for (int i = 0; i < byteArray.Length; i++)
        {
            byteArray[i] = 0xFF;
        }

    Lately, I have been using some of the new C# features and have been using this approach instead:

        Array.ForEach<byte>(byteArray, b => b = 0xFF);

    Granted, the second approach seems cleaner and is easier on the eye, but how does the performance compare to the first approach? Am I introducing needless overhead by using LINQ and generics? Thanks, Dave
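    One caveat worth flagging on the excerpt above: Array.ForEach passes each element to the delegate by value, so b = 0xFF assigns to a local copy and the array is left unchanged; the two approaches are not actually equivalent, regardless of performance. A plain loop still seems to be the simplest correct way to fill the array; a small sketch:

        using System;

        class FillDemo
        {
            static void Main()
            {
                byte[] byteArray = new byte[8];

                // Array.ForEach(byteArray, b => b = 0xFF) would NOT fill the array:
                // 'b' is a by-value copy, so the assignment never reaches the element.
                for (int i = 0; i < byteArray.Length; i++)
                    byteArray[i] = 0xFF;

                // Newer frameworks (.NET Core 2.0+) also offer:
                // Array.Fill(byteArray, (byte)0xFF);

                Console.WriteLine(BitConverter.ToString(byteArray)); // FF-FF-...-FF
            }
        }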

    Read the article

  • VMWare Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM, so that I can do testing on it rather than on the production server. I downloaded VMware Converter. I went to the "convert machine" wizard, chose Powered-on machine, and entered my remote machine's details. Clicking Next shows that it correctly connects to my remote Linux box. Now, on the next screen, for Destination type, it enters "VMware Infrastructure virtual machine", and I can't change this. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent? And why can't I just convert the VM and save it on the computer that I am running the converter program on? If I do have to use a VMware Infrastructure destination, I am confused about what ESX is and why and how I would use it. Any help with this process would be more than appreciated!

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        $ squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB

        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all

        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?

    Read the article

  • Postfix: errors using Google Apps for SMTP

    - by Zed Said
    I am using Postfix and need to send mail through the Google Apps SMTP server. I am getting errors after I thought I had set everything up correctly:

        May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)

    main.cf:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = zedsaid.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        #relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        delay_warning_time = 4h
        smtpd_recipient_limit = 16
        # how many errors before backing off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12

        ## Gmail Relay
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_mechanism_filter = login
        smtp_tls_eccert_file =
        smtp_tls_eckey_file =
        smtp_use_tls = yes
        smtp_enforce_tls = no
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_received_header = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        debug_peer_list = smtp.gmail.com
        debug_peer_level = 3

    What am I doing wrong?

    Read the article

  • Postfix: /usr/sbin/sendmail: No such file or directory - why?

    - by Zed Said
    I am trying to get Postfix working, and when I test it with the mail command (mail user), I enter the subject, message, etc., and get the following error:

        mail: /usr/sbin/sendmail: No such file or directory
        Can't send mail: sendmail process failed

    Why is it talking about sendmail? I deleted that a long time ago and am using Postfix. Is it still hanging around somewhere, and does the mail command think it should be using sendmail?

    Read the article

  • Offline editable and mergeable wiki

    - by Zed
    I'm looking for wiki software that allows me to:

    - store wiki pages on a server (basic wiki functionality)
    - store a local copy on my computer
    - edit my local content, and then merge my changes back to the server (version control for conflicting updates)

    Is there a wiki, or a set of software, that allows me to do this? The primary platform is Linux, but I wouldn't mind if it also ran on Windows...

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside it they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack, and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

            s_scissorStack.push_back(rect);
            drawStencil(rect, states, s_ScissorStack.size());
        }

        void popScissor()
        {
            // undo what was done in the previous push,
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);

            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, states, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip)
            pushScissor(m_rect);

        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);

        if (m_clip)
            popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop; when it comes time to render the button, since those pixels don't have the right stencil value, it doesn't draw over them. But I can't figure out what's wrong with my code that would cause that to happen.

    Read the article

  • Issues when upgrading OpenSSL?

    - by Zed Said
    We are running an old version of OpenSSL (0.9.7e) and would like to upgrade to the most current release. Our server runs Debian, and I am wondering whether there would be any issues with just upgrading via apt-get. Would we have to worry about anything breaking, or about having to update any configuration?

    Read the article

  • Convert MP4 video to a format the Xbox 360 can play

    - by Björn Lindqvist
    Here is a video file my Xbox 360 refuses to play:

        $ MP4Box -info video.mp4
        * Movie Info *
            Timescale 90000 - Duration 02:18:33.365
            Fragmented File no - 2 track(s)
            File Brand mp42 - version 0
            Created: GMT Sat Jul 21 07:08:55 2012
            File has root IOD (9 bytes)
            Scene PL 0xff - Graphics PL 0xff - OD PL 0xff
            Visual PL: ISO Reserved Profile (0x7f)
            Audio PL: High Quality Audio Profile @ Level 2 (0x0f)
            No streams included in root OD
        iTunes Info:
            Encoder Software: HandBrake 0.9.6 2012022800
        Track # 1 Info - TrackID 1 - TimeScale 90000 - Duration 02:18:33.235
        Media Info: Language "Undetermined" - Type "vide:avc1" - 199318 samples
        Visual Track layout: x=0 y=0 width=1280 height=688
        MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
        AVC/H264 Video - Visual Size 1280 x 688
        AVC Info: 1 SPS - 1 PPS - Profile High @ Level 4.1
        NAL Unit length bits: 32
        Self-synchronized
        Track # 2 Info - TrackID 2 - TimeScale 48000 - Duration 02:18:33.365
        Media Info: Language "English" - Type "soun:mp4a" - 389689 samples
        MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40 MPEG-4 Audio
        MPEG-4 Audio AAC LC - 6 Channel(s) - SampleRate 48000
        Synchronized on stream 1

        $ avconv -i video.mp4
        avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov 6 2012 16:51:33 with gcc 4.6.3
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: mp42isomavc1
            creation_time   : 2012-07-21 07:08:55
            encoder         : HandBrake 0.9.6 2012022800
          Duration: 02:18:33.36, start: 0.000000, bitrate: 2299 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 1280x688, 1973 kb/s, 23.98 fps, 90k tbr, 90k tbn, 180k tbc
            Metadata:
              creation_time   : 2012-07-21 07:08:55
            Stream #0.1(eng): Audio: aac, 48000 Hz, 5.1, s16, 319 kb/s
            Metadata:
              creation_time   : 2012-07-21 07:08:55
        At least one output file must be specified

    What tool, such as ffmpeg or mencoder, and what magic command-line incantation should I use to transcode this file into a format the Xbox 360 can play? I want the transcoding process to retain as good video quality as possible.

    Read the article

  • Is there anything wrong with my texture loading method?

    - by José Joel.
    I'm a noob in OpenGL and trying to learn as much as possible. I'm using this method to load my OpenGL textures, loading every .png as RGBA4444. Am I doing anything incorrect?

        - (void)loadTexture:(NSString*)nombre {
            CGImageRef textureImage = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
            if (textureImage == nil) {
                NSLog(@"Failed to load texture image");
                return;
            }

            textureWidth  = NextPowerOfTwo(CGImageGetWidth(textureImage));
            textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
            imageSizeX = CGImageGetWidth(textureImage);
            imageSizeY = CGImageGetHeight(textureImage);

            // 4 bytes per pixel: RGBA
            GLubyte *textureData = (GLubyte *)calloc(1, textureWidth * textureHeight * 4);

            CGContextRef textureContext = CGBitmapContextCreate(textureData,
                textureWidth, textureHeight, 8, textureWidth * 4,
                CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(textureContext,
                CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);

            // Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
            void *tempData = malloc(textureWidth * textureHeight * 2);
            unsigned int* inPixel32 = (unsigned int*)textureData;
            unsigned short* outPixel16 = (unsigned short*)tempData;
            for (int i = 0; i < textureWidth * textureHeight; ++i, ++inPixel32)
                *outPixel16++ =
                    ((((*inPixel32 >>  0) & 0xFF) >> 4) << 12) | // R
                    ((((*inPixel32 >>  8) & 0xFF) >> 4) <<  8) | // G
                    ((((*inPixel32 >> 16) & 0xFF) >> 4) <<  4) | // B
                    ((((*inPixel32 >> 24) & 0xFF) >> 4) <<  0);  // A

            free(textureData);
            textureData = tempData;
            CGContextRelease(textureContext);

            glGenTextures(1, &textures[0]);
            glBindTexture(GL_TEXTURE_2D, textures[0]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                         GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData);
            free(textureData);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

    And this is my dealloc method:

        - (void)dealloc {
            glDeleteTextures(1, textures);
            [super dealloc];
        }

    Read the article

  • WPF application with MS Access database as a data source

    - by Kay Zed
    I have a Microsoft Access 2010 database. Now, using Visual Studio 2010, I want to create a WPF application and add the database as a data source. The app will have a window with a frame that provides navigation through pages. No problem so far. But:

    - What is the right way to set up the database in this scenario? Tables only? Or must everything go via queries? (VS2010 talks about views, which I assume (?) are queries.)
    - Database data must be updatable and records can be added. Some relationships go through link tables (many-to-many) and there are nullable foreign-key relationships. Must I take manual steps to make it work?
    - While adding the data source, VS2010 created an .xsd from my Access database. I think the .xsd might need further tweaking for the application to work the right way. If I change my Access database design, I'd have to regenerate the .xsd again as well. Is this right, and is it the way it is usually done? OR should I let the original Access database go and give the application the capability to create new, empty databases?
    - How do you provide controls in a page to step through the records in a table? Is there a special database control? (See the sketch below.)
    - What is the way (WPF class?) to load records into the data context that displays in a page? (At this level it probably does not matter what type of data source it is.)
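    On the "step through the records" part of the question above, one common WPF mechanism, sketched here with an illustrative helper name, is the collection view that sits between a bound collection and the UI; navigation buttons simply move the view's currency:

        using System.ComponentModel;
        using System.Data;
        using System.Windows.Data;

        static class RecordNavigationSketch
        {
            public static ICollectionView BindAndNavigate(DataTable table)
            {
                // Controls bound to table.DefaultView (with
                // IsSynchronizedWithCurrentItem="True") share this view,
                // so moving currency here moves the UI selection too.
                ICollectionView view = CollectionViewSource.GetDefaultView(table.DefaultView);

                view.MoveCurrentToFirst();
                view.MoveCurrentToNext();     // wire to a "next record" button
                view.MoveCurrentToPrevious(); // wire to a "previous record" button
                return view;
            }
        }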

    Read the article

  • Designing a data model in VS2010 and generating ORM code, application

    - by Kay Zed
    Simply put: I have a database design in my head, and I now want to use Visual Studio 2010 to create a WPF application. The key is to use the VS2010 tools to take as much manual work as possible out of my hands.

    - The database engine is SQLite
    - ORM, probably through DBLINQ
    - Use of LINQ
    - The application can create new, empty database instances
    - Easily maintainable (changes in the data model possible)

    Q: How do I start designing the database model (visually) in Visual Studio 2010? Should this be an .xsd? Do I do this in a separate project?
    Q: Next, how can I make the most use of the VS2010 code-generation tools to generate a business layer?
    Q: I suppose the business layer will be added as a Data Source (in another project?) and from there it's a rather generic data-binding solution? I tried to find clear examples of this, but it's a jungle out there; the hunt for a solution is NOT converging on one clear method... :_(

    Read the article

  • AS3 - Event listener that only fires once

    - by Zed-K
    I'm looking for a way to add an EventListener that automatically removes itself after the first time it fires, but I can't figure out a way of doing this the way I want to. I found this function (here):

        public class EventUtil {
            public static function addOnceEventListener(dispatcher:IEventDispatcher, eventType:String, listener:Function):void {
                var f:Function = function(e:Event):void {
                    dispatcher.removeEventListener(eventType, f);
                    listener(e);
                }
                dispatcher.addEventListener(eventType, f);
            }
        }

    But instead of having to write:

        EventUtil.addOnceEventListener(dispatcher, eventType, listener);

    I would like to use it the usual way:

        dispatcher.addOnceEventListener(eventType, listener);

    Has anybody got an idea of how this could be done? Any help would be greatly appreciated. (I know that Robert Penner's Signals can do this, but I can't use them, since that would mean a lot of code rewriting that I can't afford on my current project.)

    Read the article

  • Flex 4 - Using .pfm/.pfb fonts

    - by Zed-K
    Hi everyone, I just switched to Flash Builder 4 and the Flex 4 SDK, and it seems it's no longer possible to use a .pfm/.pfb font, either by embedding it or by using it as a system font. I keep getting error messages, and Google can't find anybody having the same issue. I tried several methods:

    - copy/pasting the [Embed] statement that was working with the Flex 3 SDK
    - installing the font and then simply calling it by its name in a CSS declaration without embedding it; this seems to work for every .ttf and .otf system font, but not for .pfm/.pfb ones
    - using a Flash-generated SWF that embeds the font

    So far none of these seems to work. Has anybody got an idea of how to achieve this? I actually don't care about using a system font without embedding it, as long as it works. I'd be really grateful if somebody could help me with this; I'm totally stuck and cannot use another font instead.

    Read the article
