Search Results

Search found 271 results on 11 pages for 'drew g'.


  • 2D Selective Gaussian Blur

    - by Joshua Thomas
    I am attempting to use Gaussian blur in a 2D platform game, selectively blurring specific types of platforms with different amounts. I am currently just messing around with simple test code, trying to get it to work correctly. What I need to eventually do is create three separate render targets, leave one normal, blur one slightly, and blur the last heavily, then recombine them on the screen. Where I am now is that I have successfully drawn into a new render target and performed the Gaussian blur on it, but when I draw it back to the screen everything is purple aside from the platforms I drew to the target.

    This is my .fx file:

        #define RADIUS 7
        #define KERNEL_SIZE (RADIUS * 2 + 1)

        // Globals.
        float weights[KERNEL_SIZE];
        float2 offsets[KERNEL_SIZE];

        // Textures.
        texture colorMapTexture;

        sampler2D colorMap = sampler_state
        {
            Texture = <colorMapTexture>;
            MipFilter = Linear;
            MinFilter = Linear;
            MagFilter = Linear;
        };

        // Pixel shaders.
        float4 PS_GaussianBlur(float2 texCoord : TEXCOORD) : COLOR0
        {
            float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f);
            for (int i = 0; i < KERNEL_SIZE; ++i)
                color += tex2D(colorMap, texCoord + offsets[i]) * weights[i];
            return color;
        }

        // Techniques.
        technique GaussianBlur
        {
            pass
            {
                PixelShader = compile ps_2_0 PS_GaussianBlur();
            }
        }

    This is the code I'm using for the Gaussian blur:

        public Texture2D PerformGaussianBlur(Texture2D srcTexture, RenderTarget2D renderTarget1,
            RenderTarget2D renderTarget2, SpriteBatch spriteBatch)
        {
            if (effect == null)
                throw new InvalidOperationException("GaussianBlur.fx effect not loaded.");

            Texture2D outputTexture = null;
            Rectangle srcRect = new Rectangle(0, 0, srcTexture.Width, srcTexture.Height);
            Rectangle destRect1 = new Rectangle(0, 0, renderTarget1.Width, renderTarget1.Height);
            Rectangle destRect2 = new Rectangle(0, 0, renderTarget2.Width, renderTarget2.Height);

            // Perform horizontal Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget1);
            effect.CurrentTechnique = effect.Techniques["GaussianBlur"];
            effect.Parameters["weights"].SetValue(kernel);
            effect.Parameters["colorMapTexture"].SetValue(srcTexture);
            effect.Parameters["offsets"].SetValue(offsetsHoriz);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(srcTexture, destRect1, Color.White);
            spriteBatch.End();

            // Perform vertical Gaussian blur.
            game.GraphicsDevice.SetRenderTarget(renderTarget2);
            outputTexture = (Texture2D)renderTarget1;
            effect.Parameters["colorMapTexture"].SetValue(outputTexture);
            effect.Parameters["offsets"].SetValue(offsetsVert);

            spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
            spriteBatch.Draw(outputTexture, destRect2, Color.White);
            spriteBatch.End();

            // Return the Gaussian blurred texture.
            game.GraphicsDevice.SetRenderTarget(null);
            outputTexture = (Texture2D)renderTarget2;
            return outputTexture;
        }

    And this is the draw method affected:

        public void Draw(SpriteBatch spriteBatch)
        {
            device.SetRenderTarget(maxBlur);
            spriteBatch.Begin();
            foreach (Brick brick in blueBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();

            blue = gBlur.PerformGaussianBlur((Texture2D)maxBlur, helperTarget, maxBlur, spriteBatch);

            spriteBatch.Begin();
            device.SetRenderTarget(null);
            foreach (Brick brick in redBricks)
                brick.Draw(spriteBatch);
            foreach (Brick brick in greenBricks)
                brick.Draw(spriteBatch);
            spriteBatch.Draw(blue, new Rectangle(0, 0, blue.Width, blue.Height), Color.White);
            foreach (Brick brick in purpleBricks)
                brick.Draw(spriteBatch);
            spriteBatch.End();
        }

    I'm sorry about the massive brick of text (I tried to include images, but as a new user I wasn't allowed), but I wanted to get my problem across clearly, as I have been searching for an answer for quite a while now. As a side note, I have seen the bloom sample. It is very well commented, but overly complicated since it deals in 3D, and I was unable to take what I needed to learn from it. Thanks for any and all help.
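    A note on the CPU side of this setup: the pixel shader above is just a weighted sum, so the result depends entirely on the weights and offsets arrays uploaded before each pass. As a hedged, language-neutral illustration of how those are commonly built for a separable blur (written in Java here, although the question itself is XNA/C#; the sigma value and texture size are assumptions for the example):

        // Sketch: normalized 1D Gaussian weights plus texel offsets for a
        // separable blur. Values are illustrative, not from the question.
        public final class GaussianKernel {

            public static float[] weights(int radius, double sigma) {
                int size = radius * 2 + 1;
                float[] w = new float[size];
                double sum = 0.0;
                for (int i = 0; i < size; i++) {
                    int x = i - radius;
                    w[i] = (float) Math.exp(-(x * x) / (2.0 * sigma * sigma));
                    sum += w[i];
                }
                // Normalize so the blurred image keeps its overall brightness.
                for (int i = 0; i < size; i++) {
                    w[i] /= sum;
                }
                return w;
            }

            // Offsets for the horizontal pass are (k / textureWidth, 0);
            // the vertical pass swaps the axes and uses the texture height.
            public static float[] horizontalOffsets(int radius, int textureWidth) {
                int size = radius * 2 + 1;
                float[] xs = new float[size];
                for (int i = 0; i < size; i++) {
                    xs[i] = (i - radius) / (float) textureWidth;
                }
                return xs;
            }

            public static void main(String[] args) {
                float[] w = weights(7, 2.0);   // RADIUS 7, as in the .fx file
                System.out.println("center weight: " + w[7]);
            }
        }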

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    At mid-day on Wednesday, the always colorful Oracle evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, “Do You Like Coffee with Your Dessert?”. The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools.

    “I don't think there is a single feature that makes the Raspberry Pi significant,” observed Ritter, “but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing.”

    Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it's an excellent introduction to the ARM architecture; it runs well with Java and will get better once it has official hard-float support. The possibilities are vast.

    Ritter explained that the Raspberry Pi project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981, which produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day.

    Ritter described the specification as follows:
    * CPU: ARM 11 core running at 700MHz in a Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
    * Memory: 256Mb
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

    He took attendees through a brief history of the ARM architecture:
    * Acorn BBC Micro (6502 based) – not powerful enough for Acorn's plans for a business computer
    * Berkeley RISC Project – the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was SPARC
    * Acorn RISC Machine (ARM) – 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Advanced RISC Machines – a spin-off from Acorn

    Next he presented its features:
    * 32-bit RISC architecture – ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips were sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs – the Raspberry Pi is ARM11, using the ARMv6 instruction set
    * Low power consumption – good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and requires no heatsink or fan

    He described the current ARM technology:
    * ARMv6 – ARM 11, ARM Cortex-M
    * ARMv7 – ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced) – will support 64-bit data and addressing

    He next gave the Java specifics for ARM, starting with floating-point operations:
    * Despite being an ARMv6 processor, it does include an FPU – an FPU only became standard as of ARMv7
    * The FPU (hard float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so special builds of both are needed – the Raspbian distro build is now available; the Oracle JVM is in the works, release date TBD

    Not so RISC – performance improvements:
    * DSP enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating point (VFP)
    * NEON
    * Security enhancements (TrustZone)

    He spent a few minutes going over the challenges of using Java on the Raspberry Pi, covering sound, vision, serial (TTL UART), USB and GPIO. On implementing sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API – remember to add audio to the user's groups; some bits work, others not so much
    * Playing a (correctly formatted) WAV file works
    * Using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech should work once sound works properly

    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only – it will be released as a technology preview soon
    * The work involves an optimal implementation of the Prism graphics engine – X11?
    * Once the JavaFX implementation is completed there will be little of concern to developers – it's just Java (WORA)

    He explained the basics of the serial port:
    * The UART provides TTL-level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console

    He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM that works very well with Java, and one that will work even better in the future. The opportunities are limitless. For further info, check out the Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.
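    Since the session noted that playing a correctly formatted WAV file through the Java Sound API already works on the Pi, a minimal sketch of that call path may be useful; the file name and the sleep-until-done approach are illustrative assumptions, not from the session:

        import java.io.File;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        public class PlayWav {
            public static void main(String[] args) throws Exception {
                // Open the WAV file and hand it to a Clip; the format must be
                // one the mixer supports (the "right format" caveat above).
                AudioInputStream in = AudioSystem.getAudioInputStream(new File("test.wav"));
                Clip clip = AudioSystem.getClip();
                clip.open(in);
                clip.start();
                // Block until playback finishes (Clip plays on a background thread).
                Thread.sleep(clip.getMicrosecondLength() / 1000);
                clip.close();
            }
        }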

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” Scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.

    Don't let the Project Manager be the Product Owner
    The first lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person employed as a project manager to be the product owner – in fact they have the most to gain should the project fail.

    Know the time commitments of team members to the project
    The second lesson learnt is to get a firm time commitment from the members of the team for the sprint, and to hold them to it. In this project many of the issues we faced came from team members having to double up on supporting existing projects/systems and the Scrum project. In many situations they just didn't get round to doing any work on the Scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse. Putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan a sprint without knowing the actual time available from the members? By actual time I mean the result of getting them to go through all their appointments, lunch times and breaks and removing those from their commitment – this gets you to a realistic time that they can dedicate.

    Make sure you meet your minimum team sizes
    In a recent post I wrote about the difference between a partnership and a team. If you are going to do Scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is very easy to lose momentum on the project. The more people you have in the team (up to about 9), the more momentum the project will have when people are not around.

    Swapping from one methodology to another can seem like waste to the customer
    It sounds bad, but most customers don't care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process. With this particular project there was a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a Scrum implementation – this seemed like waste to the customer. We should have managed the customer's expectations properly.

    Don't play the role of the Scrum master if you can't be the Scrum master
    With this particular implementation I was the “Scrum master”, but all I did was go through the formal meetings of Scrum – I attended stand-up, retrospectives and planning – I was not hands-on on the ground. I was not performing the most important role, removing blockages, and by the end of the project there were a number of blockages “cropping up”. A better approach would have been to take someone on the team, train them to be the Scrum master and be present to coach them; alternatively, actually be on the team on a full-time basis and be the Scrum master. Just going through the meetings of Scrum didn't mean we were doing Scrum.

    So we failed with this one – if you fail, look at it from an agile perspective
    As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions expressed by various people on the team were not encouraging, and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed, and then focussing on learning why and how to avoid such failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Mac - Flash file not loaded in independent flash player

    - by Mugdha
    Hi, I am working on an independent application to play flash files on Mac. I have already done the same for Linux, and it works flawlessly, but on Mac for some reason flash is not drawing to my window. It is not throwing any kind of error either. I am using Flash Player 10, which means I am using the Core Graphics drawing model. I am able to send mouse events to flash, and I wrote a sample plugin to check if there was a problem in the context that I was sending, but my sample plugin draws properly to the window. I am getting a call for NPN_InvalidateRect twice, and as a response I send an update event back to flash. I drew a dummy rectangle to check that my context is correct. I have flipped the context to make the origin the top left corner. On right-clicking the debug version of the flash player it shows the following message: "Movie not loaded..."

    Can anyone give me any idea why the content is not being drawn? I would really appreciate the help, as I have been struggling with it for more than a month now. Here is a small log of the interaction that I have with flash:

        NPN_UserAgent Called
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPN_GetValue Called with variable NPNVSupportsWindowless; return true
        NPN_SetValue Called for Variable - NPPVpluginTransparentBool; return true
        NPN_GetValue Called with variable NPNVsupportsCoreGraphicsBool; return true
        NPN_SetValue Called for Variable - NPNVpluginDrawingModel
        NPP_SetWindow (CoreGraphics): 0, window=0xebaa90, context=0xe4c930, window.x:0 window.y:22 window.width:480 window.height:270
        NPP_HandleEvent(activateEvent) accepted:0 isActive: 1
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_UserAgent Called
        NPN_GetURLNotify Called with URL - javascript:top.location+"flashplugin_unique"
        NPN_GetValue Called with variable NPNVWindowNPObject; return NULL
        NPP_NewStream URL=/Users/mjain/Desktop/clock.swf MIME=application/x-shockwave-flash error=0
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPN_InvalidateRect Called
        NPP_Write responseURL=/Users/mjain/Desktop/clock.swf bytes=9925 total-delivered=9925/9925
        NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455
        NPP_DestroyStream responseURL=/Users/mjain/Desktop/clock.swf error=0
        NPP_HandleEvent(updateEvt) accepted: 1
        NPN_InvalidateRect Called
        NPP_HandleEvent(updateEvt) accepted: 1
        NPP_NewStream URL=javascript:top.location+"flashplugin_unique" MIME=text/plain error=0
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPN_UserAgent Called
        NPP_Write responseURL=javascript:top.location+"flashplugin_unique" bytes=52 total-delivered=52/52
        NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000
        NPP_DestroyStream responseURL=javascript:top.location+"flashplugin_unique" error=0
        NPP_URLNotify responseURL=javascript:top.location+"flashplugin_unique" reason=0

    Thanks, Mugdha.

    Read the article

  • How can I do something ~after~ an event has fired in C#?

    - by Siracuse
    I'm using the following project to handle global keyboard and mouse hooking in my C# application. The project is basically a wrapper around the Win API call SetWindowsHookEx, using either the WH_MOUSE_LL or WH_KEYBOARD_LL constants. It also manages certain state and generally makes this kind of hooking pretty pain-free. I'm using this for mouse-gesture recognition software I'm working on.

    Basically, I have it set up so it detects when a global hotkey is pressed down (say CTRL); the user then moves the mouse in the shape of a pre-defined gesture and releases the global hotkey. The KeyDown event is processed and tells my program to start recording the mouse locations until it receives the KeyUp event. This is working fine, and it gives users an easy way to enter mouse-gesture mode. Once the KeyUp event fires and the appropriate gesture is detected, it is supposed to send certain keystrokes, which the user has defined for that particular gesture they just drew, to the active window. I'm using the SendKeys.Send/SendWait methods to send output to the current window.

    My problem is this: when the user releases their global hotkey (say CTRL), it fires the KeyUp event. My program takes its recorded mouse points, detects the relevant gesture and attempts to send the correct input via SendKeys. However, because all of this happens inside the KeyUp event, the global hotkey hasn't finished being processed. So, for example, if I defined a gesture to send the key "A" when it is detected, and my global hotkey is CTRL, then SendKeys will send "A" while CTRL is still "down". Instead of just sending A, I'm getting CTRL-A; instead of physically sending the single character "A", it is selecting all via the CTRL-A shortcut. Even though the user has released CTRL (the global hotkey), it is still considered down by the system.

    Once my KeyUp event fires, how can I have my program wait for some period of time, or for some event, so I can be sure that the global hotkey is truly no longer being registered by the system, and only then send the correct input via SendKeys?

    Read the article

  • Adding negative and positive numbers in Java without BigInt

    - by liamg
    Hi, I'm trying to write a small Java class. I have an object called BigNumber. I wrote a method that adds two positive numbers, and another method that subtracts two (also positive) numbers. Now I want them to handle negative numbers, so I wrote a couple of 'if' statements, e.g.:

        if (this.sign == 1 /* means '+' */) {
            if (sn1.sign == 1) {
                if (this.compare(sn1) == -1 /* means this < sn1 */)
                    return sn1.add(this);
                else
                    return this.add(sn1);
            }
            ...

    Unfortunately the code just looks ugly, like a bush of ifs and elses. Is there a better way to write that kind of code?

    Edit: I can't just do this.add(sn1), because sometimes I want to add a positive number to a negative one, or a negative to a negative, but add can handle only positive numbers. So I have to use basic math; for example, instead of adding a negative number to a negative number, I add this.abs() (the absolute value of the number) to sn1.abs() and return the result with the opposite sign.

    Drew: these lines are from the method _add. I use this method to decide what to do with the numbers it receives. Send them to the add method? Or send them to the subtract method, but in a different order (sn1.subtract(this))? And so on:

        if (this.sign == 1) {
            if (sn1.sign == 1) {
                if (this.compare(sn1) == -1)
                    return sn1.add(this);
                else
                    return this.add(sn1);
            } else if (wl1.sign == 0)
                return this;
            else {
                if (this.compare(sn1.abs()) == 1)
                    return this.subtract(sn1.abs());
                else if (this.compare(sn1.abs()) == 0)
                    return new BigNumber(0);
                else
                    return sn1.abs().subtract(this).negate(); // return the number with opposite sign
            }
        } else if (this.sign == 0)
            return sn1;
        else {
            if (wl1.sign == 1) {
                if (this.abs().compare(sn1) == -1)
                    return sn1.subtract(this.abs());
                else if (this.abs().compare(sn1) == 0)
                    return new BigNumber(0);
                else
                    return this.abs().subtract(sn1).negate();
            } else if (sn1.sign == 0)
                return this;
            else
                return (this.abs().add(wl1.abs())).negate();
        }

    As you can see, this code looks horrible.
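    For what it's worth, one common way to flatten this kind of case analysis is to normalize to signs and magnitudes first, leaving only two real branches: same sign (add magnitudes, keep the sign) and different signs (subtract the smaller magnitude from the larger, take the larger's sign). A hedged sketch against the method names used in the question (abs, compare, add, subtract, negate, and the -1/0/1 sign field are all assumed to behave as described there):

        // Sketch: sign-normalized addition. Assumes add()/subtract() work on
        // non-negative magnitudes, compare() returns -1/0/1, and sign is -1/0/1.
        public BigNumber signedAdd(BigNumber other) {
            if (this.sign == 0) return other;     // 0 + x = x
            if (other.sign == 0) return this;     // x + 0 = x

            if (this.sign == other.sign) {
                // Same sign: add magnitudes, reapply the shared sign.
                BigNumber sum = this.abs().add(other.abs());
                return (this.sign == 1) ? sum : sum.negate();
            }

            // Different signs: the result takes the sign of the larger magnitude.
            int cmp = this.abs().compare(other.abs());
            if (cmp == 0) return new BigNumber(0);
            BigNumber diff = (cmp > 0) ? this.abs().subtract(other.abs())
                                       : other.abs().subtract(this.abs());
            int resultSign = (cmp > 0) ? this.sign : other.sign;
            return (resultSign == 1) ? diff : diff.negate();
        }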

    Read the article

  • Blackberry ListField Text Wrapping - only two lines.

    - by Diego Tori
    Within my ListField, I want to be able to take any given long String, wrap just the first line within the width of the screen, then take the remaining string, display it below, and ellipsis the rest. Right now, this is what I'm using to detect wrapping within my draw/paint call:

        int totalWidth = 0;
        int charWidth = 0;
        int lastIndex = 0;
        int spaceIndex = 0;
        int lineIndex = 0;
        String firstLine = "";
        String secondLine = "";
        boolean isSecondLine = false;

        for (int i = 0; i < longString.length(); i++) {
            charWidth = Font.getDefault().getAdvance(String.valueOf(longString.charAt(i)));
            //System.out.println("char width: " + charWidth);

            if (longString.charAt(i) == ' ')
                spaceIndex = i;

            if ((charWidth + totalWidth) > (this.getWidth() - 32)) {
                //g.drawText(longString.substring(lastIndex, spaceIndex), xpos, y + _padding, DrawStyle.LEFT, w - xpos);
                lineIndex++;
                System.out.println("current lines to draw: " + lineIndex);
                /*if (lineIndex = 2) {
                    int idx = i;
                    System.out.println("first line " + longString.substring(lastIndex, spaceIndex));
                    System.out.println("second line " + longString.substring(spaceIndex + 1, longString.length()));
                }*/
                //firstLine = longString.substring(lastIndex, spaceIndex);
                firstLine = longString.substring(0, spaceIndex);
                //System.out.println("first new line: " + firstLine);
                //isSecondLine = true;
                //xpos = 0;
                //y += Font.getDefault().getHeight();
                i = spaceIndex + 1;
                lastIndex = i;
                System.out.println("Rest of string: " + longString.substring(lastIndex, longString.length()));
                charWidth = 0;
                totalWidth = 0;
            }
            totalWidth += charWidth;
            System.out.println("total width: " + totalWidth);
            //g.drawText(longString.substring(lastIndex, i+1), xpos, y + (_padding*3)+4, DrawStyle.ELLIPSIS, w - xpos);
            //secondLine = longString.substring(lastIndex, i+1);
            secondLine = longString.substring(lastIndex, longString.length());
            //isSecondLine = true;
        }

    Now this does a great job of actually wrapping any given string (assuming the y values were properly offset, and it only drew the text after the string width exceeded the screen width, as well as the remaining string afterwards). However, every time I try to get the first two lines, it ends up returning the last two lines of the string if it goes beyond two lines. Is there a better way to do this sort of thing? I am fresh out of ideas.
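    The "last two lines" symptom is consistent with the loop re-assigning firstLine and secondLine on every subsequent break instead of stopping after the first one. A hedged sketch of the alternative: compute only the first break point, then treat everything after it as the second line and let DrawStyle.ELLIPSIS truncate it, reusing the question's own Font.getDefault().getAdvance() measurement (variable names are illustrative):

        // Sketch: split longString into at most two lines for a region of
        // width maxWidth. Only the first wrap point is computed; everything
        // after it becomes the second line, drawn with DrawStyle.ELLIPSIS.
        int maxWidth = this.getWidth() - 32;   // same margin as in the question
        int lineWidth = 0;
        int lastSpace = -1;
        int breakAt = -1;

        for (int i = 0; i < longString.length(); i++) {
            if (longString.charAt(i) == ' ') lastSpace = i;
            lineWidth += Font.getDefault().getAdvance(String.valueOf(longString.charAt(i)));
            if (lineWidth > maxWidth) {
                // Break at the last space if one was seen, else hard-break here.
                breakAt = (lastSpace > 0) ? lastSpace : i;
                break;                         // stop: only line one matters
            }
        }

        String firstLine  = (breakAt < 0) ? longString : longString.substring(0, breakAt);
        String secondLine = (breakAt < 0) ? "" : longString.substring(breakAt + 1);
        // Then, in paint: draw firstLine normally and secondLine with
        // DrawStyle.ELLIPSIS so anything beyond two lines is truncated.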

    Read the article

  • Design Technique: How to design a complex system for processing orders, products and units.

    - by Shyam
    Hi, programming is fun: I learned that by trying out simple challenges, reading some books and following some tutorials. I am able to grasp the concepts of writing with OO (I do so in Ruby) and write a bit of code myself. What bugs me is that I feel I am re-inventing the wheel: I haven't followed an education or found a (free) book that explains the whys instead of the hows, and I've learned from the A-Team that it is the plan that makes it come together.

    So, armed with my nuby Ruby skills, I decided I wanted to program a virtual store. I figured out the following: my virtual store will have products and services, inventories, orders and shipping, and customers.

    Now this isn't complex at all. With the help of some cool tools (CMapTools) I drew out some concepts, but quickly enough (thanks to my inferior experience in designing) my design started to bite me. My very first product line was virtual "laptops". So, I created a class (Ruby):

        class Product
          attr_accessor :name, :price

          def initialize(name, price)
            @name = name
            @price = price
          end
        end

    which can be instantiated by doing (IRb):

        x = Product.new("Banana Pro", 250)

    Since I want my virtual customers to be able to purchase more than one product, or various types, I figured I needed some kind of "Order" mechanism:

        class Order
          def initialize(order_no)
            @order_no = order_no
            @line_items = []
          end

          def add_product(myproduct)
            @line_items << myproduct
          end

          def show_order()
            puts @order_no
            @line_items.each do |x|
              puts x.name.to_s + "\t" + x.price.to_s
            end
          end
        end

    which can be instantiated by doing (IRb):

        z = Order.new(1234)
        z.add_product(x)
        z.show_order

    Splendid – I now have a very simple ordering system that allows me to add products to an order. But here comes my real question: what if I have three models of my product (economy, business, showoff)? Or what if my products are composed of separate units (bigger screen, nicer keyboard, different OS)? Surely I could make them three separate products, or add complexity to my Product class, but what I am looking for are best practices for designing a flexible product object that can be used in the real world to facilitate a complex system.

    My apologies if my grammar and spelling are in error, as English is not my first language; I took the time to check and translate as far as I could! Thank you for your answers, comments and feedback!
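    As a hedged illustration of one common answer to this kind of question - composition over inheritance, where a product is a base item plus a list of option components, so "economy/business/showoff" become data rather than subclasses - here is a minimal sketch. It is written in Java rather than Ruby, and all names and prices are invented for the example:

        // Sketch: model product variants by composition. A product owns a base
        // price plus option components; different models are different data.
        import java.util.ArrayList;
        import java.util.List;

        class Option {
            final String name;
            final int price;
            Option(String name, int price) { this.name = name; this.price = price; }
        }

        class Product {
            final String name;
            final int basePrice;
            final List<Option> options = new ArrayList<>();

            Product(String name, int basePrice) {
                this.name = name;
                this.basePrice = basePrice;
            }

            Product with(Option o) { options.add(o); return this; }

            int price() {
                int total = basePrice;
                for (Option o : options) total += o.price;
                return total;
            }
        }

        class Demo {
            public static void main(String[] args) {
                Product showoff = new Product("Banana Pro", 250)
                        .with(new Option("bigger screen", 80))
                        .with(new Option("nicer keyboard", 25));
                System.out.println(showoff.price());   // 355
            }
        }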

    Read the article

  • Passing password value through URL

    - by Steven Wright
    OK, I see a lot of people asking about passing other values, URLs and random stuff through a URL, but I don't find anything about sending a password to a password field. Here is my situation: I have a ton of sites I use on a daily basis with my work, and about 90% require logins. Obviously remembering 80 bajillion logins for each site is dumb, especially when there is more than one user name I use for each site. So to make life easier, I drew up a nifty JSP app that stores all of my logins in a DB table and creates a user interface for the specific page I want to visit. Each page has a button that sends a username and password into the id parameters of the HTML inputs.

    Problem: I can get the usernames and other info to show up just dandy, but when I try to send a password to a password field, it seems that nothing gets received by the page I'm trying to hit. Is there some ninja stuff I need to be doing here, or is it just not easily possible? Basically this is what I do now:

        http://addresshere/support?loginname=steveoooo&loginpass=passwordhere

    and some of my HTML looks like this:

        <form name="userform" method="post" action="index.jsp">
          <input type="hidden" name="submit_login" value="y">
          <table width="100%">
            <tr class="main">
              <td width="100" nowrap>Username:</td>
              <td><input type="text" name="loginname" value="" size="30" maxlength="64"></td>
            </tr>
            <tr class="main">
              <td>Password: </font></td>
              <td><input type="password" name="loginpass" value="" size="30" maxlength="64"></td>
            </tr>
            <tr class="main">
              <td><center><input type="submit" name="submit" value="Login"></center></td>
            </tr>
          </table>
        </form>

    Any suggestions?
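    One point worth making explicit: the form above uses method="post", so the target page reads loginname/loginpass from the POST body via request.getParameter(...), and values passed in the query string only appear in the fields if the page explicitly echoes them back into the value attributes - a static value="" shows nothing, whatever the URL carries. A hedged servlet-style sketch of that echoing, with an invented class name and markup:

        // Sketch: in the Servlet API, getParameter() covers both query-string
        // and POST-body parameters, but nothing is pre-filled unless the page
        // writes the value back into the markup itself.
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class LoginEcho extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String name = req.getParameter("loginname");  // from ?loginname=...
                String pass = req.getParameter("loginpass");
                // Only if the page echoes the values do the fields appear filled:
                resp.getWriter().printf(
                    "<input type='text' name='loginname' value='%s'>%n" +
                    "<input type='password' name='loginpass' value='%s'>%n",
                    name == null ? "" : name,
                    pass == null ? "" : pass);
            }
        }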

    Read the article

  • Sprite not moving when using a function from another class (SFML, C++)

    - by user2892932
    I have a Game.cpp, and I am calling an update function in my Player class. In my player Update function I have it check for keyboard input, and that seems to work, but whenever I try to call the .move() function, it seems to do nothing. I get no errors either. I am new to SFML and decent with C++. Help is appreciated!

        #include "Player.h"

        Player::Player(void) : vel(0), maxvel(100)
        {
            Load("Assets/sss.png", true);
        }

        void Player::Update(sf::Sprite& p)
        {
            if (sf::Keyboard::isKeyPressed(sf::Keyboard::A))
            {
                moveObject(-3, 0, p);
            }
            if (sf::Keyboard::isKeyPressed(sf::Keyboard::D))
            {
                moveObject(-3, 0, p);
            }
        }

        Player::~Player(void)
        {
        }

    This is the GameObject.cpp:

        #include "GameObject.h"
        #include <iostream>

        GameObject::GameObject(void)
        {
            isLoaded = false;
        }

        void GameObject::Load(std::string flname, bool isPlayer)
        {
            if (!tex.loadFromFile(flname))
            {
                EXIT_FAILURE;
            }
            else
            {
                if (isPlayer)
                {
                    if (!tex.loadFromFile(flname, sf::IntRect(0, 0, 33, 33)))
                    {
                        EXIT_FAILURE;
                    }
                    else
                    {
                        std::cout << "Loading image" << "\n";
                        filename = flname;
                        spr.setTexture(tex);
                        isLoaded = true;
                    }
                }
                else
                {
                    std::cout << "Loading image" << "\n";
                    filename = flname;
                    spr.setTexture(tex);
                    isLoaded = true;
                }
            }
        }

        void GameObject::Draw(sf::RenderWindow& window)
        {
            if (isLoaded)
            {
                window.draw(spr);
                window.display();
                std::cout << "Sprite drew" << "\n";
            }
        }

        void GameObject::setPos(float x, float y)
        {
            if (isLoaded)
            {
                spr.setPosition(x, y);
            }
        }

        sf::Vector2f GameObject::GetObjPos()
        {
            return spr.getPosition();
        }

        sf::Sprite& GameObject::getSprite()
        {
            return spr;
        }

        void GameObject::moveObject(float x, float y, sf::Sprite& sp)
        {
            sp.move(x, y);
        }

        GameObject::~GameObject(void)
        {
        }

    Read the article

  • Unable to get Stencil Buffer to work in iOS 4+ (5.0 works fine). [OpenGL ES 2.0]

    - by MurderDev
    So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with stencils. On iOS 4 I get a black screen. On iOS 5 I get exactly what I expect: the transparent areas of the image I drew in the stencil are the only areas being drawn later. Code is below.

    This is where I set up the frame buffer, depth and stencil. In iOS the depth and stencil are combined:

        - (void)setupDepthBuffer
        {
            glGenRenderbuffers(1, &depthRenderBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES,
                self.frame.size.width * [[UIScreen mainScreen] scale],
                self.frame.size.height * [[UIScreen mainScreen] scale]);
        }

        - (void)setupFrameBuffer
        {
            glGenFramebuffers(1, &frameBuffer);
            glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);

            // Check the FBO.
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            {
                NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER));
            }
        }

    This is how I am setting up and drawing the stencil (shader code below):

        glEnable(GL_STENCIL_TEST);
        glDisable(GL_DEPTH_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, -1);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(0, 0, 0, 0);
        glClear(GL_STENCIL_BUFFER_BIT);

        machineForeground.shader = [StencilEffect sharedInstance];
        [machineForeground draw];
        machineForeground.shader = [BasicEffect sharedInstance];

        glDisable(GL_STENCIL_TEST);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);

    Here is where I am using the stencil:

        glEnable(GL_STENCIL_TEST);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glStencilFunc(GL_EQUAL, 1, -1);
        ...Draw stuff here
        glDisable(GL_STENCIL_TEST);

    Finally, here is my fragment shader:

        varying lowp vec2 TexCoordOut;
        uniform sampler2D Texture;

        void main(void)
        {
            lowp vec4 color = texture2D(Texture, TexCoordOut);
            if (color.a < 0.1)
                gl_FragColor = color;
            else
                discard;
        }

    Read the article

  • Rockmelt, the technology adoption model, and Facebook's spare internet

    - by Roger Hart
    Regardless of how good it is, you'd have to have a heart of stone not to make snide remarks about Rockmelt. After all, on the surface it looks a lot like some people spent two years building a browser instead of just bashing out a Chrome extension over a wet weekend. It probably does some more stuff. I don't know for sure because artificial scarcity is cool, apparently, so the "invitation" is still in the post*. I may in fact never know for sure, because I'm not wild about Facebook sign-in as a prerequisite for anything.

    From the video, and some initial reviews, my early reaction was: I have a browser, I have a Twitter client; what on earth is this for? The answer, of course, is "not me".

    Rockmelt is, in a way, quite audacious. Oh, sure, on launch day it's Bay Area bar-chat for the kids with no lenses in their retro specs and trousers that give you deep-vein thrombosis, but it's not really about them. Likewise, Facebook just launched Google Wave, or something. And all the tech snobbery and scorn packed into describing it that way is irrelevant next to what they're doing with their platform.

    Here's something I drew in MS Paint** because I don't want to get sued: (see: The technology adoption lifecycle)

    A while ago in the Guardian, John Lanchester dusted off the idiom that "technology is stuff that doesn't work yet". The rest of the article would be quite interesting if it wasn't largely about MySpace, and he's sort of got a point. If you bolt on the sentiment that risk-averse businessmen like things that work, you've got the essence of Crossing the Chasm. Products for the mainstream market don't look much like technology. Think for a second about early (1980s ish) hi-fi systems, with all the knobs and fiddly bits, their ostentatious technophile aesthetic. Then consider their sleeker and less (or at least less conspicuously) functional successors in the 1990s. The theory goes that innovators and early adopters like technology, it's a hobby in itself. The rest of the humans seem to like magic boxes with very few buttons that make stuff happen and never trouble them about why.

    Personally, I consider Apple's maddening insistence that iTunes is an acceptable way to move files around to be more or less morally unacceptable. Most people couldn't care less. Hence Rockmelt, and hence Facebook's continued growth. Rockmelt looks pointless to me, because I aggregate my social gubbins with Digsby, or use TweetDeck. But my use case is different and so are my enthusiasms.

    If I want to share photos, I'll use Flickr - but Facebook has photo sharing. If I want a short broadcast message, I'll use Twitter - Facebook has status updates. If I want to sell something with relatively little hassle, there's eBay - or Facebook marketplace. YouTube - check, FB Video. Email - messaging. Calendaring apps, yeah there are loads, or FB Events. What if I want to host a simple web page? Sure, they've got pages. Also Notes for blogging, and more games than I can count. This stuff is right there, where millions and millions of users are already, and for what they need it just works. It's not about me, because I'm not in the big juicy area under the curve. It's what 1990s portal sites could never have dreamed of achieving. Facebook is AOL on speed, crack, and some designer drugs it had specially imported from the future. It's a n00b-friendly gateway to the internet that just happens to serve up all the things you want to do online, right where you are. Oh, and everybody else is there too.

    The price of having all this and the social graph too is that you have all of this, and the social graph too. But plenty of folks have more incisive things to say than me about the whole privacy shebang, and it's not really what I'm talking about. Facebook is maintaining a vast, and fairly fully-featured training-wheels internet. And it makes up a large proportion of the online experience for a lot of people***. It's the entire web (2.0?) experience for the early and late majority. And sure, no individual bit of it is quite as slick or as fully-realised as something like Flickr (which wows me a bit every time I use it. Those guys are good at the web), but it doesn't have to be. It has to be unobtrusively good enough for the regular humans. It has to not feel like technology.

    This is what Rockmelt sort of is. You're online, you want something nebulously social, and you don't want to faff about with, say, Twitter clients. Wow! There it is on a really distracting sidebar, right in your browser. No effort! Yeah - fish nor fowl, much? It might work, I guess. There may be a demographic who want their social web experience more simply than tech tinkering, and who aren't just getting it from Facebook (or, for that matter, mobile devices). But I'd be surprised. Rockmelt feels like an attempt to grab a slice of Facebook-style "Look! It's right here, where you already are!", but it's still asking the mature market to install a new browser. Presumably this is where that Facebook sign-in predicate comes in handy, though it'll take some potent awareness marketing to make it fly. Meanwhile, Facebook quietly has the entire rest of the internet as a product management resource, and can continue to give most of the people most of what they want. Something that has not gone un-noticed in its potential to look a little sinister. But heck, they might even make Google Wave popular.

    *This was true last week when I drafted this post. I got an invite subsequently, hence the screenshot.
    **MS Paint is no fun any more. It's actually good in Windows 7. Farewell ironically-shonky diagrams.
    ***It's also behind a single sign-in, lending a veneer of confidence, and partially solving the problem of usernames being crummy unique identifiers. I'll be blogging about that at some point.

    Read the article

  • Underwriting in a New Frontier: Spurring Innovation

    - by [email protected]
    Susan Keuer, product strategy manager for Oracle Insurance, shares her experiences and insight from the 2010 Association of Home Office Underwriters (AHOU) Annual Conference, April 11-14, in San Antonio, Texas.

    How can I be more innovative in underwriting? It's a common question I hear from insurance carriers, producers and others, so it was no surprise that it was the key theme at the recent 2010 AHOU Annual Conference. This year's event drew more than 900 insurance professionals involved in the underwriting process across life and annuities, property and casualty, and reinsurance from around the globe, including the U.S., Canada, Australia, the Bahamas and more, to San Antonio – a Texas city where innovation transformed a series of downtown drainage canals into its premier River Walk tourist destination.

    CNN's medical correspondent Dr. Sanjay Gupta kicked off the conference with a phenomenal opening session that drove home the theme of the conference, "Underwriting in a New Frontier: Spurring Innovation." Drawing from his own experience as a neurosurgeon treating critically injured patients in the field in Iraq, Gupta inspired audience members to think outside the box during the underwriting process. He shared a compelling story of operating on a soldier who had suffered head trauma in a field hospital. With minimal supplies available, Gupta used a Black and Decker saw to operate on the soldier's head and reduce pressure on his swelling brain. Drawing from this example, Gupta encouraged underwriters to think creatively, be innovative, and consider new tools and sources of information, such as social networking sites, during the underwriting process. So as you are looking at risk, take into consideration all the resources you have available.

    Gupta also stressed the concept of IKIGAI – noting that individuals who believe that their life is worth living are less likely to die than their counterparts without this belief. How does one quantify this approach to life, or thought process, when evaluating risk? Could this be something to consider as a "category" in the near future? How can this same belief in your own work spur innovation?

    The role of technology was a hot topic of discussion throughout the conference. Sessions delved into everything from the latest underwriting software to the rise of social media and how it is being increasingly integrated into underwriting processes and solutions. In one session, a trio of panelists representing the carrier, producer and vendor communities stressed to underwriters the importance of leveraging new technology and the plethora of online information sources, all of which can be used to accurately, honestly and consistently evaluate risk throughout the underwriting process. Another focused on the explosion of social media, noting:

    1. Social media is growing exponentially – about eight percent of Americans used social media five years ago. Today about 46 percent of Americans do so, with 85 percent of financial services professionals using social media in their work.
    2. It will impact your business – underwriters reconfirmed over and over that they are increasingly using "free" tools available in cyberspace in lieu of more costly solutions, such as inspection reports conducted by individuals in the field.
    3. Information is instantly available on the Web, anytime, anywhere – LinkedIn was mentioned as a way to connect to peers in the underwriting community and producers alike. Many carriers and agents also are using Facebook to promote their company to customers – and as a point of entry that allows them to perform some functionality, such as accessing product marketing information, instead of directing users to the carrier's own proprietary website. Other carriers have relaxed their tight brand marketing to allow their producers to drive more business to their personal Facebook sites, where they offer innovative tools such as application capture or ask for medical information in a more relaxed fashion.

    Other key topics at the conference included the economy, ongoing industry consolidation, real-estate valuations as an asset and input into the underwriting process, and producer trends. All stressed a "back to basics" approach for low-cost term products.

    Finally, Connie Merritt, RN, PHN, entertained the large group of attendees with audience-engaging insight on how to "Tame the Lions in Your Life – Dealing with Complainers, Bullies, Grump and Curmudgeon." Merritt noted "we are too busy for our own good," and shared how her overachieving personality had impacted her life. Audience members were then asked to pick red, yellow, blue or green shapes, without knowing that each one represented a specific personality trait. For example, those who picked blue were the peacemakers. Those who chose yellow were social – the hint was to "Be Quiet Longer." She then offered these "lion taming" steps:

    1. Admit It
    2. Accept It
    3. Let Go
    4. Be Present (which paralleled Gupta's IKIGAI concept)

    When thinking about underwriting, I encourage you to be present in the moment and think creatively, but don't be afraid to look ahead to the future and be an innovator. I hope to see you at next year's AHOU Annual Conference, May 1-4, 2011 at The Mirage in Las Vegas, Nev.

    Susan Keuer is the product strategy manager for new business underwriting. She brings more than 20 years of insurance industry experience working with leading insurance carriers and technology companies to her role on the product strategy team for life/annuities solutions within the Oracle Insurance Global Business Unit.

    Read the article

  • Emit Knowledge - social network for knowledge sharing

    - by hajan
    Emit Knowledge, as the words suggest, is a social network for emitting/sharing knowledge, from users, by users. Those who can benefit the most from this network are perhaps all of you who have something to share with others and want to contribute to the knowledge world. I've been closely communicating with the core team of this very, very interesting brand new social network (with a specific purpose!) about the concept, idea and vision they have for their product, and I can say with a lot of confidence that this network has real potential to become something from which we will all benefit. I won't speak much about that and would prefer to give you the link and let you try it yourself: http://www.emitknowledge.com

    Through the past few months I've been testing this network, and it is getting improved all the time. The user experience is great, you can easily find what you need, and it follows patterns that are common to all social networks. The team has some real good ideas and plans that are already under development for the next updates of their product. You can do micro-blogging or regular, normal blogging... it's up to you, and the way it works is seamless.

    Here is a short Question and Answers (QA) interview I did with the lead of the team, Marijan Nikolovski:

    1. Can you please explain to us briefly, what is Emit Knowledge?
    Emit Knowledge is a brand new knowledge-based social network, delivering quality content from users to users. We believe that people's knowledge, experience and professional thoughts compose quality content, worth sharing among millions around the world. Therefore, we created a platform that matches people's need to share and gain knowledge in the most suitable and comfortable way. Easy to work with, Emit Knowledge lets you smoothly craft and emit knowledge around the globe.

    2. How 'old' is Emit Knowledge?
    In hamster's years we are almost a five-year-old start-up :). Just kidding. We released our public beta about three months ago. Our official release date is 27 June 2012.

    3. How did you come up with this idea?
    Everything started from a simple idea to solve a complex problem. We've seen that the social web has become polluted with data and is on the right track to lose its base principles – socialization and common cause. That was our starting point. We gathered the team, drew some sketches and started to mind-map the idea. After several idea refactorings, Emit Knowledge was born.

    4. Is there any competition out there in the market?
    Currently we don't have any competitors that share the same cause. What makes our platform different is the ideology that our product promotes and the functionalities that our platform offers for easy socialization based on interests and knowledge sharing.

    5. What are the main technologies used to build Emit Knowledge?
    Emit Knowledge was built on a heterogeneous palette of technologies. Currently, we have four layers of separation:
    * UI – built on ASP.NET MVC3 and Knockout.js
    * Messaging infrastructure – built on top of RabbitMQ
    * Background services – our in-house solution for job distribution, orchestration and processing
    * Data storage – built on top of MongoDB

    6. What are the main reasons for choosing ASP.NET MVC?
    Since all of our team members are .NET engineers, the decision was very natural. ASP.NET MVC is the only Microsoft web stack that sticks to the HTTP behavioral standards. It is easy to work with, has a tiny learning curve, and everyone who is familiar with HTTP will understand its architecture and conventions without any difficulties.

    7. Did you use some of the latest Microsoft technologies? If yes, which ones?
    Yes, we like to rock the cutting-edge tech house. Currently we are using Microsoft's latest technologies like ASP.NET MVC, Web API (work in progress) and, the best for last, we are utilizing Windows Azure IaaS to the bone.

    8. Can you tell us shortly what the benefit would be for regular bloggers on other blogging platforms to join Emit Knowledge?
    Well, unless you are one of the smoking-ace gurus whose blogs are followed by a large number of users, our platform offers a knowledge-based, segregated community equipped with tools that will enable both current and future users to expand their relations and to self-promote in the community based on their activity and knowledge sharing.

    9. I see you are working very intensively, and there is already integration with some third-party services to make the process of sharing and emitting knowledge easier. Which services did you integrate so far, and what do you plan to do next?
    We have a "reemit" functionality for internal sharing, and we also support external services like Twitter, LinkedIn and Facebook. For regular bloggers we have an extra cream: Windows Live Writer support for easy emitting of blog posts.

    10. What should we expect next?
    Currently, we are working on a new fancy community feature. This means that we are going to support the forming of user groups. So for all existing communities and user groups out there, wait for us a little bit, we are coming to the rescue :).

    One of the top features they are developing next is the Community feature. It means that if you have your own user group, community group or any other group in which you and your users mostly blog or share (emit) knowledge in various ways, Emit Knowledge as a platform will help you have everything you need to promote your group, gain new followers and host all the necessary stuff you have had need of. I invite you to try the network and start sharing knowledge in a way that will help you gather new followers and spread your knowledge faster, easier and more efficiently! Let's Emit Knowledge!

    Read the article

  • UIScrollView zoomToRect not zooming to given rect (created from UITouch CGPoint)

    - by pmhart
    My application has a UIScrollView with one subview. The subview is an extended UIView which draws a PDF page into itself using layers in the drawLayer event. Zooming using the built-in pinching works great, and setZoomScale also works as expected, but I have been struggling with the zoomToRect function. I found an example online which makes a CGRect zoomRect variable from a given CGPoint. In the touchesEnded function, if there was a double tap and the view is all the way zoomed out, I want to zoom in to the PDFUIView I created, as though the user were pinching out with the center of the pinch where they double-tapped.

    So assume that I pass the UITouch variable to the function below, which utilizes zoomToRect if they double tap. I started with the following function from Apple's site: http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html

    The following is a modified version for my UIScrollView extended class:

        - (void)zoomToCenter:(float)scale withCenter:(CGPoint)center
        {
            CGRect zoomRect;

            zoomRect.size.height = self.frame.size.height / scale;
            zoomRect.size.width = self.frame.size.width / scale;
            zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
            zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);

            //return zoomRect;
            [self zoomToRect:zoomRect animated:YES];
        }

    When I do this, the UIScrollView seems to zoom using the bottom right edge of the zoomRect above, not the center. If I make a UIView like this:

        UIView *v = [[UIView alloc] initWithFrame:zoomRect];
        [v setBackgroundColor:[UIColor redColor]];
        [self addSubview:v];

    the red box shows up with the touch point dead in the center. (Please note: I am writing this from my PC. I recall messing around with the divided-by-two part on my Mac, so just assume that this draws a rect with the touch point in the center.) If the UIView drew off-center but zoomed to the right spot, it would all be good. However, what happens is that zoomToRect seems to use the bottom right of the zoomRect as the top left of the zoomed-in result.

    Also, I noticed that depending on where I click on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle, and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection. This seems too complicated; shouldn't it just zoom to the rect that was drawn in the UIView? I did a lot of research to figure out how to create a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system. But to the UIScrollView it should just be a view with 768x985 dimensions. This is sort of advanced; please assume the code for creating the zoomRect is all good. There is something deeper with the CALayer in the UIView which is in the UIScrollView...

    Read the article

  • How can I make a gallery with web images from my website?

    - by ronhdoge
    I currently have two pieces of code that I am trying to mix, or to write new code from, so I can have a gallery whose images fill the parent view but which I can slide sideways to see the other URL-linked images from my site. Here are both parts of the code I have:

        package com.mandarich.gallerygrid;

        import java.io.InputStream;
        import java.net.URL;
        import android.app.Activity;
        import android.graphics.drawable.Drawable;
        import android.os.Bundle;
        import android.widget.ImageView;

        public class Gallery extends Activity {
            /** Called when the activity is first created. */
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                ImageView imgView = (ImageView) findViewById(R.id.ImageView01);
                Drawable drawable = LoadImageFromWebOperations("http://www.mandarichmodels.com/hot-pics/");
                imgView.setImageDrawable(drawable);
            }

            private Drawable LoadImageFromWebOperations(String url) {
                try {
                    InputStream is = (InputStream) new URL(url).getContent();
                    Drawable d = Drawable.createFromStream(is, "src name");
                    return d;
                } catch (Exception e) {
                    System.out.println("Exc=" + e);
                    return null;
                }
            }
        }

    and here is my gallery code:

        package com.mandarich.Grid;

        import android.app.Activity;
        import android.content.Context;
        import android.content.res.TypedArray;
        import android.os.Bundle;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.AdapterView;
        import android.widget.BaseAdapter;
        import android.widget.Gallery;
        import android.widget.ImageView;
        import android.widget.Toast;
        import android.widget.AdapterView.OnItemClickListener;

        public class gridfinal extends Activity {
            private Gallery gallery;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                gallery = (Gallery) findViewById(R.id.examplegallery);
                gallery.setAdapter(new AddImgAdp(this));
                gallery.setOnItemClickListener(new OnItemClickListener() {
                    @SuppressWarnings("unchecked")
                    public void onItemClick(AdapterView parent, View v, int position, long id) {
                        // Displaying the position when the gallery item is clicked
                        Toast.makeText(gridfinal.this, "Position=" + position, Toast.LENGTH_SHORT).show();
                    }
                });
            }

            public class AddImgAdp extends BaseAdapter {
                int GalItemBg;
                private Context cont;

                // Adding images.
                private Integer[] Imgid = {
                    R.drawable.cindy, R.drawable.clinton, R.drawable.colin, R.drawable.cybil,
                    R.drawable.david, R.drawable.demi, R.drawable.drew
                };

                public AddImgAdp(Context c) {
                    cont = c;
                    TypedArray typArray = obtainStyledAttributes(R.styleable.GalleryTheme);
                    GalItemBg = typArray.getResourceId(R.styleable.GalleryTheme_android_galleryItemBackground, 0);
                    typArray.recycle();
                }

                public int getCount() {
                    return Imgid.length;
                }

                public Object getItem(int position) {
                    return position;
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    ImageView imgView = new ImageView(cont);
                    imgView.setImageResource(Imgid[position]);
                    // Fixing width & height for image to display
                    imgView.setLayoutParams(new Gallery.LayoutParams(430, 370));
                    imgView.setScaleType(ImageView.ScaleType.FIT_XY);
                    imgView.setBackgroundResource(GalItemBg);
                    return imgView;
                }
            }
        }
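    A hedged sketch of one way to combine the two: keep the Gallery adapter from the second snippet, but back it with an array of image URLs and reuse the first snippet's Drawable.createFromStream() loading inside getView(). The URL list is invented for the example, and doing network I/O on the UI thread like this is only for illustration:

        import java.io.InputStream;
        import java.net.URL;
        import android.content.Context;
        import android.graphics.drawable.Drawable;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.BaseAdapter;
        import android.widget.Gallery;
        import android.widget.ImageView;

        // Sketch: a Gallery adapter backed by web image URLs instead of
        // drawable resource IDs. Network-on-UI-thread is for brevity only.
        public class WebImageAdapter extends BaseAdapter {
            private final Context context;
            private final String[] urls = {
                "http://www.mandarichmodels.com/hot-pics/pic1.jpg",  // illustrative URLs
                "http://www.mandarichmodels.com/hot-pics/pic2.jpg"
            };

            public WebImageAdapter(Context c) { context = c; }

            public int getCount() { return urls.length; }
            public Object getItem(int position) { return urls[position]; }
            public long getItemId(int position) { return position; }

            public View getView(int position, View convertView, ViewGroup parent) {
                ImageView imgView = new ImageView(context);
                try {
                    // Same loading approach as the first snippet.
                    InputStream is = (InputStream) new URL(urls[position]).getContent();
                    imgView.setImageDrawable(Drawable.createFromStream(is, "src name"));
                } catch (Exception e) {
                    System.out.println("Exc=" + e);
                }
                imgView.setLayoutParams(new Gallery.LayoutParams(430, 370));
                imgView.setScaleType(ImageView.ScaleType.FIT_XY);
                return imgView;
            }
        }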

    Read the article

  • CABasicAnimation not scrolling with rest of View

    - by morgman
    All, I'm working on reproducing the pulsing blue circle effect that the map uses to show your own location... I layered a UIView over a MKMapView. The UIView contains the pulsing animation. I ended up using the following code, gleaned from numerous answers here on stackoverflow:

CABasicAnimation* pulseAnimation = [CABasicAnimation animationWithKeyPath:@"opacity"];
pulseAnimation.toValue = [NSNumber numberWithFloat: 0.0];
pulseAnimation.duration = 1.0;
pulseAnimation.repeatCount = HUGE_VALF;
pulseAnimation.autoreverses = YES;
pulseAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[self.layer addAnimation:pulseAnimation forKey:@"pulseAnimation"];

CABasicAnimation* resizeAnimation = [CABasicAnimation animationWithKeyPath:@"bounds.size"];
resizeAnimation.toValue = [NSValue valueWithCGSize:CGSizeMake(0.0f, 0.0f)];
resizeAnimation.fillMode = kCAFillModeBoth;
resizeAnimation.duration = 1.0;
resizeAnimation.repeatCount = HUGE_VALF;
resizeAnimation.autoreverses = YES;
resizeAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[self.layer addAnimation:resizeAnimation forKey:@"resizeAnimation"];

This does an acceptable job of pulsing/fading a circle I drew in the UIView using:

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 0.4);
// And draw with a red fill color
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 0.1);
// Draw them with a 2.0 stroke width so they are a bit more visible.
CGContextSetLineWidth(context, 2.0);
CGContextAddEllipseInRect(context, CGRectMake(x-30, y-30, 60.0, 60.0));
CGContextDrawPath(context, kCGPathFillStroke);

But I soon found that while this appears to work, as soon as I drag the underlying map to another position, the animation screws up... While the position I want highlighted is centered on the screen, the animation works OK; once I've dragged the position so it's no longer centered, the animation now pulses starting at the center of the screen, scaling up to center on the position dragged off center, and back again... A humorous and cool effect, but not what I was looking for. I realize I may have been lucky on several fronts. I'm not sure what I misunderstood. I think the animation is scaling the entire layer, which just happens to have my circle drawn in the middle of it. So it works when centered but not when off center. I tried the following, gleaned from one of the questions suggested by stackoverflow when I started this question:

CABasicAnimation* translateAnimation = [CABasicAnimation animationWithKeyPath:@"position"];
translateAnimation.fromValue = [NSValue valueWithCGPoint:CGPointMake(oldx, oldy)];
translateAnimation.toValue = [NSValue valueWithCGPoint:CGPointMake(x, y)];
// translateAnimation.fillMode = kCAFillModeBoth;
translateAnimation.duration = 1.0;
translateAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[self.layer addAnimation:translateAnimation forKey:@"translateAnimation"];

But it doesn't help much. When I play with it, sometimes it looks like it animates in the correct spot once after I've moved it off center, but then it switches back to animating from the center point to the new location. Sooo, any suggestions? Or do I need to provide additional information?

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance, intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case: PSARC 2010/312 Link-editor guidance

The History Of Unfortunate Link-Editor Defaults

The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, ELF (the Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

Change ld Defaults

Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

New Link-Editor

One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

An Option To Make ld Do The Right Things Automatically

This line of reasoning is best summarized by a CR filed in 2005, entitled: 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1] option=-D: option-argument: args
debug: arg[2] option=-z: option-argument: guidance=foo,nodefs
debug:
warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]
Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading.
Guidance recommends the use of the -z lazyload option, should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options, should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example

The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

Does not specify all its dependencies
Specifies dependencies it does not use
Does not use direct bindings
Uses a version 1 mapfile
Contains relocations to the readonly allocable text (not PIC)

This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void hello(void)
{
        printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object.
However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined                       first referenced
 symbol                             in file
getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains                referenced
    against symbol                  offset      in file
.rodata1 (section)                  0xa         hello.o
getpid                              0x4         hello.o
printf                              0xf         hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Handling TclErrors in Python

    - by anteater7171
    In the following code I'll get the following error if I right-click the window that pops up, then go down to the very bottom entry widget and delete its contents. It seems to be giving me a TclError. How do I go about handling such an error?

The Error

Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Python26\Lib\lib-tk\Tkinter.py", line 1410, in __call__
    return self.func(*args)
  File "C:\Python26\CPUDEMO.py", line 503, in I
    TL.sclS.set(S1)
  File "C:\Python26\Lib\lib-tk\Tkinter.py", line 2765, in set
    self.tk.call(self._w, 'set', value)
TclError: expected floating-point number but got ""

The Code

#F #PIthon.py # Import/Setup import Tkinter import psutil,time import re from PIL import Image, ImageTk from time import sleep class simpleapp_tk(Tkinter.Tk): def __init__(self,parent): Tkinter.Tk.__init__(self,parent) self.parent = parent self.initialize() def initialize(self): # Widgets self.menu = Tkinter.Menu(self, tearoff = 0 ) M = [ "Options...", "Exit"] self.selectedM = Tkinter.StringVar() self.menu.add_radiobutton( label = 'Hide', variable = self.selectedM, command = self.E ) self.menu.add_radiobutton( label = 'Bump', variable = self.selectedM, command = self.E ) self.menu.add_separator() self.menu.add_radiobutton( label = 'Options...', variable = self.selectedM, command = self.E ) self.menu.add_separator() self.menu.add_radiobutton( label = 'Exit', variable = self.selectedM, command = self.E ) self.frame1 = Tkinter.Frame(self,bg='grey15',relief='ridge',borderwidth=4,width=185, height=39) self.frame1.grid() self.frame1.grid_propagate(0) self.frame1.bind( "<Button-3><ButtonRelease-3>", self.D ) self.frame1.bind( "<Button-2><ButtonRelease-2>", self.C ) self.frame1.bind( "<Double-Button-1>", self.C ) self.labelVariable = Tkinter.StringVar() self.label = Tkinter.Label(self.frame1,textvariable=self.labelVariable,fg="lightgreen",bg="grey15",borderwidth=1,font=('arial', 10, 'bold')) self.label.grid(column=1,row=0,columnspan=1,sticky='nsew') self.label.bind( "<Button-3><ButtonRelease-3>", self.D ) self.label.bind( "<Button-2><ButtonRelease-2>", self.C ) self.label.bind( "<Double-Button-1>", self.C ) self.F() self.overrideredirect(1) self.wm_attributes("-topmost", 1) global TL1 TL1 = Tkinter.Toplevel(self) TL1.wm_geometry("+0+5000") TL1.overrideredirect(1) TL1.button = Tkinter.Button(TL1,text="? 
CPU",fg="lightgreen",bg="grey15",activeforeground="lightgreen", activebackground='grey15',borderwidth=4,font=('Arial', 8, 'bold'),command=self.J) TL1.button.pack(ipadx=1) Events def Reset(self): self.label.configure(font=('arial', 10, 'bold'),fg='Lightgreen',bg='grey15',borderwidth=0) self.labela.configure(font=('arial', 8, 'bold'),fg='Lightgreen',bg='grey15',borderwidth=0) self.frame1.configure(bg='grey15',relief='ridge',borderwidth=4,width=224, height=50) self.label.pack(ipadx=38) def helpmenu(self): t2 = Tkinter.Toplevel(self) Tkinter.Label(t2, text='This is a help menu', anchor="w",justify="left",fg="darkgreen",bg="grey90",relief="ridge",borderwidth=5,font=('Arial', 10)).pack(fill='both', expand=1) t2.resizable(False,False) t2.title('Help') menu = Tkinter.Menu(self) t2.config(menu=menu) filemenu = Tkinter.Menu(menu) menu.add_cascade(label="| Exit |", menu=filemenu) filemenu.add_command(label="Exit", command=t2.destroy) def aboutmenu(self): t1 = Tkinter.Toplevel(self) Tkinter.Label(t1, text=' About:\n\n CPU Usage v1.0\n\n Publisher: Drew French\n Date: 05/09/10\n Email: [email protected] \n\n\n\n\n\n\n Written in Python 2.6.4', anchor="w",justify="left",fg="darkgreen",bg="grey90",relief="sunken",borderwidth=5,font=('Arial', 10)).pack(fill='both', expand=1) t1.resizable(False,False) t1.title('About') menu = Tkinter.Menu(self) t1.config(menu=menu) filemenu = Tkinter.Menu(menu) menu.add_cascade(label="| Exit |", menu=filemenu) filemenu.add_command(label="Exit", command=t1.destroy) def A (self,event): TL.entryVariable1.set(TL.sclY.get()) TL.entryVariable2.set(TL.sclX.get()) Y = TL.sclY.get() X = TL.sclX.get() self.wm_geometry("+" + str(X) + "+" + str(Y)) def B(self,event): Y1 = TL.entryVariable1.get() X1 = TL.entryVariable2.get() self.wm_geometry("+" + str(X1) + "+" + str(Y1)) TL.sclY.set(Y1) TL.sclX.set(X1) def C(self,event): s = self.wm_geometry() geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") m = geomPatt.search(s) X3 = m.group(4) Y3 = m.group(6) M = int(Y3) - 150 P = M + 150 while Y3 > M: sleep(0.0009) Y3 = int(Y3) - 1 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(Y3)) sleep(2.00) while Y3 < P: sleep(0.0009) Y3 = int(Y3) + 1 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(Y3)) def D(self, event=None): self.menu.post( event.x_root, event.y_root ) def E(self): if self.selectedM.get() =='Options...': Setup global TL TL = Tkinter.Toplevel(self) menu = Tkinter.Menu(TL) TL.config(menu=menu) filemenu = Tkinter.Menu(menu) menu.add_cascade(label="| Menu |", menu=filemenu) filemenu.add_command(label="Instruction Manual...", command=self.helpmenu) filemenu.add_command(label="About...", command=self.aboutmenu) filemenu.add_separator() filemenu.add_command(label="Exit Options", command=TL.destroy) filemenu.add_command(label="Exit", command=self.destroy) helpmenu = Tkinter.Menu(menu) menu.add_cascade(label="| Help |", menu=helpmenu) helpmenu.add_command(label="Instruction Manual...", command=self.helpmenu) helpmenu.add_separator() helpmenu.add_command(label="Quick Help...", command=self.helpmenu) Title TL.label5 = Tkinter.Label(TL,text="CPU Usage: Options",anchor="center",fg="black",bg="lightgreen",relief="ridge",borderwidth=5,font=('Arial', 18, 'bold')) TL.label5.pack(padx=15,ipadx=5) X Y scale TL.separator = Tkinter.Frame(TL,height=7, bd=1, relief='ridge', bg='grey95') TL.separator.pack(pady=5,padx=5) # TL.sclX = Tkinter.Scale(TL.separator, from_=0, to=1500, orient='horizontal', resolution=1, command=self.A) 
TL.sclX.grid(column=1,row=0,ipadx=27, sticky='w') TL.label1 = Tkinter.Label(TL.separator,text="X",anchor="s",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label1.grid(column=0,row=0, pady=1, sticky='S') TL.sclY = Tkinter.Scale(TL.separator, from_=0, to=1500, resolution=1, command=self.A) TL.sclY.grid(column=2,row=1,rowspan=2,sticky='e', padx=4) TL.label3 = Tkinter.Label(TL.separator,text="Y",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label3.grid(column=2,row=0, padx=10, sticky='e') TL.entryVariable2 = Tkinter.StringVar() TL.entry2 = Tkinter.Entry(TL.separator,textvariable=TL.entryVariable2, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entry2.grid(column=1,row=1,ipadx=20, pady=10,sticky='EW') TL.entry2.bind("<Return>", self.B) TL.label2 = Tkinter.Label(TL.separator,text="X:",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label2.grid(column=0,row=1, ipadx=4, sticky='W') TL.entryVariable1 = Tkinter.StringVar() TL.entry1 = Tkinter.Entry(TL.separator,textvariable=TL.entryVariable1, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entry1.grid(column=1,row=2,sticky='EW') TL.entry1.bind("<Return>", self.B) TL.label4 = Tkinter.Label(TL.separator,text="Y:", anchor="center",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label4.grid(column=0,row=2, ipadx=4, sticky='W') TL.label7 = Tkinter.Label(TL.separator,text="Text Colour:",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label7.grid(column=1,row=3,stick="W",ipady=10) TL.selectedP = Tkinter.StringVar() TL.opt1 = Tkinter.OptionMenu(TL.separator, TL.selectedP,'Normal', 'White','Black', 'Blue', 'Steel Blue','Green','Light Green','Yellow','Orange' ,'Red',command=self.G) TL.opt1.config(fg="black",bg="grey90",activebackground="grey90",activeforeground="black", anchor="center",relief="raised",direction='right',font=('Arial', 10)) TL.opt1.grid(column=1,row=4,sticky='EW',padx=20,ipadx=20) TL.selectedP.set('Normal') TL.label7 = Tkinter.Label(TL.separator,text="Refresh Rate:",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label7.grid(column=1,row=5,stick="W",ipady=10) TL.sclS = Tkinter.Scale(TL.separator, from_=10, to=2000, orient='horizontal', resolution=10, command=self.H) TL.sclS.grid(column=1,row=6,ipadx=27, sticky='w') TL.sclS.set(650) TL.entryVariableS = Tkinter.StringVar() TL.entryS = Tkinter.Entry(TL.separator,textvariable=TL.entryVariableS, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entryS.grid(column=1,row=7,ipadx=20, pady=10,sticky='EW') TL.entryS.bind("<Return>", self.I) TL.entryVariableS.set(650) # TL.resizable(False,False) TL.title('Options') geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") s = self.wm_geometry() m = geomPatt.search(s) X = m.group(4) Y = m.group(6) TL.sclY.set(Y) TL.sclX.set(X) if self.selectedM.get() == 'Exit': self.destroy() if self.selectedM.get() == 'Bump': s = self.wm_geometry() geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") m = geomPatt.search(s) X3 = m.group(4) Y3 = m.group(6) M = int(Y3) - 150 P = M + 150 while Y3 > M: sleep(0.0009) Y3 = int(Y3) - 1 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(Y3)) sleep(2.00) while Y3 < P: sleep(0.0009) Y3 = int(Y3) + 1 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(Y3)) if self.selectedM.get() == 'Hide': s = self.wm_geometry() geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") m = geomPatt.search(s) 
X3 = m.group(4) Y3 = m.group(6) M = int(Y3) + 5000 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(M)) TL1.wm_geometry("+0+190") def F (self): G = round(psutil.cpu_percent(), 1) G1 = str(G) + '%' self.labelVariable.set(G1) try: S2 = TL.entryVariableS.get() except ValueError, e: S2 = 650 except NameError: S2 = 650 self.after(int(S2), self.F) def G (self,event): if TL.selectedP.get() =='Normal': self.label.config( fg = 'lightgreen' ) TL1.button.config( fg = 'lightgreen',activeforeground='lightgreen') if TL.selectedP.get() =='Red': self.label.config( fg = 'red' ) TL1.button.config( fg = 'red',activeforeground='red') if TL.selectedP.get() =='Orange': self.label.config( fg = 'orange') TL1.button.config( fg = 'orange',activeforeground='orange') if TL.selectedP.get() =='Yellow': self.label.config( fg = 'yellow') TL1.button.config( fg = 'yellow',activeforeground='yellow') if TL.selectedP.get() =='Light Green': self.label.config( fg = 'lightgreen' ) TL1.button.config( fg = 'lightgreen',activeforeground='lightgreen') if TL.selectedP.get() =='Normal': self.label.config( fg = 'lightgreen' ) TL1.button.config( fg = 'lightgreen',activeforeground='lightgreen') if TL.selectedP.get() =='Steel Blue': self.label.config( fg = 'steelblue1' ) TL1.button.config( fg = 'steelblue1',activeforeground='steelblue1') if TL.selectedP.get() =='Blue': self.label.config( fg = 'blue') TL1.button.config( fg = 'blue',activeforeground='blue') if TL.selectedP.get() =='Green': self.label.config( fg = 'darkgreen' ) TL1.button.config( fg = 'darkgreen',activeforeground='darkgreen') if TL.selectedP.get() =='White': self.label.config( fg = 'white' ) TL1.button.config( fg = 'white',activeforeground='white') if TL.selectedP.get() =='Black': self.label.config( fg = 'black') TL1.button.config( fg = 'black',activeforeground='black') def H (self,event): TL.entryVariableS.set(TL.sclS.get()) S = TL.sclS.get() def I (self,event): S1 = TL.entryVariableS.get() TL.sclS.set(S1) TL.sclS.set(TL.sclS.get()) S1 = TL.entryVariableS.get() TL.sclS.set(S1) def J (self): s = self.wm_geometry() geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") m = geomPatt.search(s) X3 = m.group(4) Y3 = m.group(6) M = int(Y3) - 5000 self.update_idletasks() self.wm_geometry("+" + str(X3) + "+" + str(M)) TL1.wm_geometry("+0+5000") # Loop if __name__ == "__main__": app = simpleapp_tk(None) app.mainloop()

    Read the article

  • Using glDrawElements does not draw my .obj file

    - by Hallik
    I am trying to correctly import an .OBJ file from 3ds Max. I got this working using glBegin() & glEnd() from a previous question on here, but obviously had really poor performance, so I am trying to use glDrawElements now. I am importing a chessboard, its game pieces, etc. The board, each game piece, and each square on the board are stored in a struct GroupObject. The way I store the data is like this: struct Vertex { float position[3]; float texCoord[2]; float normal[3]; float tangent[4]; float bitangent[3]; }; struct Material { float ambient[4]; float diffuse[4]; float specular[4]; float shininess; // [0 = min shininess, 1 = max shininess] float alpha; // [0 = fully transparent, 1 = fully opaque] std::string name; std::string colorMapFilename; std::string bumpMapFilename; std::vector<int> indices; int id; }; //A chess piece or square struct GroupObject { std::vector<Material *> materials; std::string objectName; std::string groupName; int index; }; All faces are triangles, so there are always 3 points. When I am looping through the faces ('f' section) in the obj file, I store v0, v1, & v2 in Material->indices. (I am doing v[0-2] - 1 to account for obj files being 1-based and my vectors being 0-based.) So when I get to the render method, I am trying to loop through every object, which loops through every material attached to that object. I set the material information and try to use glDrawElements. However, the screen is black. I was able to draw the model just fine when I looped through each distinct material with all the indices associated with that material. This time around, so I can use the stencil buffer for selecting GroupObjects, I changed up the loop, but the screen is black. Here is my render loop. The only thing I changed was the for loop(s), so they go through each object, and each material in the object, in turn. void GLEngine::drawModel() { glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Vertex arrays setup glEnableClientState( GL_VERTEX_ARRAY ); glVertexPointer(3, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->position); glEnableClientState( GL_NORMAL_ARRAY ); glNormalPointer(GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->normal); glClientActiveTexture( GL_TEXTURE0 ); glEnableClientState( GL_TEXTURE_COORD_ARRAY ); glTexCoordPointer(2, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->texCoord); glUseProgram(blinnPhongShader); objects = model.getObjects(); // Loop through objects... for( int i=0 ; i < objects.size(); i++ ) { ModelOBJ::GroupObject *object = objects[i]; // Loop through materials used by object... 
for( int j=0 ; j<object->materials.size() ; j++ ) { ModelOBJ::Material *pMaterial = object->materials[j]; glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, pMaterial->ambient); glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, pMaterial->diffuse); glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, pMaterial->specular); glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, pMaterial->shininess * 128.0f); // Draw faces, letting OpenGL loop through them glDrawElements( GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices ); } } if (model.hasNormals()) glDisableClientState(GL_NORMAL_ARRAY); if (model.hasTextureCoords()) { glClientActiveTexture(GL_TEXTURE0); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } if (model.hasPositions()) glDisableClientState(GL_VERTEX_ARRAY); glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); glDisable(GL_BLEND); } I don't know what I am missing that's important. If it's also helpful, here is where I read an 'f' face line and store the info in the obj importer in pMaterial->indices. else if (sscanf(buffer, "%d/%d/%d", &v[0], &vt[0], &vn[0]) == 3) // v/vt/vn { fscanf(pFile, "%d/%d/%d", &v[1], &vt[1], &vn[1]); fscanf(pFile, "%d/%d/%d", &v[2], &vt[2], &vn[2]); v[0] = (v[0] < 0) ? v[0] + numVertices - 1 : v[0] - 1; v[1] = (v[1] < 0) ? v[1] + numVertices - 1 : v[1] - 1; v[2] = (v[2] < 0) ? v[2] + numVertices - 1 : v[2] - 1; currentMaterial->indices.push_back(v[0]); currentMaterial->indices.push_back(v[1]); currentMaterial->indices.push_back(v[2]); Again, this worked when drawing it all together, only separated by materials, so I haven't changed code anywhere else except adding the indices to the materials within objects, and the loop in the draw method. Before, everything was showing up black; now, with the setup as above, I am getting an unhandled-exception write violation on the glDrawElements line. I set a breakpoint there, and there are over 600 elements in the pMaterial->indices array, so it's not empty; it has indices to use. When I set up glDrawElements like this, it gives me the black screen but no errors: glDrawElements( GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices[0] ); I have also tried adding this when I loop through the faces on import: if ( currentMaterial->startIndex == -1 ) currentMaterial->startIndex = v[0]; currentMaterial->triangleCount++; And when drawing... //in draw method glDrawElements( GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT, model.getIndexBuffer() + pMaterial->startIndex );

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

What is WCF

As Juval Löwy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows, and a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing, including service hosting, instance management, reliability, transaction management, security, etc., such that it greatly increases productivity.

Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount, and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build the contracts, services, data access layer, and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, while clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding-independent, i.e. they can run against the TCP or HTTP protocols without any significant changes to the code.

Solution

Services
  Profile: for creating a new bank customer and getting details about an existing customer
    ProfileContract
    ProfileService
  Checking Account: to get the checking account balance, deposit or withdraw an amount
    CheckingAccountContract
    CheckingAccountService
  Savings Account: to get the savings account balance, deposit or withdraw an amount
    SavingsAccountContract
    SavingsAccountService
ServiceHost: to host the services, i.e. run the services at a particular address, binding and contract where clients can connect
Client: helps the end user use the services, like creating an account and transferring amounts between the accounts
BankDAL: data access layer to work with the database

BankDAL

It's a no-brainer to use an ORM, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro, etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code-first approach. There are two approaches when working with data: data driven and code driven. In the data-driven approach we start by designing tables and their constraints in the database and generate entities in code, while in the code-driven (code-first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test-driven development using mock frameworks. The application consists of 3 entities: the Customer entity, which contains the customer details, and CheckingAccount and SavingsAccount, which hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a one-to-one mapping between entities and tables. Let's start out by defining a class called Customer, which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constraints can be defined through attributes on the Customer class or using the fluent syntax (no need to wrestle with XML files) in a CustomerConfiguration class. By defining the constraints in a separate class we can maintain clean POCOs without corrupting the entity classes with database-specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention, EF looks for a connection string with the key BankDbContext when working with the context.   We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities, etc.; these are known as repository classes, i.e. CustomerRepository:   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code you can observe that the Query method returns the customers as IQueryable, i.e. the customers are retrieved only when actually used, that is, iterated. Returning IQueryable also allows the business logic to execute filtering and joining statements using lambda expressions without cluttering the data access layer with tens of methods.   
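To make the deferred execution concrete, here is a small usage sketch. This is my own illustrative code, not part of the article's solution; the CustomerQueries class, the PrintNewJerseyCustomers method, and the filter values are assumptions. The Where and OrderBy clauses are composed on the IQueryable returned by Query(), and the SQL executes only when the results are iterated.

using System;
using System.Linq;
using BankDAL.Model;
using BankDAL.Repositories;

namespace BankDAL.Examples
{
    // Hypothetical caller, shown only to illustrate deferred execution
    // against CustomerRepository.Query().
    public static class CustomerQueries
    {
        public static void PrintNewJerseyCustomers()
        {
            using (var bankDbContext = new BankDbContext())
            {
                var repository = new CustomerRepository(bankDbContext);

                // No database round trip happens here; the query is only composed.
                var njCustomers = repository.Query()
                    .Where(c => c.Address == "NJ")
                    .OrderBy(c => c.FullName);

                // The SQL executes when the sequence is enumerated.
                foreach (var customer in njCustomers)
                    Console.WriteLine("{0}: {1}", customer.Id, customer.FullName);
            }
        }
    }
}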
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other: using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler), we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with WCF Unity lifetime managers by Drew (a minimal sketch of the generic repository idea follows just before the service implementations below).

Contracts

It is very easy to follow a contract-first approach with WCF: define the interface and apply the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   The ICheckingAccount contract exposes functionality for working with the checking account, i.e. getting the balance and depositing and withdrawing amounts. The ISavingsAccount contract looks the same as the checking account one.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   

Services

Having covered the data access layer and contracts so far, here comes the core of the business logic, i.e. the services.   
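As promised above, here is a minimal sketch of the generic repository refactoring before we dive into the individual services. The Repository<T> class below is my own assumption of how that refactoring might look; it is not code from this article or from Jarod's. Query and Add move into the generic base, so the per-entity repositories would only keep their entity-specific methods such as GetAccount.

using System;
using System.Data.Entity;
using System.Linq;

namespace BankDAL.Repositories
{
    // Hypothetical generic base repository: one implementation of
    // Query and Add serves every entity type.
    public class Repository<T> where T : class
    {
        private readonly IDbSet<T> _entities;

        public Repository(IDbSet<T> entities)
        {
            if (entities == null) throw new ArgumentNullException("entities");
            _entities = entities;
        }

        public IQueryable<T> Query()
        {
            return _entities;
        }

        public void Add(T entity)
        {
            _entities.Add(entity);
        }
    }
}

With this in place, CheckingAccountRepository could derive from Repository<CheckingAccount>, passing bankDbContext.CheckingAccounts to the base constructor and adding only its GetAccount query.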
ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository. 
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you will observe that we are calling bankDbContext's SaveChanges method, and that there is no save method specific to the Customer entity, because EF manages all the changes centrally at the context level; all the pending changes so far are submitted in a batch, which is represented as a Unit of Work. Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we are throwing an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   

BankServiceHost

The host acts as glue, binding contracts to their services and exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred, as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each service you need to define a name (the class that implements the service, with fully qualified namespace) and an endpoint, known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined a base address for each of the contracts: <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> We have to open the services by creating service hosts, which will handle the incoming requests from clients.   using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } 

BankClient

The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project references, choosing 'Add Service Reference', and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract DLLs and update the client's configuration file to point to the service endpoints, if the server and client happen to be built using the .NET Framework. One of the pros of the manual approach is that you don't have to work with messy generated code files.   
BankClient

The client knows nothing about the service's business logic beyond the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and service proxy can be generated by right-clicking the project's references, choosing 'Add Service Reference' and entering the service endpoint address. Alternatively, if you have access to the source and both server and client are built on the .NET Framework, you can manually reference the contract dlls and update the client's configuration file to point at the service endpoints. One advantage of the manual approach is that you don't have to work with messy generated code files.

<system.serviceModel>
  <client>
    <endpoint name="tcpProfile"
              address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding"
              contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount"
              address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding"
              contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount"
              address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding"
              contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a façade to connect to the services:

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}
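One caveat with this façade: the channels it returns are never explicitly closed, and WCF channels hold connection resources until they are. A minimal sketch of deterministic cleanup on the caller's side, assuming the façade above (the Demo method is illustrative):

using System.ServiceModel;
using ProfileContract;

namespace Client
{
    public static class ChannelCleanupSketch
    {
        public static void Demo()
        {
            IProfile proxy = ProxyFacade.ProfileProxy();
            // Proxies created by ChannelFactory<T> also implement IClientChannel.
            var channel = (IClientChannel)proxy;
            try
            {
                proxy.GetCustomer(1);
                channel.Close();   // graceful shutdown
            }
            catch
            {
                channel.Abort();   // a faulted channel cannot be closed gracefully
                throw;
            }
        }
    }
}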
With that in place, let's get our unit tests going:

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management, and how creating a new channel instance affects it, in subsequent articles. The first two test cases cover creating a customer and depositing, withdrawing and transferring money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, which raises an overdraft exception on savings even though the $30 has already been deposited to checking. Because the two requests do not run inside a transaction, the customer ends up with more money ($110 in checking plus $20 in savings) than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution, then run the test cases from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Also make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
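As a preview of the transaction handling promised for later posts, here is a minimal sketch of wrapping a transfer in System.Transactions.TransactionScope so the deposit rolls back when the withdrawal overdrafts. This is an assumption-laden sketch: for the transaction to actually flow to the services, the netTcpBinding endpoints and the contract operations would additionally need transaction flow enabled, which the configuration shown in this article does not do:

using System.Transactions;

namespace Client
{
    public static class TransactionalTransferSketch
    {
        // Hypothetical atomic version of the test fixture's transfer helper.
        public static void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            using (var scope = new TransactionScope())
            {
                ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
                // If this withdrawal overdrafts and throws, scope.Complete() is never
                // reached and the deposit above rolls back with the ambient transaction.
                ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
                scope.Complete();
            }
        }
    }
}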
