Search Results

Search found 5564 results on 223 pages for 'rollover effect'.


  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to off-site incremental archives from a snapshot "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled as to the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written in the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the hard disk itself likewise don't need to do any actual "disk writes", as the "disk block" is unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
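    One way to side-step the question entirely is to make the archive copy itself compare-before-write: if a block is only written when its contents actually differ, the snapshot layer never sees a dirty block, so no copy-on-write allocation can be triggered. A minimal sketch of that idea in Python follows; the 4 KiB block size and the file paths are illustrative assumptions, not anything LVM-specific.

        import os

        BLOCK = 4096  # illustrative block size; align with the underlying chunk size if known

        def copy_changed_blocks(src_path, dst_path):
            """Copy src over dst, writing only the blocks whose contents actually differ."""
            # dst must already exist ("r+b"); create it first on the initial copy
            with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
                offset = 0
                while True:
                    new = src.read(BLOCK)
                    if not new:
                        break
                    dst.seek(offset)
                    old = dst.read(len(new))
                    if old != new:           # only dirty the block when contents changed
                        dst.seek(offset)
                        dst.write(new)
                    offset += len(new)
            # not handled here: truncating dst if src shrank, preserving metadata, sparse files

        # usage sketch (paths are hypothetical):
        # copy_changed_blocks("/master/data.db", "/archive/data.db")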

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already:
    - My investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory.
    - The Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge.
    I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite Procexp reporting Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: "Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of 'Private Data' is more granular than that of Process Explorer's 'private bytes.' Procexp's 'private bytes' includes all private committed memory belonging to the process." I.e., VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes".

    Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes:
    - File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files).
    - Pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).
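    For what it's worth, the per-process summation done above with pslist -m can be reproduced with Python and psutil. This is just a sketch of the measurement, not an explanation of the discrepancy; the 'private' field of memory_info() is Windows-specific, so it is read defensively here.

        import psutil

        total_private = 0
        for proc in psutil.process_iter():
            try:
                mem = proc.memory_info()
                # 'private' is exposed on Windows; fall back to 0 elsewhere
                total_private += getattr(mem, "private", 0)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue  # processes can exit or deny access mid-iteration

        print("Sum of Private Bytes: %.1f MB" % (total_private / (1024.0 * 1024.0)))

    Comparing that sum against the commit charge reported by Task Manager makes the gap described above easy to track over time.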

    Read the article

  • How do I add a fade-in effect when I click expand on this script?

    - by kohei
    Hi, I found this content expand/collapse jQuery plugin. I want to add a fade-in effect to this plugin when I click on the EXPAND button. How do I do this?

        $(document).ready(function () {
            var maxlines = 15;
            var lineheight = 15; // line height in 'px'
            var maxheight = (maxlines * lineheight);
            var allowedExtraLines = 3;
            var showText = "EXPAND";
            var hideText = "CLOSE";
            $('.ranking').each(function () {
                var text = $(this);
                if (text.height() > maxheight + allowedExtraLines * lineheight) {
                    text.css({ 'overflow': 'hidden', 'line-height': lineheight + 'px', 'height': maxheight + 'px' });
                    var link = $('<a href="#">' + showText + '</a>');
                    link.click(function (event) {
                        event.preventDefault();
                        if (text.css('height') == 'auto') {
                            $(this).html(showText);
                            text.css('height', maxheight + 'px');
                        } else {
                            //$(this).remove();
                            $(this).html(hideText);
                            text.css('height', 'auto');
                        }
                    });
                    var linkDiv = $('<div></div>');
                    linkDiv.append(link);
                    $(this).after(linkDiv);
                }
            });
        });

    Read the article

  • Why does the '#weight' property sometimes not have any effect in Drupal forms?

    - by Adrian
    Hello, I'm trying to create a node form for a custom type. I have Organic Groups and Taxonomy both enabled, but want their elements to come out in a non-standard order. So I've implemented hook_form_alter and set the #weight property of the og_nodeapi subarray to -1000, but it still goes after taxonomy and menu. I even tried changing the subarray to a fieldset (to force it to actually be rendered), but no dice. I also tried setting $form['taxonomy']['#weight'] = 1000 (I have two vocabs so it's already being rendered as a fieldset) but that didn't work either. I set the weight of my module very high and confirmed in the system table that it is indeed the highest module on the site - so I'm all out of ideas. Any suggestions?

    Update: While I'm not exactly sure how, I did manage to get the taxonomy fieldset to sink below everything else, but now I have a related problem that's hopefully more manageable to understand. Within the taxonomy fieldset, I have two items (a tags field and a multi-select), and I wanted to add some instructions in hook_form_alter as follows:

        $form['taxonomy']['instructions'] = array(
          '#value' => "These are the instructions",
          '#weight' => -1,
        );

    You guessed it, this appears after the terms inserted by the taxonomy module. However, if I change this to a fieldset:

        $form['taxonomy']['instructions'] = array(
          '#type' => 'fieldset',        // <-- here
          '#title' => 'Instructions',   // <-- and here for good measure
          '#value' => "These are the instructions",
          '#weight' => -1,
        );

    then it magically floats to the top as I'd intended. I also tried textarea (this also worked) and explicitly saying markup (this did not). So basically, changing the type from "markup" (the default IIRC) to "fieldset" has the effect of no longer ignoring its weight.

    Read the article

  • Is it a bad idea to have the same object produce a different side effect after a method call?

    - by Nathan W
    Hi all, I'm having a bit of a design issue (again). Say I have this ButtonPad object: this object is a wrapper over one in a COM object. At the moment it has a method on it called CreateInto(IComObject). To make a new button pad in the COM object, you do:

        ButtonPad pad = new ButtonPad();
        pad.Title = "Hello";
        // Set some more properties.
        pad.CreateInto(Cominstance);

    The CreateInto method will execute the right commands to build the button pad in the COM object. After it has been created, any calls against it are forwarded to the underlying object, so:

        pad.Title = "New title";

    will call the COM object to set the title and also set the internal title variable. Basically, any calls before the CreateInto method only affect the .NET object; anything after has the side effect of also calling the COM object. I'm not very good at sequence diagrams, but here is my attempt to explain what's going on: This doesn't feel good to me; it feels like I'm lying to the user about what the button pad does. I was going to have an object called WrappedButtonPad, which is returned from CreateInto, and the user could make calls against that to make changes to the COM object, but I feel that having two objects that almost do the same thing but only differ by name might be even worse. Are these valid designs, or am I right to be worried? How else would you handle an object that can create and query a COM object?

    Read the article

  • Why does changing the alpha value of a UIImageView have no effect?

    - by chancy
    I am working on a reader app. When you read one magazine, it will display the first five pages and download the rest of the pages one by one. There is a scrollview to view the thumbnail images of the pages. At the beginning, if a page needs downloading, the corresponding thumbnail view's alpha value is set to 0.5. When the page is downloaded, I update the thumbnail view's alpha to 1.0. I use one operation to download the pages, and when one is downloaded I use a delegate to set the thumbnail view's alpha. But when I update the thumbnail view's alpha value, it stays the same as at the beginning; it seems the alpha change has no effect. I wonder, is there anything wrong with my code? Some snippets are as follows.

    In PageViewController.m:

        - (void)downloadPages {
            DownloadOperation *op = [[DownloadOperation alloc] initWithMagazine:magazine];
            op.delegate = self;
            [[(AppDelegate *)[[UIApplication sharedApplication] delegate] sharedOperationQueue] addOperation:op];
        }

        - (void)downloadOperation:(DownloadOperation *)operation finishedAtIndex:(NSUInteger)index {
            if (thumbScrollView) {
                [thumbScrollView viewWithTag:THUMBVIEW_OFFSET+index].alpha = 1.0;
            }
        }

    In DownloadOperation.m:

        - (void)main {
            // ...
            NSUInteger index = 0;
            for (Page *page in mMagazine.pages) {
                if (/*page doesn't exist*/) {
                    // download the page;
                    if ([delegate respondsToSelector:@selector(downloadOperation:finishedAtIndex:)]) {
                        [delegate downloadOperation:self finishedAtIndex:index];
                    }
                }
                index++;
            }
        }

    Read the article

  • What is this effect called? 'Grow/shrink'? 'Fly-out/fly-in'? ...

    - by Majid
    Hi all, I have seen links that open modal windows AND have a nice animation effect that creates the illusion that the window grows out of the link clicked. On closing the window, a similar animation shows the window shrinking and disappearing into the link which originally opened it. I remember I saw it on some jQuery page but don't remember where, and I don't know what this effect is called. Have you seen this? Examples?

    Read the article

  • Seemingly simple skinning problem in Flex4; style gives disco effect

    - by Cheradenine
    I'm doing something wrong, but I can't figure out what. Simple project in flex4, whereby I create a skinned combobox (fragments at end). If I turn on the 3 skin references (over-skin, up-skin, down-skin), the combobox appears to simply stop working. If I remove the up-skin, hovering over the combo produces a flickering effect, where it appears to apply the style, then remove it immediately. I get the same thing with a button instead of a combo. I'm sure it's something really simple, but it's evading me.

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       minWidth="955" minHeight="600"
                       xmlns:containers="flexlib.containers.*">
            <fx:Declarations>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
            <fx:Style>
                @namespace s "library://ns.adobe.com/flex/spark";
                @namespace mx "library://ns.adobe.com/flex/mx";
                #myCombo {
                    over-skin: ClassReference("nmx.MyComboSkin");
                    up-skin: ClassReference("nmx.MyComboSkin");
                    down-skin: ClassReference("nmx.MyComboSkin");
                }
            </fx:Style>
            <fx:Script>
                <![CDATA[
                    [Bindable]
                    public var items:Array = ["A","B","C"];
                ]]>
            </fx:Script>
            <mx:Canvas backgroundColor="#ff0000" width="726" height="165" x="20" y="41">
                <mx:ComboBox id="myCombo" x="10" y="10" prompt="Hospital" dataProvider="{items}">
                </mx:ComboBox>
            </mx:Canvas>
        </s:Application>

    Skin Definition:

        package nmx {
            import flash.display.GradientType;
            import flash.display.Graphics;
            import mx.skins.Border;
            import mx.skins.ProgrammaticSkin;
            import mx.skins.halo.ComboBoxArrowSkin;
            import mx.skins.halo.HaloColors;
            import mx.utils.ColorUtil;

            public class MyComboSkin extends ProgrammaticSkin {
                public function MyComboSkin() {
                    super();
                }

                override protected function updateDisplayList(w:Number, h:Number):void {
                    trace(name);
                    super.updateDisplayList(w, h);
                    var arrowColor:int = 0xffffff;
                    var g:Graphics = graphics;
                    g.clear();
                    // Draw the border and fill.
                    switch (name) {
                        case "upSkin":
                        case "editableUpSkin":
                        {
                            g.moveTo(0,0);
                            g.lineStyle(1,arrowColor);
                            g.lineTo(w-1,0);
                            g.lineTo(w-1,h-1);
                            g.lineTo(0,h-1);
                            g.lineTo(0,0);
                        }
                        break;
                        case "overSkin":
                        case "editableOverSkin":
                        case "downSkin":
                        case "editableDownSkin":
                        {
                            // border
                            /*drawRoundRect(
                                0, 0, w, h, cr,
                                [ themeColor, themeColor ], 1); */
                            g.moveTo(0,0);
                            g.lineStyle(1,arrowColor);
                            g.lineTo(w-1,0);
                            g.lineTo(w-1,h-1);
                            g.lineTo(0,h-1);
                            g.lineTo(0,0);
                            // Draw the triangle.
                            g.beginFill(arrowColor);
                            g.moveTo(w - 11.5, h / 2 + 3);
                            g.lineTo(w - 15, h / 2 - 2);
                            g.lineTo(w - 8, h / 2 - 2);
                            g.lineTo(w - 11.5, h / 2 + 3);
                            g.endFill();
                        }
                        break;
                        case "disabledSkin":
                        case "editableDisabledSkin":
                        {
                            break;
                        }
                    }
                }
            }
        }

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well built keyboard. If you’re tired of mushy keys and cheap feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

    What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those that didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

    Setting Up the CODE Keyboard Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they’re huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
    After you’ve secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

    Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to the spacing and position, is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches the keys are attached to are mounted to a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use.

    Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design of the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke and provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome switch membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages, to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have some sort of rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and CTRL, ALT, and SHIFT don’t count towards the 6), so realistically you still won’t be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

    The Good, The Bad, and the Verdict We’ve put the CODE keyboard through the paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

    The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle-ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra thick rubber feet keeps it planted exactly where you place it on the desk.

    The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

    The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
    Whether you’re a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money, as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • How to group EC2 instances in order to delegate administration to different teams?

    - by Olivier
    Is it possible (using ARNs) to make several groups of instances, and then use different policies to grant access to one group of instances only and not the other instances? For example:

        {
          "Statement": [
            { "Action": "ec2:*", "Effect": "Allow", "Resource": "*" },
            { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
            { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" },
            { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" }
          ]
        }

    Instead of "*", could we use a group or something like that? A specific subnet? A tag? Or whatever... Thanks for your help
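    As an illustration of the tag-based direction hinted at above, EC2 resource-level permissions let a statement's Resource be an instance ARN and its Condition match an instance tag via the ec2:ResourceTag condition key (this capability arrived after the question was originally asked, and not every ec2:* action supports it). The account ID, region, tag key, and team value below are placeholders; building the document in Python just makes it easy to template per team.

        import json

        def team_policy(team, account_id="123456789012", region="eu-west-1"):
            """Sketch of an IAM policy allowing start/stop/reboot only on instances tagged Team=<team>."""
            return json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
                    "Resource": "arn:aws:ec2:%s:%s:instance/*" % (region, account_id),
                    "Condition": {"StringEquals": {"ec2:ResourceTag/Team": team}}
                }]
            }, indent=2)

        print(team_policy("blue"))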

    Read the article

  • 2 Shaders using the same vertex data

    - by Fonix
    So I'm having problems rendering using 2 different shaders. I'm currently rendering shapes that represent dice. What I want is: if the dice is selected by the user, it draws an outline by drawing the dice completely red and slightly scaled up, then renders the proper dice over it. At the moment some of the dice, for some reason, render the wrong dice for the outline, but the right one for the proper foreground dice. I'm wondering if they aren't getting their vertex data mixed up somehow. I'm not sure if doing something like this is even allowed in OpenGL:

        glGenBuffers(1, &_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
        glBufferData(GL_ARRAY_BUFFER, numVertices*sizeof(GLfloat), vertices, GL_STATIC_DRAW);

        glEnableVertexAttribArray(effect->vertCoord);
        glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);

        glEnableVertexAttribArray(effect->toon_vertCoord);
        glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);

    I'm trying to bind the vertex data to 2 different shaders here. When I load my first shader I have:

        vertCoord = glGetAttribLocation(TexAndLighting, "position");

    and the other shader has:

        toon_vertCoord = glGetAttribLocation(Toon, "position");

    If I use the shaders independently of each other they work fine, but when I try to render both, one on top of the other, they get the model mixed up some of the time. Here is how my draw function looks:

        - (void) draw {
            [EAGLContext setCurrentContext:context];
            glBindVertexArrayOES(_vertexArray);
            effect->modelViewMatrix = mvm;
            effect->numberColour = GLKVector4Make(numbers[colorSelected].r, numbers[colorSelected].g, numbers[colorSelected].b, 1);
            effect->faceColour = GLKVector4Make(faceColors[colorSelected].r, faceColors[colorSelected].g, faceColors[colorSelected].b, 1);
            if (selected) {
                [effect drawOutline]; // this function prepares the shader
                glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
            }
            [effect prepareToDraw]; // same with this one
            glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
        }

    This is what it looks like; as you can see, most of the outlines are using the wrong dice, or none at all. Links to the full code:

        http://pastebin.com/yDKb3wrD  Dice.mm    // rendering stuff
        http://pastebin.com/eBK0pzrK  Effects.mm // shader stuff
        http://pastebin.com/5LtDAk8J             // my shaders, shouldn't be anything to do with them though

    TL;DR: trying to use 2 different shaders that use the same vertex data, but it's getting the models mixed up when rendering using both at the same time. Well, that's what I think is going wrong; quite stumped actually.

    Read the article

  • To create an inbox like Gmail in jQuery, would something like the Accordion effect work for opening

    - by snoopy
    I've been looking for a pre-built jQuery component that I could use to build an inbox for a messaging application: http://jqueryui.com/demos/accordion/ I was thinking that if I could close all of the elements in the Accordion, that would be equivalent to showing a list of unopened messages when the user first logs into their Inbox. And then, by clicking on the leaves of the Accordion, they can see each message one by one. Is this a good idea or is there some other jQuery plugin that would work better for this type of functionality?

    Read the article

  • Android: how to hide and then show a View with an animation effect?

    - by tomash
    I have a similar question to this one: http://stackoverflow.com/questions/2079074/update-layout-with-the-animation

    Basically: I have one vertical LinearLayout with an edittext, a button and then a list. I'd like to hide the edittext after pressing the button to make more space for the list (the button will move up). On a second press the edittext should be visible again. The edittext and button have "wrap_content" height. I'd like to hide and show the edittext with an animation. I succeeded in animating the hiding by overriding the Animation's applyTransformation:

        final float edittextheight = edittext.getHeight();
        // [....]
        @Override
        protected void applyTransformation(float interpolatedTime, Transformation t) {
            super.applyTransformation(interpolatedTime, t);
            android.view.ViewGroup.LayoutParams lp = edittext.getLayoutParams();
            lp.height = (int)(edittextheight*(1.0-interpolatedTime));
            edittext.setLayoutParams(lp);
        }

    Problem: I don't know how to calculate the height when animating the showing - edittext.getHeight() returns 0 when the widget is hidden, and in the layout definition I'm using "wrap_content". Help?

    Read the article

  • Add Artistic Effects to Your Pictures in Office 2010

    - by DigitalGeekery
    Do you ever wish you could add cool effects to images in your Office document pictures, but don’t have access to a graphics editor? Today we take a look at Artistic Effects, a new feature in Office 2010. Note: We will show you examples in Excel, but Artistic Effects are available in Word, Excel, and PowerPoint. To insert a picture into your Office document, click the Picture button on the Insert tab. Once you import your picture, the Picture Tools Format ribbon should be active. If not, click on the image. In the Adjust group, click on Artistic Effects. You will see a selection of effect previews in the dropdown list. Hover your cursor over the effects to use Live Preview to see what your picture will look like if that effect is applied. When you find an effect you like, just click to apply it to the image. There are also some additional Artistic Effect Options. Each effect has its own set of available options that can be adjusted by moving the sliders left or right. If you find you want to undo an effect after it has been applied, simply select the None option from the previews under Artistic Effects.

    Conclusion: Artistic Effects provides a really easy way to add professional looking effects to images in Office 2010 without the need to access graphics editing software. Check out some of our other Office 2010 articles like how to use advanced font ligatures, add video from the web to PowerPoint 2010, and preview before you paste in Office 2010.

    Read the article

  • How to add fog with a pixel shader (HLSL) in XNA?

    - by Mehdi Bugnard
    I started to make a small game in XNA, and recently I tried to add fog in a pixel shader (HLSL) with the Effect class from XNA. I searched online for tutorials and found many samples, but nothing wants to work in my game :-(. I had already added a fog effect to my game before and everything worked, because I used the BasicEffect class; but with the Effect class and HLSL, it's really more complicated. If somebody has an idea, it would be wonderful. Thanks again. Here is the HLSL code I use:

        // Both techniques share this same pixel shader.
        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            //return tex2D(Sampler, input.TextureCoordinate) * input.Color;
            float d = length(input.TextureCoordinate - cameraPos);
            float l = saturate((d-fogNear)/(fogFar-fogNear));
            float fogFactory = clamp((d - fogNear) / (fogFar - fogNear), 0, 1) * l;
            return tex2D(Sampler, input.TextureCoordinate) * lerp(input.Color, fogColor, fogFactory);
        }

    Here are the screenshots, with the effect and without the effect.

    Read the article

  • How can I compile SM 3.0 effects in D3D11 in SlimDX?

    - by jacker
        var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0", ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None, null, null, out str);
        var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode);

    This works fine, but if I try to use another shader model like 4.0 or 3.0 it throws an error on the new effect creation: E_FAIL: An undetermined error occurred (-2147467259). How do I compile older shaders? And I've read about device contexts, but I can't find any information on how to use them to maintain DX9 compatibility.

    Read the article

  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things) the following pieces of information:

    - Hardware IDs: BIGINTs
    - Storage Capacities: BIGINTs
    - Hardware Names: VARCHARs
    - World Wide Port Names: VARCHARs

    I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say, ID_NUMBER and CARDINAL. The same with Hardware Names, which are simple text, and WWPNs, which are hex strings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place and I'm wondering if anybody knows of a good way to hold this all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.
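    MySQL has no CREATE DOMAIN, so one lightweight middle ground is a single registry of semantic types that maps each one to both its storage type and its display rule, and is consulted when generating DDL and when formatting output. The application here is in Perl, but the idea is sketched below in Python, reusing the made-up type names from the question; the table and column names are illustrative.

        # Semantic-type registry: one place that knows both the SQL storage type
        # and how values of that type should be rendered.
        SEMANTIC_TYPES = {
            "ID_NUMBER":    {"sql": "BIGINT",       "fmt": lambda v: str(v)},
            "CARDINAL":     {"sql": "BIGINT",       "fmt": lambda v: format(v, ",")},
            "ENGLISH_WORD": {"sql": "VARCHAR(255)", "fmt": lambda v: v},
            "HEXSTRING":    {"sql": "VARCHAR(64)",  "fmt": lambda v: v.upper()},
        }

        # Column definitions reference semantic types, not raw SQL types.
        HARDWARE_COLUMNS = [("hw_id", "ID_NUMBER"), ("capacity", "CARDINAL"),
                            ("hw_name", "ENGLISH_WORD"), ("wwpn", "HEXSTRING")]

        def create_table_sql(table, columns):
            cols = ", ".join("%s %s" % (name, SEMANTIC_TYPES[sem]["sql"]) for name, sem in columns)
            return "CREATE TABLE %s (%s)" % (table, cols)

        def display(value, semantic_type):
            return SEMANTIC_TYPES[semantic_type]["fmt"](value)

        print(create_table_sql("hardware", HARDWARE_COLUMNS))
        print(display(123456789, "CARDINAL"))   # -> 123,456,789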

    Read the article

  • Setting effect variables in XNA

    - by Badescu Alexandru
    Hello! I am currently reading a book named "3D Graphics with XNA Game Studio 4.0" by Sean James and have some questions to ask. If I create an effect parameter named, let's say, SpecularPower, and I have in my effect a variable named SpecularPower, will doing something like

        effect.Parameters["SpecularPower"].SetValue(3)

    change the SpecularPower variable in my effect? And a second question, not regarding the book: if I have a spaceship and I've created a "boost" functionality that speeds up my spaceship, what effects should I implement to create the impression of high speed? I was thinking of making everything except my spaceship blurry, but I think there would be something missing. Any ideas? Regards, Alex Badescu

    Read the article

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy-paste this into the default OpenGL template created in Xcode. Why am I not seeing anything? :-) It is strange, as glDrawArrays(GL_TRIANGLES, 0, 3); works fine, but glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); gives BAD_ACCESS. Copy-paste this into the Xcode default OpenGL template:

    ViewController

        #import "ViewController.h"

        #define BUFFER_OFFSET(i) ((char *)NULL + (i))

        // Uniform index.
        enum {
            UNIFORM_MODELVIEWPROJECTION_MATRIX,
            UNIFORM_NORMAL_MATRIX,
            NUM_UNIFORMS
        };
        GLint uniforms[NUM_UNIFORMS];

        // Attribute index.
        enum {
            ATTRIB_VERTEX,
            ATTRIB_NORMAL,
            NUM_ATTRIBUTES
        };

        @interface ViewController () {
            GLKMatrix4 _modelViewProjectionMatrix;
            GLKMatrix3 _normalMatrix;
            float _rotation;
            GLuint _vertexArray;
            GLuint _vertexBuffer;
            NSArray* arrayOfVertex;
        }
        @property (strong, nonatomic) EAGLContext *context;
        @property (strong, nonatomic) GLKBaseEffect *effect;

        - (void)setupGL;
        - (void)tearDownGL;
        @end

        @implementation ViewController

        - (void)viewDidLoad {
            [super viewDidLoad];
            self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
            GLKView *view = (GLKView *)self.view;
            view.context = self.context;
            view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
            [self setupGL];
        }

        - (void)dealloc {
            [self tearDownGL];
            if ([EAGLContext currentContext] == self.context) {
                [EAGLContext setCurrentContext:nil];
            }
        }

        - (void)didReceiveMemoryWarning {
            [super didReceiveMemoryWarning];
            if ([self isViewLoaded] && ([[self view] window] == nil)) {
                self.view = nil;
                [self tearDownGL];
                if ([EAGLContext currentContext] == self.context) {
                    [EAGLContext setCurrentContext:nil];
                }
                self.context = nil;
            }
            // Dispose of any resources that can be recreated.
        }

        GLuint vertexBufferID;
        GLuint indexBufferID;

        static const GLfloat vertices[9] = {
            -0.5, -0.5, 0.5,
             0.5, -0.5, 0.5,
            -0.5,  0.5, 0.5
        };

        static const GLubyte indices[3] = { 0, 1, 2 };

        - (void)setupGL {
            [EAGLContext setCurrentContext:self.context];
            // [self loadShaders];
            self.effect = [[GLKBaseEffect alloc] init];
            self.effect.light0.enabled = GL_TRUE;
            self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);
            glEnable(GL_DEPTH_TEST);

            // glGenVertexArraysOES(1, &_vertexArray);
            // glBindVertexArrayOES(_vertexArray);

            glGenBuffers(1, &vertexBufferID);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

            glGenBuffers(1, &indexBufferID);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            glEnableVertexAttribArray(GLKVertexAttribPosition);
            glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified.
                                  3,                       // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4.
                                  GL_FLOAT,
                                  GL_FALSE,
                                  0,
                                  BUFFER_OFFSET(0));

            // glBindVertexArrayOES(0);
        }

        - (void)tearDownGL {
            [EAGLContext setCurrentContext:self.context];
            glDeleteBuffers(1, &_vertexBuffer);
            glDeleteVertexArraysOES(1, &_vertexArray);
            self.effect = nil;
        }

        #pragma mark - GLKView and GLKViewController delegate methods

        - (void)update {
            float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
            GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
            self.effect.transform.projectionMatrix = projectionMatrix;

            GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
            baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);

            // Compute the model view matrix for the object rendered with GLKit
            GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
            self.effect.transform.modelviewMatrix = modelViewMatrix;

            // Compute the model view matrix for the object rendered with ES2
            modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
            modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
            modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);

            _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
            _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

            _rotation += self.timeSinceLastUpdate * 0.5f;
        }

        int i;

        - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
            glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // glBindVertexArrayOES(_vertexArray);

            // Render the object with GLKit
            [self.effect prepareToDraw];
            //glDrawArrays(GL_TRIANGLES, 0, 3);

            // Render the object again with ES2
            // glDrawArrays(GL_TRIANGLES, 0, 3);
            glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices);
        }

        @end

    Read the article

  • How can I use a duration setting on .animate if it is inside the callback from a .fadeOut effect?

    - by Jannis
    I am trying to fade the content of section#secondary out, and once the content has been faded out, the parent (section#secondary) should animate 'shut' in a slider animation. All of this is working; however, the durations are not, and I cannot figure out why. Here is my code:

    HTML

        <section id="secondary">
            <a href="#" class="slide_button">&laquo;</a> <!-- slide in/back button -->
            <article>
                <h1>photos</h1>
                <div class="album_nav"><a href="#">photo 1 of 6</a> | <a href="#">create an album</a></div>
                <div class="bar">
                    <p class="section_title">current image title</p>
                </div>
                <section class="image">
                    <div class="links">
                        <a class="_back album_link" href="#">« from the album: james new toy</a>
                        <nav>
                            <a href="#" class="_back small_button">back</a>
                            <a href="#" class="_next small_button">next</a>
                        </nav>
                    </div>
                    <img src="http://localhost/~jannis/3781_doggie_wonderland/www/preview/static/images/sample-image-enlarged.jpg" width="418" height="280" alt="" />
                </section>
            </article>
            <footer>
                <embed src="http://localhost/~jannis/3781_doggie_wonderland/www/preview/static/flash/secondary-footer.swf" wmode="transparent" width="495" height="115" type="application/x-shockwave-flash" />
            </footer>
        </section> <!-- close secondary -->

    jQuery

        // =============================
        // = Close button (slide away) =
        // =============================
        $('a.slide_button').click(function() {
            $(this).closest('section#secondary').children('*').fadeOut('slow', function() {
                $('section#secondary').animate({'width':'0'}, 3000);
            });
        });

    Because the content of section#secondary is variable, I use the * selector. What happens is that the fadeOut uses the appropriate slow speed, but as soon as the callback fires (once the content is faded out) the section#secondary animates to width: 0 within a couple of milliseconds, and not over the 3000 ms (3 sec) I set the animation duration to. Any ideas would be appreciated. PS: I cannot post an example at this point, but since this is more a matter of jQuery theory I don't think an example is necessary here. Correct me if I am wrong.

    Read the article

  • How do you handle descriptive database table names and their effect on foreign key names?

    - by Carvell Fenton
    Hello, I am working on a database schema, and am trying to make some decisions about table names. I like at least somewhat descriptive names, but then when I use suggested foreign key naming conventions, the result seems to get ridiculous. Consider this example: suppose I have the table session_subject_mark_item_info and it has a foreign key that references sessionSubjectID in the session_subjects table. Now when I create the foreign key name based on fk_[referencing_table]__[referenced_table]_[field_name], I end up with this madness:

        fk_session_subject_mark_item_info__session_subjects_sessionSubjectID

    Would this type of foreign key name cause me problems down the road, or is it quite common to see this? Also, how do the more experienced database designers out there handle the conflict between descriptive naming for readability vs. the long names that result? I am using MySQL and MySQL Workbench if that makes any difference. Thanks!
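    One concrete constraint worth knowing here: MySQL caps identifier length at 64 characters, and the example constraint name above is already 68 characters, so it would be rejected outright. A common workaround is to keep the descriptive prefix but truncate over-long names and append a short hash for uniqueness. A small sketch (the helper name and hash length are just illustrative choices):

        import hashlib

        MAX_IDENTIFIER = 64  # MySQL's identifier length limit

        def fk_name(referencing, referenced, column):
            """Build fk_<referencing>__<referenced>_<column>, shortened if it exceeds the limit."""
            name = "fk_%s__%s_%s" % (referencing, referenced, column)
            if len(name) <= MAX_IDENTIFIER:
                return name
            # Keep a readable prefix and append a short hash of the full name for uniqueness.
            digest = hashlib.sha1(name.encode()).hexdigest()[:8]
            return name[:MAX_IDENTIFIER - 9] + "_" + digest

        print(fk_name("session_subject_mark_item_info", "session_subjects", "sessionSubjectID"))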

    Read the article

  • Flex: Why does setting scaleX/Y in MXML affect the component's size but setting it in ActionScript do

    - by ChrisInCambo
    Hi, I'm playing around with scaleX/Y on the Canvas tag and have noticed some strange behaviour. When I set the scale in MXML, the width and height of the canvas are adjusted accordingly. For example, if I have a canvas like this:

        <mx:Canvas width="1000" height="1000" scaleX="0.1" scaleY="0.1" />

    The canvas now appears on screen to have a width and height of 100, and if inside my creationComplete callback I check the width and height properties they are indeed 100. But if I do exactly the same thing except I set the scaleX/Y property from ActionScript, the canvas on screen appears to have a width and height of 100 as expected, but when I check the width and height properties of the canvas they are still at the previous values of 1000. Could anyone help me understand what is going on and also tell me if there is any method that will refresh the width and height values so that they are correct? Thanks, Chris

    Read the article

  • Side-effect gotchas in Python/numpy? Horror stories and narrow escapes wanted

    - by shabbychef
    I am considering moving from Matlab to Python/numpy for data analysis and numerical simulations. I have used Matlab (and SML-NJ) for years, and am very comfortable in the functional environment without side effects (barring I/O), but am a little reluctant about the side effects in Python. Can people share their favorite gotchas regarding side effects, and if possible, how they got around them? As an example, I was a bit surprised when I tried the following code in Python:

        lofls = [[]] * 4   # an accident waiting to happen!
        lofls[0].append(7) # not what I was expecting...
        print lofls        # gives [[7], [7], [7], [7]]

        # instead, I should have done this (I think)
        lofls = [[] for x in range(4)]
        lofls[0].append(7) # only appends to the first list
        print lofls        # gives [[7], [], [], []]

    thanks in advance
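    Another gotcha in the same family, and one that tends to bite Matlab converts in particular: basic slicing in numpy returns a view, not a copy, so modifying the slice modifies the original array (Matlab slices behave like copies). A short sketch of the behaviour and the .copy() escape hatch:

        import numpy as np

        a = np.arange(5)        # array([0, 1, 2, 3, 4])
        b = a[1:4]              # a *view* onto a, not a copy
        b[0] = 99
        print(a)                # [ 0 99  2  3  4] -- the original changed too

        c = a[1:4].copy()       # explicit copy when you really want independence
        c[0] = -1
        print(a)                # unchanged this time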

    Read the article
