Search Results

Search found 2173 results on 87 pages for 'alpha transparency'.

Page 68/87

  • Parenting Opengl with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this:

        public class OpenGLSquare extends Actor {
            private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
            private static Matrix4 matrix = null;
            private static Vector2 temp = new Vector2();

            public static void setMatrix4(Matrix4 mat) {
                matrix = mat;
            }

            @Override
            public void draw(SpriteBatch batch, float arg1) {
                renderer.begin(matrix, GL10.GL_TRIANGLES);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.end();
            }
        }

    In my screen class I have this; I call it in the constructor:

        MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
        OpenGLSquare square = new OpenGLSquare();
        square.setX0(100);
        square.setY0(200);
        square.setX1(400);
        square.setY1(280);
        square.color.set(Color.BLUE);
        square.setSize();
        //spriteLab.addActorAt(0, clock);
        spriteLab.addActor(square);
        stage.addActor(spriteLab);

    And for the render method in the screen I have:

        @Override
        public void render(float arg0) {
            this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            stage.draw();
            stage.act(Gdx.graphics.getDeltaTime());
        }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at that exact position on the screen rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(), which fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong?
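
    A plausible direction (a sketch only, untested, and assuming the pre-1.0 scene2d API where Actor exposes public x/y fields and a public parent reference): the SpriteBatch applies the Group's transform for regular actors, but the ImmediateModeRenderer knows nothing about it, so the quad's corners have to be converted to stage coordinates by hand. Interrupting the batch around the raw GL calls also keeps the renderer from corrupting the batch state that positions the other children. The localToStage helper below is hypothetical:

        // Hypothetical helper: accumulates the offsets of all ancestor Groups.
        // Assumes the old scene2d API with public x/y fields and a parent reference.
        private Vector2 localToStage(float localX, float localY) {
            float sx = localX, sy = localY;
            for (Actor a = parent; a != null; a = a.parent) {
                sx += a.x;
                sy += a.y;
            }
            return new Vector2(sx, sy);
        }

        @Override
        public void draw(SpriteBatch batch, float parentAlpha) {
            batch.end();                       // flush pending sprites before issuing raw GL
            Vector2 lo = localToStage(x0, y0);
            Vector2 hi = localToStage(x1, y1);
            renderer.begin(matrix, GL10.GL_TRIANGLES);
            float[][] corners = {
                {lo.x, lo.y}, {lo.x, hi.y}, {hi.x, hi.y},  // first triangle
                {hi.x, hi.y}, {hi.x, lo.y}, {lo.x, lo.y}   // second triangle
            };
            for (float[] c : corners) {
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(c[0], c[1], 0f);
            }
            renderer.end();
            batch.begin();                     // resume so the remaining children draw normally
        }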

    Read the article

  • Zooming in isometric engine using XNA

    - by Yheeky
    I'm currently working on an isometric game engine and right now I'm looking for help with my zoom function. On my tilemap there are several objects, some of them selectable. When a house (texture size 128 x 256) is placed on the map I create an array containing all its pixels (= 32768 pixels). Each pixel has an alpha value, and I check whether that value is bigger than 200, in which case the pixel is considered to belong to the building. So if the mouse cursor is on such a pixel, the building will be selected - pixel collision. Now, I've already implemented my zooming function, which works quite well. I use a scale variable which changes my calculations when drawing all map items. What I'm looking for right now is a precise way to find out if a zoomed-out/in house is selected. My formula works for values like 0.5 (zoomed out) or 2 (zoomed in), but not for values in between. Here is the code I use for the pixel index:

        var pixelIndex = (int)(((yPos / (Scale * Scale)) * width) + (xPos / Scale) + 1);

    Example: let's assume my mouse is over pixel coordinate 38/222 on the original house texture. Using the code above we get the following pixel index:

        var pixelIndex = ((222 / (1 * 1)) * 128) + (38 / 1) + 1;
                       = (222 * 128) + 39
                       = 28416 + 39
                       = 28455

    If we now zoom out to scale 0.5, the texture size changes to 64 x 128 and the number of pixels decreases from 32768 to 8192. Of course our mouse point also changes with the scale, to 19/111. The formula makes it easy to calculate the original pixel index using our new coordinates:

        var pixelIndex = ((111 / (0.5 * 0.5)) * 64) + (19 / 0.5) + 1;
                       = (444 * 64) + 39
                       = 28416 + 39
                       = 28455

    But now comes the problem. If I zoom out to just scale 0.75, it no longer works. The pixel count changes from 32768 to 18432, since the texture size is 96 x 192. The mouse point is transformed to 28/166. The formula gives me a wrong pixel index:

        var pixelIndex = ((166 / (0.75 * 0.75)) * 96) + (28 / 0.75) + 1;
                       = (295.11 * 96) + 38.33
                       = 28330.66 + 38.33
                       = 28369

    Does anyone have a clue what's wrong in my code? It must be the first part (28330.66) that causes the calculation problem. Thanks! Yheeky
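
    For what it's worth, here is a sketch (plain Java, hypothetical names) of a scale-independent lookup. The trick is to undo the zoom first, in floating point, and truncate exactly once at the end; folding Scale into the flattened index, as above, truncates twice and multiplies the error by the row width. Note that at a scale like 0.75 several original texels collapse onto one screen pixel (166 / 0.75 = 221.33, so the original row could be 221 or 222), so recovering exactly 28455 is impossible in principle; for alpha-based picking, sampling the nearest texel is normally good enough:

        // Sketch: map a mouse position on the scaled sprite back into the
        // ORIGINAL (unscaled) alpha array. width/height describe the original
        // texture; scale is the current zoom factor (e.g. 0.75).
        static int pixelIndex(float mouseX, float mouseY, int width, int height, float scale) {
            int texX = (int) (mouseX / scale);   // undo the zoom, truncate once
            int texY = (int) (mouseY / scale);
            texX = Math.min(Math.max(texX, 0), width - 1);    // clamp at the edges
            texY = Math.min(Math.max(texY, 0), height - 1);
            return texY * width + texX;          // flatten only after rounding
        }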

    Read the article

  • preview form using javascript in popup

    - by user1015309
    Please, I need some help with previewing a form in a popup. I have a form, quite big, so I added a preview option that shows it as a popup. The lightbox form popup works well; the problem I now have is with the passform() function, which should pass the inputs (text field, select, checkbox, radio) into the popup page for preview on click. Below are my JavaScript and HTML code. I left the CSS and some HTML out, because I think they're not needed. I will appreciate your help. Thank you.

    The JavaScript:

        function gradient(id, level) {
            var box = document.getElementById(id);
            box.style.opacity = level;
            box.style.MozOpacity = level;
            box.style.KhtmlOpacity = level;
            box.style.filter = "alpha(opacity=" + level * 100 + ")";
            box.style.display = "block";
            return;
        }

        function fadein(id) {
            var level = 0;
            while (level <= 1) {
                setTimeout("gradient('" + id + "'," + level + ")", (level * 1000) + 10);
                level += 0.01;
            }
        }

        // Open the lightbox
        function openbox(formtitle, fadin) {
            var box = document.getElementById('box');
            document.getElementById('shadowing').style.display = 'block';
            var btitle = document.getElementById('boxtitle');
            btitle.innerHTML = formtitle;
            if (fadin) {
                gradient("box", 0);
                fadein("box");
            } else {
                box.style.display = 'block';
            }
        }

        // Close the lightbox
        function closebox() {
            document.getElementById('box').style.display = 'none';
            document.getElementById('shadowing').style.display = 'none';
        }

        // Pass form fields into variables
        var divexugsotherugsexams1 = document.getElementById('divexugsotherugsexams1');
        var exugsotherugsexams1 = document.form4.exugsotherugsexams1.value;

        function passform() {
            divexugsotherugsexams1.innerHTML = document.form4.exugsotherugsexams1.value;
        }

    The HTML (with just one text field to try):

        <p><input name="submit4" type="submit" class="button2" id="submit4" value="Preview Note" onClick="openbox('Preview Note', 1)"/></p>
        <div id="shadowing"></div>
        <div id="box">
            <span id="boxtitle"></span>
            <div id="divexugsotherugsexams1"></div>
            <script>document.write('<PARAM name="SRC" VALUE="' + exugsotherugsexams1 + '">')</script>
            <a href="#" onClick="closebox()">Close</a>
        </div>

    Read the article

  • Best way to determine surface normal for a group of pixels?

    - by Paul Renton
    One of my current endeavors is creating a 2D destructible terrain engine for iOS Cocos2D (see https://github.com/crebstar/PWNDestructibleTerrain). It is in its infant stages no doubt, but I have made significant progress since starting a couple weeks ago. However, I have run into a bit of a performance roadblock with calculating surface normals. Note: for my destructible terrain engine, an alpha of 0 is considered to not be solid ground. The method posted below works just great for small rectangles, such as n < 30. Anything above 30 causes a dip in the frame rate, and if you approach 100x100 you might as well read a book while the sprite attempts to traverse the terrain. At the moment this is the best I can come up with for altering the angle of a sprite as it roams across terrain (to get the angle for a sprite's orientation, just take the dot product of 100 * normal with the (1,0) vector):

        -(CGPoint)getAverageSurfaceNormalAt:(CGPoint)pt withRect:(CGRect)area {
            float avgX = 0;
            float avgY = 0;
            ccColor4B color = ccc4(0, 0, 0, 0);
            CGPoint normal;
            float len;

            for (int w = area.size.width; w >= -area.size.width; w--) {
                for (int h = area.size.height; h >= -area.size.height; h--) {
                    CGPoint pixPt = ccp(w + pt.x, h + pt.y);
                    if ([self pixelAt:pixPt colorCache:&color]) {
                        if (color.a != 0) {
                            avgX -= w;
                            avgY -= h;
                        } // end inner if
                    } // end outer if
                } // end inner for
            } // end outer for

            len = sqrtf(avgX * avgX + avgY * avgY);
            if (len == 0) {
                normal = ccp(avgX, avgY);
            } else {
                normal = ccp(avgX/len, avgY/len);
            } // end if
            return normal;
        } // end get

    My problem is that I have sprites that require larger rectangles for their movement to look realistic. I considered caching all the surface normals, but this led to the issues of knowing when to recalculate them and of the calculations themselves being quite expensive (also, how large should the blocks be?). Another smaller issue is that I don't know how to properly treat the case when the length is 0. So I am stuck... Any advice from the community would be greatly appreciated! Is my method the best possible one, or should I rethink the algorithm? I am new to game development and always looking to learn new tips and tricks.
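
    One concrete shape for the cache being considered here (a sketch in plain Java rather than Objective-C, untested, hypothetical names): keep summed-area tables (prefix sums) over the solidity mask. Three tables - the count of solid pixels and the sums of their x and y coordinates - make the same average-offset query O(1) per call instead of O(w*h), and they only need rebuilding when the terrain actually changes, which answers the "when to recalculate" question. When len == 0 the window is empty or perfectly symmetric around the point, so there is no meaningful surface direction; returning the zero vector (which the original code effectively does) and keeping the sprite's previous orientation is one reasonable convention.

        // Sketch: prefix sums over the solidity mask. Rebuild after each
        // destruction event; queries are then O(1) regardless of rect size.
        final class NormalCache {
            private final long[][] count, sumX, sumY;   // inclusive prefix sums
            private final int w, h;

            NormalCache(boolean[][] solid) {            // solid[x][y]: alpha != 0
                w = solid.length;
                h = solid[0].length;
                count = new long[w + 1][h + 1];
                sumX  = new long[w + 1][h + 1];
                sumY  = new long[w + 1][h + 1];
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++) {
                        long s = solid[x][y] ? 1 : 0;
                        count[x+1][y+1] = s     + count[x][y+1] + count[x+1][y] - count[x][y];
                        sumX [x+1][y+1] = s * x + sumX [x][y+1] + sumX [x+1][y] - sumX [x][y];
                        sumY [x+1][y+1] = s * y + sumY [x][y+1] + sumY [x+1][y] - sumY [x][y];
                    }
            }

            // Sum of a prefix table over the inclusive rectangle (x0,y0)..(x1,y1).
            private static long rect(long[][] p, int x0, int y0, int x1, int y1) {
                return p[x1+1][y1+1] - p[x0][y1+1] - p[x1+1][y0] + p[x0][y0];
            }

            // Same quantity the nested loops compute: minus the summed offsets from (px,py).
            float[] averageNormal(int px, int py, int halfW, int halfH) {
                int x0 = Math.max(px - halfW, 0), x1 = Math.min(px + halfW, w - 1);
                int y0 = Math.max(py - halfH, 0), y1 = Math.min(py + halfH, h - 1);
                long n   = rect(count, x0, y0, x1, y1);
                float ax = n * px - rect(sumX, x0, y0, x1, y1);
                float ay = n * py - rect(sumY, x0, y0, x1, y1);
                float len = (float) Math.sqrt(ax * ax + ay * ay);
                // len == 0: empty or symmetric window, no usable surface direction.
                return len == 0 ? new float[] {0, 0} : new float[] {ax / len, ay / len};
            }
        }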

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and whether they can be used in practice.

    Hybrid rendering. Now, I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known theory. However, I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects?

    Rendered physics. I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw in a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method. This could be extended in a few ways: for larger (non-bullet) objects you render a larger portion of the screen; if you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that also works like the traditional slow-moving iterative physics test; and you could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick. So, are these techniques in use anywhere, and/or are they practical at all?
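
    The color-ID picking idea in the second technique is straightforward to sketch. Below is a minimal software version in plain Java (no GPU; a real implementation would render object IDs with a shader and read the center pixel back from the framebuffer): every object rasterizes with a unique flat "color" (here just an int id), a depth test keeps the nearest one, and the center pixel identifies the hit.

        import java.util.Arrays;

        // Minimal software sketch of color-ID picking from the gun's viewpoint.
        final class PickingSketch {
            static final int SIZE = 64;                 // tiny "screen" from the gun's view
            final int[] frame = new int[SIZE * SIZE];   // packed object ids, 0 = miss
            final float[] depth = new float[SIZE * SIZE];

            PickingSketch() { Arrays.fill(depth, Float.MAX_VALUE); }

            // Draw an axis-aligned rect at distance z, tagged with an object id.
            void drawRect(int id, int x, int y, int w, int h, float z) {
                for (int j = y; j < y + h; j++)
                    for (int i = x; i < x + w; i++) {
                        if (i < 0 || j < 0 || i >= SIZE || j >= SIZE) continue;
                        int p = j * SIZE + i;
                        if (z < depth[p]) { depth[p] = z; frame[p] = id; }  // depth test
                    }
            }

            // The "1x1 pixel window": the center pixel says what the bullet hits.
            int hitObject() { return frame[(SIZE / 2) * SIZE + SIZE / 2]; }

            public static void main(String[] args) {
                PickingSketch s = new PickingSketch();
                s.drawRect(1, 0, 0, 64, 64, 100f);   // far wall covers everything
                s.drawRect(2, 24, 24, 16, 16, 40f);  // nearer crate covers the center
                System.out.println("bullet hits object " + s.hitObject());  // prints 2
            }
        }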

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I am attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            public class Character extends Sprite {
                public var _strength:int; // etc.
                public function Character(_class:Object) {
                    _strength = _class._strength; // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength: 50 // etc.
                };
                // etc.
            }
        }

        // Some other class in which characters/monsters are created.
        // Create a new instance of Character
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is making each character class/monster type its own subclass of Character, but I'm not sure that would be efficient or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?
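
    For comparison, the same data-driven pattern looks like this in Java (illustrative only; all names are hypothetical, and records need Java 16+). The point of the pattern is that a monster type is one immutable, shared data record, so adding monster #151 means adding one data entry, not one class:

        import java.util.Map;

        // One immutable stat record per monster type, shared by all instances.
        record CharacterClass(String name, int strength, int hitPoints) {}

        // One Character class for everything; per-instance state lives here.
        final class Character {
            final CharacterClass type;   // shared static definition
            int currentHp;               // mutable per-instance state

            Character(CharacterClass type) {
                this.type = type;
                this.currentHp = type.hitPoints();
            }
        }

        final class Bestiary {
            static final Map<String, CharacterClass> MONSTERS = Map.of(
                "monster1", new CharacterClass("Monster1", 50, 120),
                "monster2", new CharacterClass("Monster2", 65, 90)
            );

            public static void main(String[] args) {
                Character myMonster = new Character(MONSTERS.get("monster1"));
                System.out.println(myMonster.type.name() + " str=" + myMonster.type.strength());
            }
        }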

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented session CON3870, "What's New in JSF: A Complete Tour of JSF 2.2," in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that "JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent." He pointed out that since JSF was introduced at the 2001 JavaOne keynote, it has had a long and successful run and has found a home in applications where the UI logic resides entirely on the server, alongside the model. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers - something Project Avatar is focused on - by authoring their applications so the UI logic runs on the client and communicates with the back end via RESTful web services. "Most importantly," remarked Burns, "JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do." His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain's session, CON5133, "Techniques for Responsive Real-Time Web UIs." Burns explained that JSF has endured because "we still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use." It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:

    * Separates component semantics from rendering,
    * Allows components to "own" their little patch of the UI (encode/decode),
    * And offers a well-defined lifecycle: inversion of control.

    Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:

    * Public java.net projects: spec at http://jsf-spec.java.net/ and impl at http://jsf.java.net/ (open source: GPL + Classpath Exception)
    * Mailing lists: [email protected] (publicly readable archive; JSPA-signed members read/write) and [email protected] (publicly readable archive; any java.net member read/write). All mail sent to jsr344-experts is also sent to users.
    * Issue trackers: spec at http://jsf-spec.java.net/issues/ and impl at http://jsf.java.net/issues/

    JSF 2.2, which is JSR 344, has a Public Review Draft planned for December 2012, with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.
Six Big Ticket Features of JSF 2.2

    Burns summarized the six big ticket features of JSF 2.2:

    * HTML5 friendly markup support (pass-through attributes and elements)
    * Faces Flows
    * Cross-site request forgery protection
    * Loading Facelets via ResourceHandler
    * File upload component
    * Multi-templating

    He explained that he called it "HTML5 friendly" because there is really nothing HTML5-specific about it -- it could just as well be HTML 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library written specifically to take advantage of them. It gives page authors the ability to write their pages in plain HTML5 while still taking advantage of the server side available in JSF. He presented a demo showing JSF 2.2's ability to leverage the expressiveness of HTML5. Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2's cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2's file upload component and explained that the final specification will have both Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then went on to explain JSF 2.2's capacity to load Facelets through ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low-maintenance way of staying in touch with JSF developments, go to JSF's Twitter page, where important updates are offered every month or so.

    Read the article

  • Unconvert Text File from Binary Format

    - by Hammer Bro.
    I've got a rather large CSV file (~700MB) which I know to consist of lines of 27-character alphanumeric hashes; no commas or anything fancy. Somehow, during its migration from Windows to Linux (via WinSCP and then a few regular SCPs), it has been converted into some kind of binary format I am unfamiliar with. If I open the file in vi, everything appears fine, and it says [converted] at the bottom, although I know it's not a line-endings issue (and dos2unix doesn't help). If I head the file, it looks proper except for a "ÿþ" at the beginning of the first line. If I open the file in nano, however, I see the "ÿþ" at the start and then "^@" before every character (even newlines and EOF). If I try to re-save or copy the file (say via head file.csv > short.txt), this special encoding is preserved. I copied the first ten lines out of vi (which displays it properly) into my Windows clipboard via my SSH client, then pasted them into a new text file, test.txt. This file is visually identical when opened in vi (and similar through head, minus the "ÿþ"), although it's roughly half the file size. Additionally:

        file test.txt
        test.txt: ASCII text
        file short.txt
        short.txt:

    I have no idea what format this once-text file got converted to (it's notoriously hard to search the internet for symbols), but surely there must be some way to convert it back. Any ideas?
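
    For what it's worth, the symptoms described are exactly what UTF-16LE text looks like when viewed as 8-bit text: "ÿþ" is the FF FE byte-order mark read as Latin-1, and the "^@"s are the zero high bytes of each 16-bit character. A sketch of a converter in Java (any iconv-style tool would do the same job; this naive version holds the whole file in memory, so a 700MB file would want a streaming Reader/Writer variant instead):

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;

        // Sketch: decode a BOM-prefixed UTF-16 file and rewrite it as UTF-8.
        // Java's UTF_16 charset honors the leading FF FE byte-order mark,
        // so the little-endian file described above decodes correctly.
        public class Utf16ToUtf8 {
            public static void main(String[] args) throws IOException {
                byte[] raw = Files.readAllBytes(Paths.get(args[0]));
                String text = new String(raw, StandardCharsets.UTF_16);
                Files.write(Paths.get(args[1]), text.getBytes(StandardCharsets.UTF_8));
            }
        }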

    Read the article

  • Create SAMBA node trust relationship to Windows 2003 PDC server

    - by Rod Regier
    I am having problems creating a trust relationship between an OpenVMS/IA64 node running V/IA64 8.3-1H1, TCPIP 5.6 ECO 5, CIFS 1.1 ECO1 PS11 (Samba 3.0.28a) and a Windows 2003 server running as a PDC. I do have two other OpenVMS/Alpha nodes running V/A 8.3, TCPIP 5.6 ECO 4, CIFS 1.1 ECO1 PS10 (Samba 3.0.28a) with working trust relationships to the same Windows 2003 server. Looking for assistance in resolving the trust "handshake".

    Details from the failing node. Unless otherwise noted, corresponding files on the working nodes are similar or identical.

    SMB.CONF extract:

        [global]
        server string = Samba %v running on %h (OpenVMS)
        workgroup = WILMA
        netbios name = %h
        security = DOMAIN
        encrypt passwords = Yes
        name resolve order = lmhosts host wins bcast
        password server = *
        log file = /samba$log/log.%m
        printcap name = /sys$manager/ucx$printcap.dat
        guest account = DYMAX
        print command = print %f/queue=%p/delete/passall/name="""""%s"""""
        lprm command = delete/entry=%j
        map archive = No
        printing = OpenVMS

    Output of net rpc testjoin:

        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.RPC_CLIENT]CLI_PIPE.C;1:(2443)
        get_schannel_session_key: could not fetch trust account password for domain 'WILMA'
        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(72)
        net_rpc_join_ok: failed to get schannel session key from server W2K3AD2 for domain WILMA. Error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO
        Join to domain 'WILMA' is not valid

    Output of net rpc join "-Uaccount%password":

        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
        Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC.C;1:(322)
        error storing domain sid for WILMA
        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
        Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(409)
        error storing domain sid for WILMA
        Unable to join domain WILMA.

    Example from one of the working nodes:

        net rpc testjoin
        Join to 'WILMA' is OK

    Read the article

  • Replacing latex with unicode symbols

    - by Elazar Leibovich
    Often, during a conversation or an email, or at a forum, I would like to type some math, but I don't need full equation support; Unicode symbols should suffice. What I need is an easy way to type math-related Unicode symbols. Since I already know LaTeX, it makes sense to use the LaTeX symbol mnemonics to type the math symbols. What I currently did is write an AutoHotKey script which automatically replaces \latexSymbol with the corresponding Unicode symbol, using the "hotstrings" AutoHotKey feature. However, the AutoHotKey hotstrings proved unstable for many strings: having a couple of tens of lines would cause AHK to fail to recognize the strings from time to time. Any other solution? (No, Alt+unicode number isn't convenient enough.) Attached is my AHK script. The PutUni function is taken from here.

        ::\infty::
        PutUni("e2889e")
        return
        ::\sum::
        PutUni("e28891")
        return
        ::\int::
        PutUni("e288ab")
        return
        ::\pm::
        PutUni("c2b1")
        return
        ::\alpha::
        PutUni("c991")
        return
        ::\beta::
        PutUni("c992")
        return
        ::\phi::
        PutUni("c9b8")
        return
        ::\delta::
        PutUni("ceb4")
        return
        ::\pi::
        PutUni("cf80")
        return
        ::\omega::
        PutUni("cf89")
        return
        ::\in::
        PutUni("e28888")
        return
        ::\notin::
        PutUni("e28889")
        return
        ::\iff::
        PutUni("e28794")
        return
        ::\leq::
        PutUni("e289a4")
        return
        ::\geq::
        PutUni("e289a5")
        return
        ::\sqrt::
        PutUni("e2889a")
        return
        ::\neq::
        PutUni("e289a0")
        return
        ::\subset::
        PutUni("e28a82")
        return
        ::\nsubset::
        PutUni("e28a84")
        return
        ::\nsubseteq::
        PutUni("e28a88")
        return
        ::\subseteq::
        PutUni("e28a86")
        return
        ::\prod::
        PutUni("e2888f")
        return
        ::\N::
        PutUni("e28495")
        return

    Read the article

  • I must clear my cmos to be able to boot

    - by Fredou
    I have had this Asus P7P55D-E Pro for about 8 months (got it last July), and for the last 3-4 days I cannot boot without clearing my CMOS. What I have is:

        Seasonic M12D 750W
        ASUS P7P55D-E Pro
        Intel Core i5 760 Quad Core Processor Lynnfield LGA1156
        XFX GeForce 8800 GT Alpha Dog 512MB DDR3 Standard (PV-T88P-YDF4)
        2x Corsair XMS3 CMX4GX3M2A1600C7 4GB DDR3 (2x2GB) DDR3-1600 CL 7-8-7-20

    I tried removing all the unnecessary stuff: HD/DVD/PCI cards/USB cables/etc. I tried with only one DIMM filled instead of my four, each one individually; it didn't work. I tried changing the battery (there go a few dollars to nowhere); it didn't work. If I don't reset the CMOS, it sometimes gets stuck on the RAM LED, sometimes on the BOOT DEVICE LED; when that happens, it hangs at CPU speed detection. When I boot right after the reset, I MUST click the F2 option (boot with default BIOS settings). If I go into the BIOS and save/restart, I have to reset it again. Once booted, everything is rock solid stable - I tried memtest, CPU stress tests, etc., without issue. What should be my next step? Trying a new PSU (I'd need to find one)? Doing an RMA (I need this motherboard since it's my only computer)? Something else?

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit Ethernet interfaces on the inside. We currently have budget for 2 Gbit/s. If we exceed that rate by more than 5% average for a month, we'll be charged for the whole 10 Gbit/s capacity - quite a step up in dollar terms. So, I want to limit this to 2 Gbit/s on the 10GbE interface. The TBF filter might be ideal, but this comment is of concern:

        On all platforms except for Alpha, it is able to shape up to 1mbit/s of
        normal traffic with ideal minimal burstiness, sending out data exactly at
        the configured rates.

    Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given in the Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":

        tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
        tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps

    How is the 256 kbit/s rate calculated? In this example, 32000bps means 32,000 bytes per second, since tc uses bps to mean bytes per second. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake - I tested this and it gave a rate close to 256K, but not exactly that.
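
    For what it's worth, the arithmetic in that example works out directly: tc reads rate 32000bps as 32,000 bytes/s, and 32,000 bytes/s x 8 bits/byte = 256,000 bits/s, i.e. the advertised 256 kbit/s. burst and limit only bound how many bytes may be sent at once and how much may queue; they do not change the long-run rate. A hedged sketch of a 2 Gbit/s version (untested, values illustrative; the tbf man page advises a bucket of at least rate/HZ, so at HZ=1000 a 2 Gbit/s rate wants roughly 250 kB or more of burst):

        # Sketch only - parameter values are illustrative and need tuning on real hardware.
        tc qdisc add dev eth0 root tbf rate 2gbit burst 1mb latency 50ms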

    Read the article

  • Quick, Linux-compatible unit-aware calculator

    - by endolith
    I want to be able to press a keyboard combination, start typing a mathematical expression that includes units and slightly advanced math (not just a four-function calculator), and get a result immediately, in units that I specify, that I can copy and paste. Currently I open Firefox and press Ctrl+K, type in the search box, and it usually gives me a result in the drop-down from Google Calculator. It doesn't always, though, so I press "=" at the end, wait for a result, remove the equals sign, wait for a result, realize it doesn't understand the way I typed a unit, open the result in a new tab, etc. It sucks. Wolfram Alpha is smarter, but very much slower, and the output is all images, not text, and I don't have a quick widget for it, if such a thing could even exist. GNU units has a ton of units, which is great, and I can define my own units, which is great, but they have to be written in specific, unintuitive ways, it doesn't handle much advanced math, and I'd need to open a terminal, start units, etc. I hate the command line. I wasted a lot of time trying to make front-ends for units in Deskbar and Launchy, but I'm not a real coder and I don't use either of those anymore. Any other solutions or enhancements of these?
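
    For readers unfamiliar with GNU units, it can also be invoked non-interactively, which is what any launcher front-end ends up wrapping; a typical call and its output look roughly like this (illustrative, from a standard installation):

        units '100 mph' 'km/hour'
                * 160.9344
                / 0.0062137119

    The two result lines mean "multiply by" and "divide by" the conversion factor, which is part of what makes the raw output unintuitive to copy and paste directly.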

    Read the article

  • Windows AD DNS: Event ID 5504

    - by Chris_K
    Two of my AD controllers (both running the DNS service) appear to be having a similar issue. Both are throwing lots of events in the DNS event log that look like this:

        Event Type: Information
        Event Source: DNS
        Event Category: None
        Event ID: 5504
        Date: 5/24/2010
        Time: 11:51:38 AM
        User: N/A
        Computer: ALPHA
        Description:
        The DNS server encountered an invalid domain name in a packet from 76.74.137.6.
        The packet will be rejected. The event data contains the DNS packet.

    That comes paired with the same event, at the same time, for a packet from 76.74.137.7 as well. I know this is "Information", not an error, but since it is new and different it bothers me (yes, I fear unexplained change!). Both machines are running Windows 2003 R2 SP2. The DNS servers are not exposed to the internet. Both DNS servers are configured to use OpenDNS as forwarders. For both servers, this started about a week ago. Any thoughts on: 1) should I be concerned? 2) how can I stop/fix this? To keep it interesting, I have a third AD/DNS box - same domain, different Active Directory site, same forwarders - yet it doesn't have this issue.

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now."

    The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness.

    Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010, published on Monday 17 May. It is OK to single out the ICT sector; it simply is the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time.

    The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP, and to some extent through DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :(

    Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".
    The missed opportunity (in this case, not following up on the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, with my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short.

    Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe retake global leadership on openness, public sector reform, and economic growth:

    * A strong European software strategy centred around open standards based interoperability, by 2011.
    * An ambitious new eCommission strategy for 2011-15, focused on migration to open standards by 2015.
    * Aligning the IT portfolio across the Commission into one Digital Agenda DG, by 2012.
    * Focusing all best-practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running).
    * Prioritizing public sector needs in global standardization over European standardization, by 2014.

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on employee engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.

    Q: How do you effectively engage employees to drive business value?
    A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it. Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employees' motivations. Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or the frequency of innovation created. Providing employees performing "knowledge work" with capabilities that allow them to perform that work with a higher degree of accuracy, in the same or ideally less time, adds value for that individual and, in turn, drives their level of engagement to drive business value.

    Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be?
    A: Alignment through shared purpose. Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement, and it permeates every aspect of their organization, embodied externally through their customer experience. I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail, down to the green packaging, that they have an amazingly strong shared purpose. The Zappos.com 'About page' outlines their "Family Core Values", the first three being "Deliver WOW Through Service", "Embrace and Drive Change" and "Create Fun and A Little Weirdness", all reflected externally in my interaction with them. Strong shared purpose enables a higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share.

    Q: Have you seen any trends in the market regarding employee engagement?
    A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure, like email or instant messaging. Originally offered as standalone tools, these capabilities now show their value when offered in an integrated fashion, in the context of business entities. An emerging area of focus is employee activity related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter). Companies have reached a formal understanding that this now-established communication medium requires strategies allowing employees to engage. I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before first meeting them through internal channels.

    Q: Employee engagement is important, but what about engaging customers and partners?
    A: Over the last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service. From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets. Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate. As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors. This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before the increased capability of engagement mechanisms.

    Q: What types of solutions are available to successfully deliver employee engagement?
    A: Solutions enabling higher levels of engagement do so on the basis of relevancy. This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies. These technologies can help deliver an experience tailored to a given role or process within an organization, applying equally to structured and unstructured work, in forms as simple as an online employee directory search or knowledge communities supported by social collaboration, as well as richer business intelligence dashboards and portals.

    Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick.

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011, "Redefining Leadership", held from Nov 2nd to Nov 4th in San Francisco with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gathering to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference:

    1. The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The "BSR GlobeScan State of Sustainable Business Poll 2011", a survey of nearly 500 business leaders from 300 member companies, shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years, but they consider integrating sustainability into core business functions the key challenge. It is still difficult for many companies committed to the sustainability agenda to find investors that understand the long-term implications, and as Al Gore said, "Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market."

    2. Companies are required to address increasing compliance requirements and transparency in their supply chains, especially in relation to conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor the upstream sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue, companies need to collaborate and partner with peer companies in their industry, as well as in other industries, to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs' (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public can place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance.

    3. A new standard for reporting on supply chain greenhouse gas emissions is available. The new "Scope 3" Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company's products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn't just for leading companies but a necessity for any business.

    4. Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. investors are increasingly including ESG data in their analyses. This trend will likely grow as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment.

    5. Software companies are offering an increasing variety of solutions to help drive change and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle, was a panelist in the "Trends in Sustainability Software" session and commented: "How we think about our business decisions really comes down to how we think about cost. And as long as we don't assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines 'Is this product profitable?', we would then have a much better decision."

    For more information on BSR, visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference.

    By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities tied to the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word that smelled of "high cost" and an "overcomplicated process". Since they had no preconceptions from the old TFS versions ('05 & '08), it was fun explaining how simple it is to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate the current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.

    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later, or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.

    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive, and MSDN has a ton of lesson-based labs and videos.

    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (formerly known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.

    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally, everyone will want workitems to help them organize the chaos!

    6. Management will see limited value in the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:
    Work Item Tracking and Project Management - A workitem represents the unit of work within the system and enables tracking of all activities produced by a user, whether a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". Workitems drive the work effort of the individual assigned to them and play a key role in defining what needs to be done; they are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce": with every test run, attachments are available that include the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and, in some cases where Lab Manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done to the code by any developer has become much easier to picture, which makes issues easier to resolve.

    Team Build - Automate the compilation process, whether you need a build whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or manually for deployment into production. Each build can run through predetermined tests, perform code analysis to see if the developer conforms to team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard offering insight into the project(s): "where are we" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:

    * More easily absorb shocks due to various forms of financial-economic stress
    * Improve risk management and governance
    * Enhance regulatory reporting and transparency

    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry. Some of the key features of Basel III include:

    * A stronger capital base: more stringent capital standards, higher capital requirements, and the introduction of capital buffers
    * Additional risk coverage: enhanced quantification of counterparty credit risk, credit valuation adjustments, wrong-way risk, and an asset value correlation multiplier for large financial institutions
    * Liquidity management and monitoring: introduction of a leverage ratio
    * Even more rigorous data requirements

    To implement these features, banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models and compliance.

    Data challenges:

    * Data quality - all standard dimensions of data quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
    * Data lineage - data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation outdated quickly.
    * Data dictionaries - a strong and clean business glossary is needed, with proper identification of business owners for the data.
    * Data integrity - a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    * Data relevance/coverage - data must be relevant to all portfolios, and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.

    Model challenges:

    * Model development - requires highly trained resources with both quantitative and subject matter expertise.
    * Model validation - all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    * Model documentation - all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    * Risk and finance integration - this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management, and they need to somehow be equal. This is tricky at best from an implementation perspective.

    Compliance challenges:

    * Rules interpretation - some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    * Gap identification and remediation - internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
    * Qualification readiness - providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.

    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider: How will Basel III play into their risk appetite? How will they create project plans for Basel III when they haven't yet finished implementing Basel II? How will new regulations impact capital structure, including profitability and capital distributions to shareholders? After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:

    * Increases consumer confidence, which in turn supports banking activity
    * Ensures that adequate funding is available for individuals and companies
    * Puts regulators at ease, allowing bankers to focus on banking

    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits highlighted above. Do share with me how your institution is coping with and embracing these new regulations.

    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • PASS: Board Q&amp;A at the Summit

    - by Bill Graziano
    The last two years we’ve put the Board in front of the members and taken questions. We’re going to do that again this year. It will be in Room 307/308 from 12:15 to 1:30 on Friday.

    Yes, this time overlaps with the Birds of a Feather Lunch and the start of the afternoon sessions – but only partially. You can attend the Q&A and still get to parts of both of those. There just isn’t a great time to do this; every slot overlaps with something. We can’t do it after the last session on Friday. We can’t fit it between the last session and the evening events on Wednesday or Thursday. We had some discussion around breakfast time, but I didn’t think that was realistic. This is the least bad time we could come up with.

    Last year we had 60-70 people attend. These are the specific items from that session that I could work on:

    - The first question was whether to increase transparency around individual votes of Board members. We approved this at the Board meeting the following day. The only caveat is that if the Board is given confidential information as a basis for a vote, then we may not be able to disclose individual votes. Putting a Director in a position where they can’t publicly defend the reason for their vote is a difficult situation. Thanks Kendal!
    - Can we have a Board member discretionary fund? As background, I took a couple of people to lunch so we could have a quiet place to talk. I bought lunch but wasn’t able to expense it back to PASS. We just don’t have a budget item for things like this. I think we should, and I would guess the entire Board would like it also. It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget. I’d like to see it added back in, but we’ll have to see.
    - I know there were comments about the elections. At that point we had created the Election Review Committee. I’ve already written at length about this process.
    - Where does IT work go? PASS started publishing our internal management reports in December 2010. You can find them on our Governance page. These aren’t filtered at all and include a variety of information about IT projects. The most recent update had roughly a page of updates related to IT. Much of the work was related to the Summit and the Orator tool that we use to manage speaker submissions.
    - There were numerous requests that Tina Turner not be repeated. Done. I don’t think we’ll do anything quite like that again.
    - We had a request for a payment plan for the Summit. We looked into this briefly but didn’t take any action. We didn’t think the effort was worth the small number of people that would use it. If you disagree, submit this on our Summit Feedback site and get some votes.
    - There were lots of suggestions around the first-timers events – especially from first-timers. You can find all our current activities related to first-timers on the First Timers page on the Summit web site, plus links to 34 (!) blog posts with suggestions for first-timers. And a big THANK YOU to Confio and Red Gate for sponsoring this.

    I hope you get the chance to attend. These events are very helpful to me as a Board member. I like being able to look around the room as comments are being made and see the audience reaction. It helps me gauge the interest in an idea.

    I’d also like to direct you to the Summit Feedback site, where you can submit and vote on ideas to make the Summit a better experience. As of right now the suggestions from last year are still up; we may reset these prior to the Summit, though.

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example of this is General Electric's acquisitions and divestiture strategy (plan) designed to propel GE to the number 1 or 2 place (goal) in every business segment it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case, execution would be the steps followed for mergers/acquisitions or divestitures.

    The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best: "vision without execution is hallucination." Conversely, it can be said that "execution without vision" may well be "wishful thinking." Research overwhelmingly points towards a wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap.

    Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include:

    - Lack of clearly defined goals
    - Lack of a consistent measure of success
    - Lack of ownership
    - Lack of alignment
    - Lack of communication
    - Lack of proper execution
    - Lack of monitoring

    There are multiple approaches to solving the problem, including organizational development practices, technology enablement, etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology.

    Imagine an integrated, closed-loop technology platform that automates the entire management cycle: from defining strategy, to assigning ownership, communicating goals, achieving alignment, collaborating, taking action, monitoring progress and making mid-course corrections. For the best ROI and lowest TCO, such a system should also have characteristics like:

    - Complete: full functionality, rich end-user access
    - Open: any data source, any business application, any technology stack
    - Integrated: common metadata, common security, common system management

    From a capabilities perspective, the system should provide the following:

    - Define: strategy, objectives, ownership, KPIs
    - Communicate: pervasive, collaborative, role-based, secure
    - Execute: integrated, intuitive, secure, ubiquitous
    - Monitor: multiple styles and formats, exception-based, push and pull

    Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best-in-class BI platform is based on an architecturally integrated technology foundation that provides a unified end-user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.
    The BI platform is:

    - Complete - it delivers all modes and styles of BI, including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards, with a rich end-user experience that includes visualization, collaboration, alerts and notifications, search and mobile access.
    - Open - the BI platform integrates with any data source, ETL tool, business application, application server, security infrastructure or portal technology, as well as any ODBC-compliant third-party analytical tool. The suite accesses data from multiple heterogeneous sources, including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP.
    - Integrated - the BI platform is based on an architecturally integrated technology foundation built on an open, standards-based service-oriented architecture. The platform features a common enterprise information model, a common security model and a common configuration, deployment and systems management framework.

    To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and make course corrections, all from a single interface and a single system. The platform's integrated metadata model and task-based design ensure that the entire workflow, from defining strategy to execution to monitoring, is completely integrated, delivering end-to-end visibility, transparency and agility. Click here to learn more about Oracle BI 11g.

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I create a subvol and snapshot. I mount the snapshot and start copying 8GB of data to it. It gets to around 1GB and the Desktop disappears, and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it. It basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the Desktop, but the entire BTrFS portion of the OS is hung and non-responsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on natty, which is still alpha.

    More granular details in case I'm an idiot:

    1) Create the FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3) Create the subvol: btrfs s c /btrfs/vm
    4) Create the initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
    5) Unmount /btrfs and mount the subvolume: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6) Copy data to the subvolume.
    7) Balance data across drives (optional): btrfs f bal <path> (never get to this step 7...)

    Am I doing something wrong?
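    For anyone trying to reproduce this, the steps above spelled out as full commands might look like the sketch below. The device names, mount points and snapshot name are taken from the question; the long-form subcommands are assumptions based on the btrfs-progs shorthands ("s c" = "subvolume create", "s sn" = "subvolume snapshot", "f bal" = "filesystem balance"), and the snapshot source argument is a guess, since the question omits it:

        # Hypothetical reconstruction of the reported steps
        sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
        sudo mkdir -p /btrfs
        sudo mount -t btrfs /dev/sda2 /btrfs
        sudo btrfs subvolume create /btrfs/vm
        sudo btrfs subvolume snapshot /btrfs/vm /btrfs/cantremember.snap.something
        sudo umount /btrfs
        sudo mkdir -p /btrfs/vm
        sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
        # copy the 8GB of data into /btrfs/vm; the crash reportedly occurs here,
        # so the final balance step is never reached:
        sudo btrfs filesystem balance /btrfs/vm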

    Read the article

  • Quad channel memory and compatibility

    - by balteo
    My motherboard supports quad-channel memory. There are 8 memory slots in all: 4 slots are black and the other 4 slots are white. I currently have 4 memory modules of 1 GB each in the 4 white slots. That leaves me with 4 free memory slots. My question is: can I put 4 memory modules of 2 GB each in the 4 remaining slots, or do I have to use 1 GB modules throughout? FYI here is the output of lshw:

    alpha
        description: Tower Computer
        product: Precision WorkStation 690
      *-cpu:0
           description: CPU
           product: Intel(R) Xeon(R) CPU X5355 @ 2.66GHz
      *-memory
           description: System Memory
           physical id: 1000
           slot: Motherboard
           size: 4GiB
         *-bank:0
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns)
              product: HYMP512F72CP8N3-Y5
              vendor: Hynix Semiconductor (Hyundai Electronics)
              physical id: 0
              serial: 56737501
              slot: DIMM 1
              size: 1GiB
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:1
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns)
              product: HYMP512F72CP8N3-Y5
              vendor: Hynix Semiconductor (Hyundai Electronics)
              physical id: 1
              serial: 48115124
              slot: DIMM 2
              size: 1GiB
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:2
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns)
              product: HYMP512F72CP8N3-Y5
              vendor: Hynix Semiconductor (Hyundai Electronics)
              physical id: 2
              serial: 48115523
              slot: DIMM 3
              size: 1GiB
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:3
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns)
              product: HYMP512F72CP8N3-Y5
              vendor: Hynix Semiconductor (Hyundai Electronics)
              physical id: 3
              serial: 48115424
              slot: DIMM 4
              size: 1GiB
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:4
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty]
              vendor: FFFFFFFFFFFF
              physical id: 4
              serial: FFFFFFFF
              slot: DIMM 5
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:5
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty]
              vendor: FFFFFFFFFFFF
              physical id: 5
              serial: FFFFFFFF
              slot: DIMM 6
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:6
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty]
              vendor: FFFFFFFFFFFF
              physical id: 6
              serial: FFFFFFFF
              slot: DIMM 7
              width: 64 bits
              clock: 667MHz (1.5ns)
         *-bank:7
              description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty]
              vendor: FFFFFFFFFFFF
              physical id: 7
              serial: FFFFFFFF
              slot: DIMM 8
              width: 64 bits
              clock: 667MHz (1.5ns)
      *-pci:0
           description: Host bridge
           product: 5000X Chipset Memory Controller Hub
           vendor: Intel Corporation
           physical id: 100
           bus info: pci@0000:00:00.0
           version: 12
           width: 32 bits
           clock: 33MHz
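    Not an answer to the compatibility question itself, but before buying modules it may help to compare the lshw output against what the board's SMBIOS tables report. A minimal sketch (dmidecode is a standard Linux tool; run it as root, and treat the output as what the BIOS claims rather than a guarantee of what will boot):

        # Physical Memory Array: reports "Maximum Capacity" and "Number Of Devices"
        sudo dmidecode -t 16
        # Memory Device: per-slot size, form factor and speed for each DIMM slot
        sudo dmidecode -t 17

    If the reported maximum capacity comfortably exceeds 12 GiB, mixing the existing four 1 GB FB-DIMMs with four 2 GB FB-DIMMs is at least within the board's stated limit; the vendor's memory population rules for matched pairs still apply.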

    Read the article

  • Is there a way to change the string format for an existing CSR "Country Code" field from UTF8 to Printable String?

    - by Mike B
    CentOS 5.x

    The short version: Is there a way to change the encoding format for an existing CSR "Country Code" field from UTF8String to PrintableString?

    The long version: I've got a CSR generated from a product using the standard Java security providers (jsse/jce). Some of the information in the CSR uses UTF8 strings (which I understand is the preferred encoding requirement as of December 31, 2003 - RFC 3280). The certificate authority I'm submitting the CSR to explicitly requires the Country Code to be specified as a PrintableString, but my CSR has it listed as a UTF8 string.

    I went back to the latest RFC - http://www.ietf.org/rfc/rfc5280.txt. It seems to conflict specifically on countryName. Here's where it gets a little messy: the countryName is part of the relative DN. The relative DN is defined to be of type DirectoryString, which is defined as a choice of teletexString, printableString, universalString, utf8String, or bmpString. It also more specifically defines countryName as being either alpha (upper bound 2 bytes) or numeric (upper bound 3 bytes). Furthermore, in the appendix, it refers to the X520countryName, which is limited to a PrintableString of size 2. So it is clear why it doesn't work: the certificate authority and Sun/Java do not agree on their interpretation of the requirements for countryName.

    Is there anything I can do to modify the CSR to be compatible with the CA?
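    One pragmatic workaround - assuming the private key can be exported from the Java keystore - is to inspect and regenerate the request with OpenSSL, which special-cases countryName and emits it as a PrintableString irrespective of its string_mask setting. A sketch, with hypothetical file names and subject values:

        # Show how the country code is encoded in the existing CSR; the line
        # after "OBJECT :countryName" will read UTF8STRING or PRINTABLESTRING
        openssl asn1parse -in request.csr | grep -A 1 countryName

        # Regenerate the CSR from the same private key with an explicit subject
        openssl req -new -key server.key -out request-new.csr \
            -subj "/C=US/O=Example Corp/CN=example.com"

        # Confirm the new encoding before resubmitting to the CA
        openssl asn1parse -in request-new.csr | grep -A 1 countryName

    A CSR is signed over its contents, so the existing request can't simply be edited in place; any change to the encoding means re-encoding and re-signing, which is why regenerating from the key is usually the practical route.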

    Read the article
