Search Results

Search found 5920 results on 237 pages for 'hand drawn'.


  • Chock-full of Identity Customers at Oracle OpenWorld

    - by Tanu Sood
      Oracle OpenWorld (OOW) 2012 kicks off this coming Sunday. Oracle OpenWorld is known to bring in Oracle customers, organizations big and small, from all over the world. And Identity Management is no exception. If you are looking to catch up with Oracle Identity Management customers, hear first-hand about their implementation experiences and discuss industry trends, business drivers, solutions and more at OOW, here are some sessions we recommend you attend:

      Monday, October 1, 2012

      CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Subject matter experts from Kaiser Permanente and SuperValu share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management are helping customers address emerging requirements for securely enabling cloud, social and mobile environments.

      CON9492: Simplifying Your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return on investment and maximum efficiency.

      CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine-grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management and what a complete solution looks like, complemented by real-world customer case studies from ETS, Kaiser Permanente and TURKCELL, and product demonstrations.

      Tuesday, October 2, 2012

      CON9437: Mobile Access Management 10:15 a.m. – 11:15 a.m., Moscone West 3022 With more than 5 billion mobile devices on the planet and an increasing number of users bringing their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

      CON9491: Enhancing the End-User Experience with Oracle Identity Governance Applications 11:45 a.m. – 12:45 p.m., Moscone West 3008 As organizations seek to encourage more and more user self-service, business users are now primary end users for identity management installations. Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices.

      CON9447: Enabling Access for Hundreds of Millions of Users 1:15 p.m. – 2:15 p.m., Moscone West 3008 Dealing with scale problems? Looking to address identity management requirements with a million or so users in mind? Then take note of Cisco’s implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance.

      CON9465: Next Generation Directory – Oracle Unified Directory 5:00 p.m. – 6:00 p.m., Moscone West 3008 Get a 360-degree perspective from a solution provider, an implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet deliver on scale and performance.

      Wednesday, October 3, 2012

      CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

      CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? How do you externalize such entitlements to allow dynamic changes without having to touch the application code? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter.

      CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

      CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.

      CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

      CON9625: Taking Control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.

      Thursday, October 4, 2012

      CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform 2:15 p.m. – 3:15 p.m., Moscone West 3008 Oracle Enterprise Identity Management solutions are designed to secure access and simplify compliance for Oracle Applications. Whether you are an EBS customer looking to upgrade from Oracle Single Sign-On or a Fusion Applications customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment.

      And here’s the complete listing of all the Identity Management sessions at Oracle OpenWorld.

    Read the article

  • A small, intra-app Object to String Serializer

    - by Rick Strahl
    On a few occasions I've needed a very compact serializer for small, simple, flat object serialization, typically for storage in Cookies or a FormsAuthentication ticket in ASP.NET. XML and JSON serialization are too verbose for those scenarios, so a simple property serializer that strings together the values was needed. Originally I did this by hand, but here is a class that automates the process.
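    For illustration only, here is a minimal Python sketch of that idea (Rick Strahl's actual class is C#/.NET and also handles type conversion and safe encoding; every name below is made up for the example):

        DELIM = "|"

        def serialize(obj, fields):
            # join the raw property values; assumes none of them contain the delimiter
            return DELIM.join(str(getattr(obj, f, "")) for f in fields)

        def deserialize(text, fields, factory):
            # split the string back into values and hand them to a constructor in field order
            return factory(**dict(zip(fields, text.split(DELIM))))

        # usage sketch
        from types import SimpleNamespace
        user = SimpleNamespace(name="rick", role="admin", visits=3)
        ticket = serialize(user, ["name", "role", "visits"])            # "rick|admin|3"
        back = deserialize(ticket, ["name", "role", "visits"], SimpleNamespace)

    The compactness comes from dropping the property names entirely and relying on a fixed field order, which is exactly why this only suits small, flat objects such as a cookie or ticket payload.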

    Read the article

  • Starting this week: Dublin, Maidenhead, and London

    - by KKline
    This might be the most overcommitted four-week period of time ever in my life. I’m tired just thinking about it! Not only am I traveling internationally and speaking over the next few weeks, I’m also helping on two book projects, learning some new applications from Quest Software, and helping on a small Transact-SQL refactoring project. Swag on hand? I’ve got a special printing of 500 video training DVDs for this trip: SQL Server Training on DMVs, and Performance Monitor and Wait Events. Plus, I’ll have...(read more)

    Read the article

  • Setting up OpenGL camera with off-center perspective

    - by user5484
    Hi, I'm using OpenGL ES (in iOS) and am struggling with setting up a viewport with an off-center distance point. Consider a game where you have a character on the left-hand side of the screen, and some controls alpha'd over the left-hand side. The "main" part of the screen is on the right, but you still want to show what's in the view on the left. However, when the character moves "forward" you want the character to appear to be going "straight", or "up" on the device, and not heading at an angle towards the point that is geographically at the mid-x position of the screen. Here's the gist of how I set my viewport up where it is centered in the middle:

        // setup the camera
        //
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        const GLfloat zNear = 0.1;
        const GLfloat zFar = 1000.0;
        const GLfloat fieldOfView = 90.0; // can definitely adjust this to see more/less of the scene
        GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
        CGRect rect;
        rect.origin = CGPointMake(0.0, 0.0);
        rect.size = CGSizeMake(backingWidth, backingHeight);
        glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
        glMatrixMode(GL_MODELVIEW);

        // rotate the whole scene by the tilt to face down on the dude
        const float tilt = 0.3f;
        const float yscale = 0.8f;
        const float zscale = -4.0f;
        glTranslatef(0.0, yscale, zscale);
        const int rotationMinDegree = 0;
        const int rotationMaxDegree = 180;
        glRotatef(tilt * (rotationMaxDegree - rotationMinDegree) / 2, 1.0f, 0.0f, 0.0f);
        glTranslatef(0, -yscale, -zscale);
        static float b = -25; //0;
        static float c = 0;

        // rotate to face in the direction of the dude
        float a = RADIANS_TO_DEGREES(-atan2f(-gCamera.orientation.x, -gCamera.orientation.z));
        glRotatef(a, 0.0, 1.0, 0.0);

        // and move to where it is
        glTranslatef(-gCamera.pos.x, -gCamera.pos.y, -gCamera.pos.z);

        // draw the rest of the scene ...

    I've tried a variety of things to make it appear as though "the dude" is off to the right:
    - do a translate after the frustum in the x direction
    - do a rotation after the frustum about the up/y-axis
    - move the camera with a biased lean to the left of the dude
    Nothing I do seems to produce good results; the dude will either look like he's stuck at an angle, or the whole scene will appear tilted. I'm no OpenGL expert, so I'm hoping someone can suggest some ideas or tricks on how to "off-center" these model views in OpenGL. Thanks!
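    Not an authoritative answer, but one approach worth trying is to make the frustum itself asymmetric instead of rotating or translating after the projection: shift the left and right clipping planes by the same amount so the projection centre lands on the character's screen position. A rough Python sketch of the arithmetic (variable names follow the question; center_x is assumed to be the pixel column where the dude should sit, and the sign convention may need flipping for a particular setup):

        import math

        def off_center_frustum(field_of_view, z_near, width, height, center_x):
            # symmetric half-width at the near plane, as in the question
            size = z_near * math.tan(math.radians(field_of_view) / 2.0)
            aspect = width / height
            # where the character should sit on screen, in normalized device coords (-1..1)
            ndc_x = 2.0 * center_x / width - 1.0
            # sliding both planes by the same offset moves the projection centre
            shift = -ndc_x * size
            left, right = -size + shift, size + shift
            bottom, top = -size / aspect, size / aspect
            return left, right, bottom, top

        # e.g. a 480x320 view with the dude a quarter of the way across the screen
        print(off_center_frustum(90.0, 0.1, 480.0, 320.0, center_x=120.0))

    The four returned values would drop into glFrustumf(left, right, bottom, top, zNear, zFar) in place of the symmetric call above, leaving the model-view transforms untouched, so moving "forward" still reads as straight up the screen.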

    Read the article

  • SQL University: Parallelism Week - Part 2, Query Processing

    - by Adam Machanic
    Welcome back for the second part of Parallelism Week here at SQL University. Get your pencils ready, and make sure to raise your hand if you have a question. Last time we covered the necessary background material to help you understand how the SQL Server Operating System schedules its many active threads, and the differences between its behavior and that of the Windows operating system's scheduler. We also discussed some of the variations on the theme of parallel processing. Today we'll take a look...(read more)

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand, it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing. Even if you’re used to speaking English, you may be brought up short by this sort of verbiage… "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting" It takes some imagination to appreciate the added hassle in working out what it means, when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again. "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data" Not easy, eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means… "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data" Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn’t just the problem of writers who use long words to give added dignity to their prose. It is also the use of colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, ‘International English’ is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn and use it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • Recruitment Drive - Things Don't Always Go As Planned - Stay Flexible by Kalyan Neelagiri

    - by david.talamelli
    I am one of the Recruiters for Oracle and work in our India Recruitment Team. When we are hiring for multiple positions, we often hold Recruitment Events to interview a large number of people as effectively as possible. These events are often held on the weekend, as many people are not free to attend an all-day event during the working week. Just recently, during a recruitment campaign we were running, I was tasked with setting up a Recruitment Event for some roles we were hiring for. I have set up and run weekend recruitment events in the past which have all run smoothly. This time, however, arranging the event was quite a challenge for me. The planned event was taking place on a Saturday. I had almost sent out the confirmed schedule of candidates to the respective hiring team on Friday and was on track for the event to take place, but unfortunately there was breaking news in the media that a strike had been called in the city because of political agitation, with protests taking place on the event day. The hiring manager rushed to me asking for my thoughts and ideas. I was in two minds about what to do. On one hand, I was not ready to cancel the event because of all the work that so many people had put into getting it prepared, and I also did not want to reschedule the event at the last minute if I did not need to. On the other hand, I understood it might be best to reschedule the event, as people might not be able to attend because of the political protests taking place on the day. In the end I decided to gather and check other options, because the strike might cause confusion and make it a problem for the scheduled candidates to drive in to the venue. So we concluded we should reschedule, and moved the event to the next week. The good news is that we successfully executed the recruitment drive the following Saturday. We were glad that 100% of the candidates were able to make it to the new interview date, and despite all the agitation in the city we were successful in hiring people for all the roles we had open. Things do not always go as planned. The best-laid plans can sometimes come to nought because of external factors outside of our control. What this experience has taught me is that, rather than focus on the negatives when you are thrown a curveball, the best approach is to stay flexible and focus on finding ways to reach your outcome. Your plans may need to change, but you can still achieve the results you are after if you have the right mindset.

    Read the article

  • Sony Vaio VPCS111FM

    I recently got a Sony Vaio VPCS111FM/S and I have to say, this is the best laptop I have ever had in my life. I highly recommend it. Read more...

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase of the data that mankind is producing each day, and the struggle to capture the meaning of that data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it nearly impossible to process it in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which became one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, as someone who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as in the end it benefits the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of distributions of Hadoop is that IT professionals will have jobs.   BuzzNet Tags: Hadoop,Big Data,Enterprise IT

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
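    As an aside, a toy sketch of why this removes the ordering problem (plain Python, purely illustrative; the real machinery is dmake and makefiles, and the library names here are hypothetical): the build collapses into two embarrassingly parallel phases, with no dependency graph to maintain in either one.

        from concurrent.futures import ThreadPoolExecutor

        libraries = ["libc", "libnsl", "libsocket", "libkstat"]   # hypothetical subset

        def build_stub(lib):
            # a stub is produced from the mapfile alone, so it needs no other library
            return f"{lib}: stub built from mapfile"

        def build_real(lib):
            # the real object links against stubs, so sibling build order no longer matters
            return f"{lib}: real object linked against the stub proto"

        with ThreadPoolExecutor() as pool:
            print(list(pool.map(build_stub, libraries)))   # phase 1: all stubs in parallel
            print(list(pool.map(build_real, libraries)))   # phase 2: all real objects in parallel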
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
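    To make that prototype concrete, here is a rough Python rendering of the first script's job (the original was perl, and the machine-class tag and output format here are assumptions rather than the actual prototype): scan a mapfile for the stylized DATA comments shown above and emit C glue definitions of the matching size.

        import re
        import sys

        MACH = "amd64"   # hypothetical machine-class tag for the current build
        DATA_RE = re.compile(r'^#\s*DATA\(([^)]*)\)\s+(\w+)\s+(0x[0-9a-fA-F]+)')

        def glue_from_mapfile(path):
            """Emit C definitions sized to match the real object's data symbols."""
            out = []
            with open(path) as mapfile:
                for line in mapfile:
                    m = DATA_RE.match(line)
                    if m and MACH in m.group(1).split(","):
                        name, size = m.group(2), int(m.group(3), 16)
                        out.append(f"char {name}[{size}];")   # data stub of the real size
            return "\n".join(out)

        if __name__ == "__main__":
            print(glue_from_mapfile(sys.argv[1]))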
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is massive, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high-level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Security aspects of an ASP.NET application that can be pointed out to the client

    - by Maxim V. Pavlov
    I need to write several passages of text in an offer to the client about the security layer in an ASP.NET MVC web solution. I am aware of the security that comes along with MVC 3 and the improvements in MVC 4. But all of them are non-conceptual, except for AntiForgeryToken (AntiXSS) and the built-in SQL Injection immunity (with a little encoding needed by hand). What would be the main points of ASP.NET security I can "show off" in an offer to the client?

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but unlike before, it needs more information than the mere existence of a new set of values: it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
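    To make the distinction concrete, here is an illustrative Python sketch (not SQL Server internals): a streaming aggregate that emits each suit's count as soon as that group ends, versus a hash aggregate that cannot emit anything until every row has been consumed.

        from itertools import groupby
        from collections import Counter

        def stream_aggregate(sorted_rows, key):
            # non-blocking: each group's count is released as soon as the group changes
            for k, group in groupby(sorted_rows, key=key):
                yield k, sum(1 for _ in group)

        def hash_aggregate(rows, key):
            # blocking: nothing can be released until every row has been seen
            counts = Counter(key(r) for r in rows)
            yield from counts.items()

        hand = ["H", "H", "H", "H", "S", "S", "D", "C"]            # a hand already sorted by suit
        print(list(stream_aggregate(hand, key=lambda card: card)))
        print(list(hash_aggregate(reversed(hand), key=lambda card: card)))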
    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.
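    The dispatch logic for such a sometimes-blocking component might look like the following sketch (plain Python, not an actual SSIS Custom Component; the parameter names are invented): stream when the declared sort keys start with the grouping columns, otherwise fall back to the blocking hash path.

        from itertools import groupby
        from collections import Counter

        def aggregate_counts(rows, group_cols, sort_keys, key):
            # if the groups are guaranteed to arrive contiguously, we can stream
            if tuple(sort_keys[:len(group_cols)]) == tuple(group_cols):
                return ((k, sum(1 for _ in g)) for k, g in groupby(rows, key=key))   # non-blocking
            return iter(Counter(key(r) for r in rows).items())                       # blocking

        # e.g. grouping on ("suit",) while the input is declared sorted by ("suit", "rank")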
    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking vs non-blocking and synchronous vs asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
    As per my previous post, there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking vs non-blocking and synchronous vs asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.

    Read the article

  • How do I install Open vSwitch?

    - by Lorin Hochstein
    How do I install Open vSwitch on raring? I can't find any official Ubuntu docs on this anywhere. DevStack seems to do this:
    kernel_version=`cat /proc/version | cut -d " " -f3`
    apt-get install make fakeroot dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version
    On the other hand, this blog does this:
    apt-get install openvswitch-datapath-source openvswitch-common openvswitch-switch

    Read the article

  • Gimp for the kids: Debian Junior Art

    Ghacks: "If you’ve ever tried your hand at The GIMP, you know that, at first, The GIMP can be a bit challenging to learn. That is coming from an adult. Imagine a younger user attempting to use The GIMP."

    Read the article

  • Google I/O 2012 - Getting Started with Google+ History API [CONF]

    Speakers: Timothy Jordan, Daniel Dulitz. Google+ history presents new opportunities to increase traffic to your site and engagement with your content by allowing users to connect their Google profile to your site. This session will explore the value of Google+ history and review basic implementation. Special guests will be on hand to describe their early success with this new service. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase in the data that mankind produces each day, and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic shifting towards smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines, and the amount of data makes it near impossible to process in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play.

    I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, having not been involved in the intricacies of the open-source world, this is quite an interesting observation. On the one hand, it is good that there is competition, as it is ultimately beneficial to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world: there are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard, while others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.

    BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • 7 of the Best Free Linux Medical Imaging Software

    LinuxLinks: "Now, let's explore the 7 imaging software at hand. For each title we have compiled its own portal page, a full description with an in-depth analysis of its features, a screenshot of the software in action, together with links to relevant resources and reviews."

    Read the article

  • The First Annual Crappy Code Games

    - by Testas
    SQLBits announced some super-exciting news: a tie-up with our platinum sponsor, Fusion-io. Together we'll be running a series of events called "The Crappy Code Games", where SQL Server developers will compete to write the worst-performing code and win some very cool prizes, including:
    • Gold: a hands-on, high-performance flying day for two at Ultimate High, plus Fusion-io flight jackets
    • Silver: a one-day racing experience at Palmer Sports, where you will drive seven different high-performance cars
    • Bronze: a Pure Tech Racing 10-person package at PTR’s F1 racing facility, including FI tees, food and drinks
    …plus iPods, Windows Mobile phones, Xbox 360s, t-shirts and much more. There will be two qualifying events, in Manchester on March 17th and London on March 31st, and the third qualifier as well as the grand finale will be held on the evening of Thursday April 7th at SQLBits. And if that isn’t cool enough, Fusion-io's Chief Scientist Steve Wozniak (yes, that Steve Wozniak, tech industry legend and co-founder of Apple) will be on hand in Brighton to hand out the prizes! If you'd like to take part you'll need to register, and since places are limited we recommend you do so right away. For more details and to register, go to http://www.crappycodegames.com/
    The Games: in conjunction with SQLBits, dbA-thletes (that’s you) will compete head-to-head in one of three separate qualifying events to be held in Manchester, London and Brighton. Four separate SQL rounds make up the evening’s Games, and will challenge you to write code that pushes the boundaries of SQL performance. The four events are:
    • The High Jump: generate the highest I/Os per second
    • The 100m Dash: cumulative highest number of I/Os in 60 seconds
    • The SSIS-athon: load a one-billion-row fact table in the shortest time
    • The Marathon: generate the highest MB per second in 60 seconds

    Read the article

  • Character Stats and Power

    - by Stephen Furlani
    I'm making an RPG game system and I'm having a hard time deciding between detailed and abstract character statistics. These statistics define the character's natural – not learned – abilities. For example:
    Mass Effect: 0 (none that I can see)
    X20 (Xtreme Dungeon Mastery): 1 – "STAT"
    Diablo: 4 – "Strength, Magic, Dexterity, Vitality"
    Pendragon: 5 – "SIZ, STR, DEX, CON, APP"
    Dungeons & Dragons (3.x, 4e): 6 – "Str, Dex, Con, Wis, Int, Cha"
    Fallout 3: 7 – "S.P.E.C.I.A.L."
    RIFTS: 8 – "IQ, ME, MA, PS, PP, PE, PB, Spd"
    Warhammer Fantasy Roleplay (1st ed?): 12-ish – "WS, BS, S, T, Ag, Int, WP, Fel, A, Mag, IP, FP"
    HERO (5th ed): 14 – "Str, Dex, Con, Body, Int, Ego, Pre, Com, PD, ED, Spd, Rec, END, STUN"
    The more stats, the more complex and detailed your character becomes. This comes with a trade-off, however, because you usually have only limited resources with which to describe your character. D&D made this infamous with the whole min/max-ing thing, where strong characters were typically not also smart. But also, a character with a high Str typically also has high Con, Defenses, and Hit Points/Health; without high numbers in all those other stats, they might as well not be strong, since they wouldn't hold up well in hand-to-hand combat. So things like that force trade-offs within the category of strength. My original (now rejected) idea was to force players into deciding between offensive and defensive stats: Might/Body, Dexterity/Speed, Wit/Wisdom, Heart, Soul. But this left some stats without "opposites" (or with opposites that were hard to define). I'm leaning more towards the following: Body (Physical Prowess), Mind (Mental Prowess), Heart (Social Prowess), Soul (Spiritual Prowess). This will define a character with just 4 numbers. Everything else gets based off these numbers, which means they're pretty important. There won't, however, be ways of describing characters who are fast but not strong, or smart but absent-minded. Instead of defining the character with these numbers, players will detail their character by buying skills and powers like these: Quickness – add a +2 bonus to Body rolls when dodging (for a character that wants to be faster); or Body Building – add a +2 bonus to Body rolls when lifting, pushing, or throwing objects (for a big, tough character). [EDIT - removed subjectiveness] So my actual question is: what are some pitfalls of a small stat list combined with a large number of descriptive powers? Is this approach more difficult to port across platforms (pen & paper, PC), for example? Are there examples of this being done well or poorly? Thanks,

    Read the article

  • Moving sprites on a graph in libGDX

    - by nosferat
    In my game I'd like to move sprites along a fixed path. Until now I've been trying to stick to the tools libGDX already provides, like the Tiled map renderer classes, so I'm looking for a solution nearly as convenient as that – e.g. I'd like to avoid building an adjacency matrix by hand. Tiled has the functionality to add objects to the map, but I'm not sure if I can use it for this purpose. Any ideas?

    Read the article

  • Go to the parent directory in Files/Nautilus Ubuntu 12.10

    - by Piotr Nowicki
    In Ubuntu 12.10 (GNOME 3) they've removed the "go to parent directory" Backspace shortcut. I was very used to it... I've seen in source-code comments that they've removed this support and that there are at least three other ways of achieving the same thing. I wonder: what are the other ways besides Alt+Up? Basically, I'd like to find out how to re-enable the Backspace key for going to the parent directory, or at least learn a shortcut for doing it with one hand (Alt+Up is useless for that).

    Read the article

  • Microsoft MVP Award

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I learned over Easter that I have been awarded a BizTalk MVP by Microsoft for 2010, which is great news. It's all a bit shadowy, but I suspect Michael Stephenson had a hand in it – so thanks Mike.

    Read the article
