Search Results


  • C#.NET vs VB.NET, Which language is better?

    Features: I cannot say any language is good or bad as long as its compiler can produce MSIL that runs under the .NET CLR. If someone says C# has more features, understand that those features belong to the C# compiler, not to .NET itself; if C# had a feature the CLR could not understand, the C# code would still have to be converted into something the CLR does understand eventually. That means new features are added to the C# compiler mainly to help developers write their code in a better way, so from the CLR's perspective there is no difference in the feature list between C# and VB.NET. Ease of writing code: I feel writing code in C# is easy because my background is C, C++ and Java, and the syntaxes are very similar. I assume most developers feel the same. Readability: Some people say VB.NET code is more readable for team members from a non-technical background, because its keywords are generally English words rather than special characters. Number of projects in the market: I assume 80 percent of the market uses C# for its .NET development; for example, my company has many .NET projects and all of them use C#. Productivity and experience: Though the feature list is the same, developers generally want to write code in the language they are familiar with, because it increases their productivity. Hope this helps you choose the language that suits you.

    Read the article

  • Need some advice regarding collision detection with the sprite changing its width and height

    - by Frank Scott
    So I'm messing around with collision detection in my tile-based game and everything works fine and dandy using this method. However, now I am trying to implement sprite sheets so my character can have a walking and jumping animation. For one, I'd like to be able to have each frame be a variable size, I think. I want collision detection to be accurate, and during a jumping animation the sprite's height will be shorter (because of the calves meeting the hamstrings). Again, this also works fine at the moment. I can get the character to animate properly each frame and cycle through animations. The problems arise when the width and height of the character change. Oftentimes its position will be corrected by the collision detection system and the character will be rubber-banded to random parts of the map or even go outside the map bounds. For some reason with the linked collision detection algorithm, when the width or height of the sprite is changed on the fly, the entire algorithm breaks down. The solution I found so far is to have a single width and height of the sprite that remains constant, and only adjust the source rectangle for drawing. However, I'm not sure exactly what to set as the sprite's constant bounding box because it varies so much with the different animations. So now I'm not sure what to do. I'm toying with the idea of pixel-perfect collision detection but I'm not sure if it would really be worth it. Does anyone know how Braid does their collision detection? My game is also a 2D sidescroller and I was quite impressed with how it was handled in that game. Thanks for reading.

    Read the article

  • Outstanding Silverlight User Group Meeting last night

    - by Dave Campbell
    We had a great Silverlight User Group Meeting in Phoenix last night! Before I go any farther I want to say thanks again to David Silverlight and Kim Schmidt for coming to talk to us! And not to forget Victor Gaudioso over the wire :) David, Kim, and Victor talked to us about the Silverlight User Group Starter Kit they are working on with an extended stellar list of talented developers. Don't bypass looking at this by thinking it's only for a User Group... this is a solid community-supported full-up application using MVVM and Ria Services that you could take and modify for your own use. Take a look at the list of developers. Chances are you know some of them... send them an email of thanks for all the hard work over the last year! David and Kim discussed the architecture and code, demonstrating features as they went. Then Victor took us through the application itself on a high-intensity live webcast from his home in California. The audience of about 15 seemed focused and interested, which says a lot about the subject and presentation. Tim Heuer came bearing some gifts (swag) ... a hard copy of Josh Smith's Advanced MVVM, and a couple of cheaply upgradeable copies of VS2008 Pro that were snatched up very quickly. We also gave away a few copies of Windows 7 Ultimate 64-bit, some Arc mice, and some Office 2007 disks... so I don't think anyone left empty-handed. Personal thanks from me go out to Mike Palermo and Tim Heuer for the surprise they had waiting for me that's been all over Twitter, and to Victor for only mentioning it at least 3 times in a 5-minute webcast. Thanks for a great evening, and I look forward to seeing all of you in a couple weeks at MIX10!

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it’s called Default
    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

    Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

    Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved, since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

    I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases…

    What event just fired?! How about now?! Now?!
    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
    This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.

    Avoid forgotten events by increasing your memory
    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

    Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple of executions. Use your best judgment.

    If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.

    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail
    A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
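    To make those knobs concrete, here is a minimal sketch of how the options fit into the DDL. The session name, event, and target below are placeholders I picked for illustration, not anything prescribed by this post – swap in whatever your session actually needs to collect:

    -- A hypothetical session: more buffer memory, a 5 second dispatch latency,
    -- and the default single-event-loss retention mode.
    CREATE EVENT SESSION [demo_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 8 MB,                              -- three buffers of roughly 2.7 MB instead of the 4 MB default
        MAX_DISPATCH_LATENCY = 5 SECONDS,               -- process buffers on your schedule rather than every 30 seconds
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS  -- the default; resist the urge to switch to NO_EVENT_LOSS
    );
    GO
    ALTER EVENT SESSION [demo_session] ON SERVER STATE = START;
    GO

    -- Is the running session dropping events? Check the counters mentioned above.
    SELECT name, dropped_event_count, largest_event_dropped_size
    FROM sys.dm_xe_sessions
    WHERE name = N'demo_session';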
    Partitioning: moving your events to a sub-division
    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

    Configuration       None        Node        CPU
    2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
    6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
    40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers

    Aside: Buffer size on multi-processor computers – As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

    Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

    Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
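    As a rough sketch of what a partitioned session might look like in the DDL (again, the session, event, and target names are placeholders and the numbers are only illustrative, not a recommendation):

    -- Hypothetical session for a large multi-CPU server: partition the buffers
    -- per CPU and give MAX_MEMORY enough room to be sub-divided that many ways.
    CREATE EVENT SESSION [busy_server_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 32 MB,               -- split across 2.5 * number_of_cpus buffers in this mode
        MEMORY_PARTITION_MODE = PER_CPU   -- PER_NODE and NONE are the other choices
    );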
    You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category.

    Hey buddy – I think you forgot something
    OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

  • help! corrupt file recovery

    - by TheBumpper
    My supervisor computer crashed last night, and I'm trying to help him out. He made an R script but when he tried to open it, it was empty. But for some reason the file is 7.9kb so it should not be empty i think... anyway when i tried to open it, Gedit gave this error: "The file you opened has some invalid characters. If you continue editing this file you could corrupt this document. You can also choose another character encoding and try again." and the options to encode the characters. It looked like this(with a red background): \00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\ My question is, is there a way to restore the file? i hope someone has a brilliant idea

    Read the article

  • LAMP Stack Versioning -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available on each single part of a platform stack, such as MySQL. You can find out the latest version, stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example, "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult to mitigate when you throw higher level platforms into the mix such as Drupal, Joomla, etc. I do consider "wizard"-like installers to be beneficial, such as the Bitnami installers. However, I always wonder if those solutions cater more to the least common denominator -- be all to many -- and as such, I think I'd be better off installing things on my own. Such solutions do seem kind of slow to adopt new versions, slower than necessary, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • Distributed Development Tools -- (Version control and Project Management)

    - by Macy Abbey
    Hello, I've recently become responsible for choosing which source control and project management software to use for the company that employs me. Currently it uses Jira (project management) and Subversion (version control). I know there are many other options out there -- the ones I know about are all in this article http://mashable.com/2010/07/14/distributed-developer-teams/ . I'm leaning towards recommending they just stay with what they have, as it seems workable and any change would have to be worth the cost of switching to, say, GitHub/Basecamp or some other solution. Some details on the team: It's a distributed development shop. Meetings of the whole team in one room are rare. It's currently a very small development team (three developers). The project management software is used by developers and a product manager or two. What are your experiences with version control and project management web applications? Are there any you would recommend and think are worth the switching cost of the time to learn new services / implement the change? Edit: After educating myself further on the options it appears DVCS offer powerful benefits that may be worth investing in now as opposed to later in the company's lifetime when the switching cost is higher: I'm a Subversion geek, why I should consider or not consider Mercurial or Git or any other DVCS?

    Read the article

  • PASS Summit book launch and meet the authors - Professional SQL Server 2012 Internals & Troubleshooting

    - by Christian
    I’m very pleased to announce that we’ll be officially launching our new book, Professional SQL Server 2012 Internals and Troubleshooting, at the PASS Summit in Seattle tomorrow. In partnership with our great friends at SQL Sentry we’ll have most of the authors at the SQL Sentry exhibitor stand from 12:30 on Thursday 8th November for a book signing event, which will give you a rare opportunity to meet with the authors and contributors, many of whom have flown in from around the world. SQL Sentry also have lots and lots of copies to give away for free so be sure to drop by their stand and ask about it! If you really can’t wait or run the risk of not getting a copy then the PASS bookstore has a few copies for sale but don’t expect them to be there for long! You can also order it from your favourite online retailer: amazon.com: http://amzn.to/U9IlPV barnesandnoble.com: http://bitly.com/Ux1gog amazon.co.uk: http://bitly.com/WBJ18l I’ll be writing a follow-up post very soon explaining why I think you should buy this book so look out for it!   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services

    Read the article

  • What Is A Software Architect&rsquo;s Job Today?

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/25/what-is-a-software-architectrsquos-job-today.aspx It was 2001 when a project manager first put my job title as architect on a statement of work.  A lot has changed over the last twelve years.  The concept of what an architect is has evolved.  In the early days I would have said that they just rebranded the role of the system analyst.  Now we have a multitude of architect titles: application, solution, IT, data, enterprise.  Whatever the title, the goals are the same.  An architect takes the business needs and maps them to the solutions that are needed, and at the same time works to ensure the quality of the solution and its maintainability. One of the problems I see these days is that we are expecting every developer to have architect skills.  That in itself is not a problem, and it reduces the need for dedicated architects.  Not every developer, though, is going to be able to step up to this level.  Some are just good at solving small problems instead of thinking in the larger abstract. Another problem is the accelerating speed and breadth of new technologies and products.  For an architect to be good at his job he needs to spend large amounts of personal time studying just to stay relevant. In the end I don’t think the main objectives of an architect have changed, just the level of commitment needed to stay of value to your company.  Renew your commitment to your profession and keep delivering great solutions. Technorati Tags: software architect,enterprise architect,data architect,solution architect,IT architect,PSC,PSC Group

    Read the article

  • What I&rsquo;m Reading &ndash; 2 &ndash; Microsoft Silverlight 4 Data and Services Cookbook

    - by Dave Campbell
    A while back I mentioned that I had a couple of books on my desktop that I’ve been “shooting holes” in … in other words, reading pieces that are interesting at the time, or looking something up rather than starting at the front and heading for the back. The book I want to mention today is Microsoft Silverlight 4 Data and Services Cookbook, by Gill Cleeren and Kevin Dockx. As opposed to the authors of the last book I reviewed, I don’t personally know Gill or Kevin, but I’ve blogged a lot of their articles… they are both prolific and on-topic writers. The ‘recipe’ style of the book shouldn’t put you off. It’s more of the way the chapters are laid out than anything else, and once you see one of them, you recognize the pattern. This is a great eBook to have around to open when you need to find something useful. As with the other PACKT book I talked about, I have the eBook, because for technical material, at least lately, I’ve gravitated toward that format. I can have it with me on a USB stick at work, or at home. Read the free chapter then check out their blogs. You may be surprised by some of the items you’ll find inside the covers. One such nugget is one I don’t think I’ve seen blogged:  “Converting Your Existing Applications to Use Silverlight”. Another good job! Technorati Tags: Silverlight 4

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.

    Read the article

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones avail them some of the e-commerce features I mentioned right there in the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR Codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer is getting the additional information they want, and Best Buy avoids the price comparisons. Of course if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

    Read the article

  • Cocos2d-x CCFollow Zooming issue

    - by blakey87
    Hi, I am currently building a cocos2d-x game which incorporates pinch zoom using the CCLayerPanZoom class, which can be found here. The problem is basically that when using CCFollow and zooming in and out, it doesn't zoom on the actually followed node; the camera appears to zoom towards the bottom left corner of the screen, when I would rather it zoom centrally on the followed node. If I could resolve this I would be pretty darn happy. I converted a fix from the cocos2d Objective-C version of the CCFollow class to cocos2d-x, which improved the issue, but if you look at the post in the latter link you will see the guy is having the exact same problem; he gave up on fixing it, sadly. I think it's close but I don't really know what's going on; hopefully someone out there has already faced and fixed this problem. My converted code is below.
    CCPoint p1 = ccpMult(m_obHalfScreenSize, m_pTarget->getScale());
    CCPoint p2 = ccpMult(m_pobFollowedNode->getPosition(), m_pTarget->getScale());
    CCPoint offect = ccpMult(ccpSub(p1, m_obHalfScreenSize), 0.5f);
    CCPoint tempPos = ccpAdd(ccpSub(p1, p2), offect);
    m_pTarget->setPosition(ccp(clampf(tempPos.x, m_fLeftBoundary, m_fRightBoundary), clampf(tempPos.y, m_fBottomBoundary, m_fTopBoundary)));
    I have attached before and after to hopefully make things more clear.

    Read the article

  • How would i down-sample a .wav file then reconstruct it using nyquist? - in matlab [closed]

    - by martin
    This is all done in MATLAB 2010. My objective is to show the results of undersampling, sampling at the Nyquist rate, and oversampling. First I need to downsample the .wav file to get an incomplete or partial data stream that I can then reconstruct. Here's the flow chart of what I'm going to be doing. So the flow is: analog signal - sampling analog filter - ADC - resample down - resample up - DAC - reconstruction analog filter. What needs to be achieved: F = frequency, in Hz (1/s), e.g. 100 Hz = 100 cycles/sec. Sampling period T (s) = 1/(2F). Example problem: 1000 Hz = highest frequency, so 1/(2*1000 Hz) = 1/2000 = 5x10^(-4) sec/cycle, or a sampling period of 0.5 ms. This is my first signal processing project using MATLAB. What I have so far:
    % Fs = frequency sampled (44100 Hz, or the sampling frequency of a CD)
    [test,fs]=wavread('test.wav'); % loads the .wav file
    left=test(:,1);
    % Plot of the .wav signal, time vs. strength
    time=(1/44100)*length(left);
    t=linspace(0,time,length(left));
    plot(t,left)
    xlabel('time (sec)');
    ylabel('relative signal strength')
    % this is where I would need to sample it at the different frequencies (below, at, and above the Nyquist frequency), I think
    soundsc(left,fs) % plays the resultant audio file, which is the same as the original (only at or above the Nyquist frequency however)
    Can anyone tell me how to make it better, and how to do the various sampling at different frequencies?

    Read the article

  • You Might Be a SharePoint Professional If&hellip;

    - by Mark Rackley
    I really think no explanation is needed. Hope this makes you smile.. Thanks again for being an awesome SharePoint community! If you can only dream about working an 8 hour day, there’s a good chance you are a SharePoint professional. You might be a SharePoint professional if the last time you heard “Old MacDonald Had a Farm” you wondered “How many web front ends does it have?” If you consider Twitter the best form of support since the dawn of the Internet, you might be a SharePoint professional. If you are giddy-as-a-school-girl excited about going to Anaheim in October and it has NOTHING to do with Disneyland, you might be a SharePoint professional. You might be a SharePoint professional if you own more SharePoint shirts than you do pairs of underwear. If you’ve thought of giving up a career in the IT world for a job taking orders at a fast food chain, you might be a SharePoint professional. You might be a SharePoint professional if the only people who understand the words that come out of your mouth are other SharePoint people. If you put the word “Share” or “SP” in front of EVERYTHING (ShareFood, SPRunner, etc… etc…) then you might be a SharePoint professional. You are probably a SharePoint professional if you love SharePoint.. you hate SharePoint… you love SharePoint… you hate SharePoint… If the only thing you’d rather do more than SharePoint is SharePint, then you are definitely a SharePoint professional. You might be a SharePoint professional if your idea of name dropping is “Andrew Connell says…” or “According to Todd Klindt”… or even “Well, when I was stuck in a Turkish prison with Joel Oleson…”

    Read the article

  • SQL Contest – Win 10 Amazon Gift Cards worth (USD 200) and 10 NuoDB T-Shirts

    - by Pinal Dave
    This month we have not yet run any contest, so we will be running a very interesting contest with the help of the good guys at NuoDB. NuoDB has just released version 2.0, and you can download NuoDB from here. NuoDB’s NewSQL distributed database is designed to be a single database that works across multiple servers, which can scale easily, and scale on demand. That’s one system that gives high connectivity but no latency, complexity or maintenance issues. MySQL works in some circumstances, but a period of growth isn’t one of them. So as a company moves forward, the MySQL database can’t keep pace. Data storage and data replication errors creep in. Soon the diaspora of the offices becomes a problem. Your telephone system isn’t just distributed, it is literally all over the place. You can read my detailed article about why VoIP service providers should think about NuoDB’s Geo Distribution. Here is the contest: Contest Part 1: NuoDB R2.0 delivered a long list of improvements and new features. List three of the major features of NuoDB 2.0. Here is the hint1, hint2, hint3. Contest Part 2: Download NuoDB using this link. Once you download NuoDB, leave a comment over here with the name of the platform and installer size. (For example: Windows platform, size abc.dd MB) Here is what you can win! Giveaways: 10 Amazon Gift Cards (each of USD 20 – total USD 200) and 10 amazing-looking NuoDB T-shirts (for the first 10 downloads). Rules: Participate before Oct 28, 2013. All the valid answers will be published after Oct 28, 2013, and winners will receive an email on Nov 1st, 2013. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

    Read the article

  • Are `break` and `continue` bad programming practices?

    - by Mikhail
    My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration: function verify(object) { if (object->value < 0) return false; if (object->value > object->max_value) return false; if (object->name == "") return false; ... } The point here is that first the function checks that the conditions are correct, then executes the actual functionality. IMO same applies with loops: while (primary_condition) { if (loop_count > 1000) break; if (time_exect > 3600) break; if (this->data == "undefined") continue; if (this->skip == true) continue; ... } I think this makes it easier to read & debug; but I also don't see a downside. Please comment.

    Read the article

  • And We’re Off

    - by Kristin Rose
    We are well into Oracle OpenWorld 2012, and what a couple days it has been! From days one and two of the Oracle PartnerNetwork Exchange Program to a jammin’ AfterDark reception atop the Metreon City View Terrace, and some major keynotes around Cloud to go with it, we think it’s safe to say we are off to a running start! With all the excitement buzzing around the floor, we couldn’t help but ask YOU, our partners, just what you’re looking forward to the most this week. Is it our Test Fest, or possibly our Social Media Rally Station at the OPN Lounge, or our 40+ general sessions? Whatever it is, we can’t wait to exceed your expectations! Watch this awesome video below to find out what some other OPN partners like you are talking about this week! See you on the Floor, The OPN Communications Team

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day..... So the background was that my development machine had a completely full hard disk which I needed to sort out.  Upon investigation I found the issue was that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL.  After a few days I decided to uninstall it and thought nothing more of it.  What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); there were still some assemblies left on my machine.  Every time SQL Server was running it was starting the Apex SQL Connection monitor, which was then running in the background and regularly recording information in the msdb database.  Over time it had recorded enough to fill the disk. The below article advises how to sort this out by removing it fully, so if you're having this problem then try this out: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one.  So for the Apex team I just wanted to pass on a thought: if I want to uninstall your product you should tell me if stuff is left on the machine, especially if a process will be left running which is going to fill my machine with useless data. </Whinge>
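    If you hit the same symptom, a quick way to see which tables are eating msdb is a size-by-table query along these lines (just a sketch using the standard catalog views; nothing here is specific to Apex SQL):

    -- Rough sketch: list the largest tables in msdb by reserved space
    USE msdb;
    GO
    SELECT TOP (10)
        t.name                          AS table_name,
        SUM(a.total_pages) * 8 / 1024.0 AS reserved_mb
    FROM sys.tables           AS t
    JOIN sys.partitions       AS p ON p.object_id = t.object_id
    JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
    GROUP BY t.name
    ORDER BY reserved_mb DESC;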

    Read the article

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution: "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

    Read the article

  • Antenna Aligner Part 8: It’s Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the App Store and… “Waiting for review”… fast forward 7 days and much checking of emails later… WOO HOO! Now what? So I set my Facebook page to go live https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my Facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new Facebook timeline layout is just confusing, and I'm not sure it's all public, I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions… Oh and watch out for the Android version coming soon!

    Read the article

  • flickr, other account types not appearing in online-accounts

    - by Fen
    Using Shotwell, I discovered that to publish to Flickr I need to set up an online account. But the online-accounts system settings only has support for Google, Facebook, Windows Live, Microsoft Exchange and Enterprise Login (Kerberos). How do I add account types? These appear to be properly installed (dpkg-reconfigure returns silently): gnome-control-center-signon is already the newest version. account-plugin-yahoo is already the newest version. account-plugin-flickr is already the newest version. Here's the config file (I think): > cat /usr/share/applications/gnome-online-accounts-panel.desktop [Desktop Entry] Name=Online Accounts Comment=Manage online accounts Exec=gnome-control-center online-accounts Icon=goa-panel Terminal=false Type=Application StartupNotify=true Categories=GNOME;GTK;Settings;DesktopSettings;X-GNOME-Settings-Panel;X-GNOME-PersonalSettings; OnlyShowIn=GNOME;XFCE X-GNOME-Bugzilla-Bugzilla=GNOME X-GNOME-Bugzilla-Product=gnome-control-center X-GNOME-Bugzilla-Component=Online Accounts X-GNOME-Bugzilla-Version=3.4.2 X-GNOME-Settings-Panel=online-accounts # Translators: those are keywords for the online-accounts control-center panel Keywords=Google;Facebook;Flickr;Twitter;Yahoo;Web;Online;Chat;Calendar;Mail;Contact; X-Ubuntu-Gettext-Domain=gnome-control-center-2.0 History: Started out with Ubuntu (64-bit), then in 12.04 installed xubuntu-desktop and have been using that. Upgraded to 12.10.

    Read the article

  • 5 ways the Exceptional DBA Award could boost your career

    - by Rebecca Amos
    Winning the Exceptional DBA Award won’t just get you full conference registration for the PASS Summit – it could also change your life and career. With a little help from our past winners, here are the top 5 ways the Exceptional DBA Award could take your career to the next level: 1. Recognition from your peers As 2009 winner Josef Richberg says, “Being recognized by your peers is the highest honor one can receive.” Whether you enter yourself, or are nominated by a friend or colleague, the fact that the winner is selected by the SQL Server community is a great chance for your peers to recognize your achievements as a DBA. 2. Boost your CV Winning the Exceptional DBA Award not only shows that you excel as a DBA, but that SQL Server experts think so too – a huge vote of confidence for any prospective employer. 2008 winner Dan McClain agrees, “It brings another level of 'wow' to my resume”. 3. Networking opportunities within the community Whether you want to increase your experience as a writer, speaker or blogger, winning the Exceptional DBA Award can open up new opportunities within the SQL Server community. Plus you’ll make new friends along the way, as Josef has discovered: “It is an unbelievable community that has become an extended family.” 4. Award ceremony at the world's largest technical SQL Server conference The Exceptional DBA Award is presented at the PASS Summit, giving you great networking opportunities and a chance to be seen by people throughout the SQL Server community. 5. Increased personal confidence Finally, the Exceptional DBA Award should give a huge boost to your personal confidence. Last year’s winner, Tracy Hamlin has certainly found this: “The recognition has given me new confidence and the drive to accomplish even loftier goals.” Read the full interview with our past winners to find out how why they’re encouraging you to enter this year’s Exceptional DBA Awards. Already inspired? Then why not get started on your entry straightaway: www.exceptionaldba.com

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that.
Sometimes, though, having these files available can lead to some incredible learning experiences.  For me the most recent one came after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) noted that this new element was not currently documented, and he tried a few more examples to see what other reasons could be generated.
Since I'd already run Strings.exe on the SQL Server DLL and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing.  I especially like the ones about cursors (more ammo!) and am curious about the PDW compilation and Cloud DB replication reasons.
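By the way, you don't have to take my word for it (or crack open sqlmin.dll yourself) to see NonParallelPlanReason. Here's a rough sketch that digs the attribute out of your own plan cache; the only assumptions are that you have VIEW SERVER STATE, that you've already run something hinted with OPTION (MAXDOP 1) which the optimizer would otherwise have parallelized (a trivial query may not show the attribute at all), and the usual showplan XML namespace.

  -- Sketch only: find cached plans that carry the NonParallelPlanReason attribute.
  -- Needs VIEW SERVER STATE; run a MAXDOP 1 query worth parallelizing first.
  ;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
  SELECT st.text AS query_text,
         qp.query_plan.value('(//QueryPlan/@NonParallelPlanReason)[1]', 'varchar(200)') AS non_parallel_reason
  FROM sys.dm_exec_cached_plans AS cp
  CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
  CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
  WHERE qp.query_plan.exist('//QueryPlan[@NonParallelPlanReason]') = 1;

Nothing exotic here: it's the same plan-cache DMVs you already use, just pointed at an attribute Books Online hasn't caught up with yet.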
One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek stating that "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found:

  hekaton_slow_param_passing
  Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
  The reason why Hekaton parameter passing code took the slow code path
  hekaton_slow_param_pass_reason
  sp_deploy_hekaton_database
  sp_undeploy_hekaton_database
  sp_drop_hekaton_database
  sp_checkpoint_hekaton_database
  sp_restore_hekaton_database
  e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
  e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
  e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
  e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
  e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

Interesting!  The first four entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)?
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

  SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
  SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

Which revealed:

  name
  ------------------------
  (0 row(s) affected)

  name
  ------------------------
  sp_createstats
  sp_recompile
  sp_updatestats
  (3 row(s) affected)

Hmm.  Well, that didn't find much.  It looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.)
OK, so let's check out those three procedures and see what they reveal when I search for "Hekaton":
sp_createstats:

  -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
  -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

OK, that's interesting; let's keep looking a little further:

  ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

Wellllll, that tells us a few new things:

  - There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
  - They are not standard user tables and probably not in memory (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
  - The OBJECTPROPERTY function has an undocumented TableIsInMemory option

Let's check out sp_recompile:

  -- (3) Must not be a Hekaton procedure.

And once again go a little further:

  if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
      ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
      ObjectProperty(@objid, 'IsView') = 0 AND
      -- Hekaton procedure cannot be recompiled
      -- Make them go through schema version bumping branch, which will fail
      ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

And now we learn that hekaton procedures also exist, that they can't be recompiled, that there's a "schema version bumping branch" somewhere, and that OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find the option returns NULL; I think it only works when called from a system object.)  This is neat!
Sadly, sp_updatestats doesn't reveal anything new; the comments about hekaton are the same as in sp_createstats.  But we've ALSO discovered undocumented options for the OBJECTPROPERTY function, which we can now search for:

  SELECT name, object_definition(OBJECT_ID)
  FROM sys.all_objects
  WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice!
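And if you want to poke at those two undocumented OBJECTPROPERTY options yourself, a throwaway sketch like the one below will do it. Fair warning: the option names come straight from the system procedure text, not from any documentation, so on a 2012 RTM box expect mostly 0s and NULLs, and don't build anything important on top of them.

  -- Sketch: probe the undocumented OBJECTPROPERTY options found above.
  -- These names were lifted from sp_createstats and sp_recompile, so results
  -- may be NULL everywhere outside system code; that's informative in itself.
  SELECT name,
         OBJECTPROPERTY(object_id, 'TableIsInMemory') AS TableIsInMemory
  FROM sys.tables;

  SELECT name,
         OBJECTPROPERTY(object_id, 'ExecIsCompiledProc') AS ExecIsCompiledProc
  FROM sys.procedures;

If those columns ever start coming back with something more interesting on a later build, well, now you know where to look.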
Now if you're a programmer/hacker, you've probably been drooling over the last five entries for hekaton (in green), because those are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.
While you won't get the actual source code from the compiled DLL and EXE files, you will get references to source files, debugging symbols, variable and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted, those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  As we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes its source directories and how various components (storage engine, query processor, Full-Text, AlwaysOn/HADR) are split into smaller modules.  There are over 2000 source file references, so go do some exploring!
So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code and learn even more about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.)
Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that; it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there:

  Attach Matrix Database
  DM_MATRIX_COMM_PIPELINES
  MATRIXXACTPARTICIPANTS
  dm_matrix_agents

Alright, enough of this paranoid ranting!  Microsoft is not really evil!  It's not like they're The Borg from Star Trek:

  ALTER FEDERATION DROP
  ALTER FEDERATION SPLIT
  DROP FEDERATION

#tsql2sday

    Read the article

  • Vision, Integration, Ability—Oracle is once again positioned as an E-Commerce Leader

    - by Jeri Kelley
    The new Gartner report is the fifth successive Magic Quadrant for E-Commerce to position Oracle as a leader. We’re proud of the result, but we’re not too surprised. Oracle Commerce’s functionality is uniquely aligned with a number of the major market trends Gartner describes in its report: from customers ‘expecting a seamless buying experience across all channels’, to organizations seeking to consolidate ‘B2B and B2C applications with a single underlying platform’.
What we think sets Oracle Commerce apart
Why are we a leader? We believe the key strengths of Oracle Commerce include:
Outstanding Scalability and Versatility
Oracle has a long and enviable track record of delivering B2B and B2C e-commerce solutions, and the Oracle Commerce solution supports a broad range of vertical industries – from retail to telecom, and manufacturing to distribution. Additionally, Oracle Commerce is engineered to scale simply and quickly to meet the changing needs of the enterprise.
Oracle Integration
Our commitment to seamless solutions integration allows customers to get the most from our ever evolving range of e-commerce and CX products, and to deliver consistent, relevant, and personalized cross-channel buying experiences that drive customer satisfaction and boost revenue.
Experience and Vision
Oracle has a long and impressive history of delivering B2B and B2C e-commerce solutions to the world’s best brands. We’re constantly putting this experience to good use, and making our solutions even smarter. With powerful merchandising and business tools, and advanced promotions capabilities, Oracle Commerce is one of the most forward-thinking e-commerce solutions around.
Read the report
You can read Gartner’s full report here, or click here to find out more about our celebrated platform.

    Read the article
