Search Results

Search found 12047 results on 482 pages for 'general debugging tidbits'.


  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by J Swaroop
    Re-posting Bruce Tierney's original post - albeit a day late: I reckon this is Oracle's most interesting launch this year. Enjoy! The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps April 1, 2012 Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousand of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls. Supporting Quotes “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets”, said Oracle representative Sabrina Twich, “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.” “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong” said the Director of Oracle Exaskeleton, “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don’t want to be in this stupid press release anyway” Releases An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs) since technologies vendors have used up almost all of the 17,576 TLA permutations (TLAPs). About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it. Trademarks Really…you’re still reading this? Cool! Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • SQL SERVER – Sharing your ETL Resources Across Applications with Ease

    - by pinaldave
    Frequently an organization will find that the same resources are used in multiple ETL applications, for example, the same database, general purpose processing logic, or file system locations.  Creating an easy way to reuse these resources across multiple applications would increase efficiency and reduce errors.  Moreover, not every ETL developer has the same skill set, and it is likely that one developer will be more adept at writing code while another is more comfortable configuring database connections.  Real productivity gains will come when these developers are able to work independently while still making their work available to others assigned to the same project.  These are the benefits of a centralized version control system. Of course, most version control systems could be used to store and serve files, but the real need is to store and serve entire ETL applications so that each developer’s ongoing work can immediately benefit from another developer’s completed work.  In other words, the version control system needs to be tightly integrated with the tools used to develop the ETL application. The following screen shot shows such a tool. Desktop ETL tool that tightly integrates with a central version control system Developers can checkout or commit entire projects or just a single artifact.  Each artifact may be managed independently so if you need to go back to an earlier version of one artifact, changes you may have made to other artifacts are not lost.  By being tightly integrated into the graphical environment used to create and edit the project artifacts, it is extremely easy and straight-forward to move your files to and from the version control system and there is no dependency on another vendor’s version control system.  The built in version control system is optimized for managing the artifacts of ETL applications. It is equally important that the version control system supports all of the actions one typically performs such as rollbacks, locking and unlocking of files, and the ability to resolve conflicts.  Note that this particular ETL tool also has the capability to switch back and forth between multiple version control systems. It also needs to be easy to determine the status of an artifact.  Not just that it has been committed or modified, but when and by whom.  Generally you must query the version control system for this information, but having it displayed within the development environment is more desirable. Who’s ETL tool works in this fashion?  Last month I mentioned the data integration solution offered by expressor software.  The version control features I described in this post are all available in their just released expressor 3.1 Standard Edition through the integration of their expressor Studio development environment with a centralized metadata repository and version control system. You can download their Studio application, which is free, or evaluate the full Standard Edition on your own hardware.  It may be worth your time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Silverlight Cream for February 13, 2011 -- #1046

    - by Dave Campbell
    In this Issue: Loek van den Ouweland, Colin Eberhardt, Rudi Grobler, Joost van Schaik, Mike Taulty(-2-, -3-), Deborah Kurata, David Kelley, Peter Foot, Samuel Jack(-2-), and WindowsPhoneGeek(-2-). Above the Fold: Silverlight: "Silverlight Simple MVVM Commanding" Deborah Kurata WP7: "WP7 CustomInputPrompt control with Cancel button" WindowsPhoneGeek Expression Blend: "Silverlight Templated Image Button with two images" Loek van den Ouweland Shoutouts: Dave Campbell posted a write-up about the project he's on and the use of Sterling: Sterling Object-Oriented Database for ISO 1.0 Released!... Also see Jeremy Likness' post on the 1.0 release: Sterling Object-Oriented Database 1.0 RTM Not necessarily Silverlight, but darn cool, a great control by Sasha Barber: WPF : A Weird 3d based control snoutholder announced new content: Windows Phone 7 QuickStarts Live! From SilverlightCream.com: Silverlight Templated Image Button with two images Loek van den Ouweland has a video tutorial up for creating an ImageButton with a hover state... Expression Blend coolness, and check out the external links he has to their training site. Windows Phone 7 Performance Measurements – Emulator vs. Hardware Colin Eberhardt's latest is a popular post comparing performance metrics between the WP7 emulator and a real device. Mileage may vary, but I'm pretty sure the overall results are conclusive, and should help the way you view your app as you're building in the emulator. WP7: WebClient vs HttpWebRequest Rudi Grobler's latest is a discussion of WebClient and HttpWebRequest, gives coding examples of each plus discussion of why you may choose one over the other... and pay attention to his comment about mobile providers. A Blendable Windows Phone 7 / Silverlight clipping behavior Joost van Schaik posted this WP7/Silverlight clipping behavior he developed because all the other solutions were not blendable. Another really useful piece of code from Joost! Blend Bits 22–Being Stylish Mike Taulty has 3 more episodes in his Blend Bits series... first up is one on Styles... explicit, implicit, inheriting... you name it, he's covering it! Blend Bits 23–Templating Part 1 Mike Taulty then has the beginning of a series within his Blend Bits series on Templating. This is something you just have to either bite the bullet and go with Blend to do, or consume someone else's work. Mike shows us how to do it ourselves by tweaking the visual aspects of a checkbox. Blend Bits 24–Templating Part 2 In part 2 of the Templating series, Mike Taulty digs deeper into Blend and cracks open the Listbox control to take a bunch of the inner elements out for a spin... fun stuff and great tutorial, Mike! Silverlight Simple MVVM Commanding Deborah Kurata has another great MVVM post up... if you don't have your head wrapped around commanding yet, this is a good place to start that process... VB and C# as always. App Development for Windows Phone 7 101 David Kelley goes through the basics of producing a WP7 app both from the Silverlight and XNA side... good info and good external links to get you going. Copyable TextBlock for Windows Phone Peter Foot takes a look at the Copy/Paste functionality in WP7 and how to apply it to a TextBlock... which is NOT an out-of-the-box solution. How to deploy to, and debug, multiple instances of the Windows Phone 7 emulator Samuel Jack has a couple of posts up this week... first is this clever one on running multiple copies of the emulator at once... too cool for debugging a multi-player game!
Multi-player enabling my Windows Phone 7 game: Day 3 – The Server Side Samuel Jack's latest is a detailed look at his day 3 adventure of taking his multi-player game to WP7... lots of information and external links... what do you say, give him another day? :) WP7 CustomInputPrompt control with Cancel button WindowsPhoneGeek has a couple more posts up... first is this "CustomInputPrompt" control based off the InputPrompt from Coding4Fun. Implementing Windows Phone 7 DataTemplateSelector and CustomDataTemplateSelector In his latest post, WindowsPhoneGeek writes a DataTemplateSelector to allow different data templates for different list elements based on the type of the element. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for February 14, 2011 -- #1047

    - by Dave Campbell
    In this Issue: Mohamed Mosallem, Tony Champion, Gill Cleeren, Laurent Bugnion, Deborah Kurata, Jesse Liberty(-2-), Tim Heuer, Mike Taulty, John Papa, Martin Krüger, and Jeremy Likness. Above the Fold: Silverlight: "Binding to a ComboBox in Silverlight : A Gotcha" Tony Champion WP7: "An Ultra Light Windows Phone 7 MVVM Framework" Jeremy Likness Shoutouts: Steve Wortham has a post up discussing Silverlight 5, HTML5, and what the future may bring From SilverlightCream.com: Silverlight 4.0 Tutorial (12 of N): Collecting Attendees Feedback using Windows Phone 7 Mohamed Mosallem is up to number 12 in his Silverlight tutorial series. He's continuing his RegistrationBooth app, but this time, he's building a WP7 app to give attendee feedback. Binding to a ComboBox in Silverlight : A Gotcha If you've tried to bind to a combobox in Silverlight, you've probably either accomplished this as I have (with help) by having it right once, and continuing, but Tony Champion takes the voodoo out of getting it all working. Getting ready for Microsoft Silverlight Exam 70-506 (Part 5) Gill Cleeren has Part 5 of his exam preparation post up on SilverlightShow. As with the others, he provides many external links to good information. Referencing a picture in another DLL in Silverlight and Windows Phone 7 Laurent Bugnion explains the pitfalls and correct way to reference an image from a dll... good info for loading images such as icons for Silverlight in general and WP7 also. Silverlight MVVM Commanding II Deborah Kurata has a part 2 up on MVVM Commanding. The first part covered the built-in commanding for controls that inherit from ButtonBase... this post goes beyond that into other Silverlight controls. Reactive Drag and Drop Part 1 This Drag and Drop with Rx post by Jesse Liberty is the 4th in his Rx series. He begins with a video from the Rx team and applies reactive programming to mouse movements. Yet Another Podcast #24–Reactive Extensions On the heels of his previous post on Rx, in his latest 'Yet Another Podcast', Jesse Liberty chats with Matthew Podwysocki and Bart De Smet about Reactive Extensions. Silverlight 4 February 2011 Update Released Today Tim Heuer announced the release of the February 2011 Silverlight 4 release. Check out Tim's post for information about what's contained in this release. Blend Bits 25–Templating Part 3 In his 3rd Templating tutorial in BlendBits, Mike Taulty demonstrates the 'Make into Control' option rather than the other way around. Silverlight TV 61: Expert Chat on Deep Zoom, Touch, and Windows Phone John Papa interviews David Kelley in the latest Silverlight TV... David is discussing touch in Silverlight and for WP7 and his WP7 apps in the marketplace. Simple Hyperlinkbutton style Martin Krüger has a cool Hyperlink style available at the Expression Gallery. Interesting visual for entertaining your users. An Ultra Light Windows Phone 7 MVVM Framework Jeremy Likness takes his knowledge of MVVM (Jounce), and WP7 and takes a better look at what he'd really like to have for a WP7 framework. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on the web app development and deployment process. Here is some background information. I use PHP for server-side scripting and JavaScript for the client side. I use Netbeans and Notepad++. I use Firefox and Firebug for debugging and testing. The process I use is very amateurish: I code something in Netbeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and faster compared to the Java development environment where you would have to at least compile and deploy the jar files before you could run them. I have been thinking of putting a formal process into my development and find it hard to put together. There are so many things to do before you can deploy your final web app. I keep hearing about jslint, compression, unit testing (selenium), Ant, YUI compressor, etc., but I am now looking for some steps that I can take to make me more organized. For example, I use Netbeans but don't use any projects within it. I directly update the files. I don't use any source control but use my Iomega backup that saves each save into a different version, and at the end of the day I back up the dev directory to my Amazon s3 account. For me, the development environment is just a DEV directory, TEST is my intermediate stage and PROD is the final directory that gets pushed out to the server. But all these directories are in the same apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach. I know I am missing the following - Regression testing (manual or automated ??) - automated testing (selenium ??) - automated deployment (ANT ??) - source control (svn ??) - quality control (jslint ??) Can someone explain what the missing steps are and how to go about filling them in order to have a more professional approach? I am looking for tools with example tutorials on streamlining the whole development-to-deployment stage. For me, just getting a hang of database, server side and client side development all in synchronization was itself a huge accomplishment. And now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of using automated testing, but how do you put it to use with respect to JavaScript and PHP? How do you use Ant for deployment, etc.? Is this all too much for a single or two person development team? Is there a way to automate all the above so that I just keep coding in Netbeans and then run a batch file that is configured once, and run it every time to produce the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me, I would be happy to consolidate it here. Thank you for your patience :)

    Read the article

  • LibGDX Box2D Body and Sprite AND DebugRenderer out of sync

    - by Free Lancer
    I am having a couple issues with Box2D bodies. I have a GameObject holding a Sprite and Body. I use a ShapeRenderer to draw an outline of the Body's and Sprite's bounding boxes. I also added a Box2DDebugRenderer to make sure everything's lining up properly. My problem is the Sprite and Body at first overlap perfectly, but as I turn the Body moves a bit off the sprite then comes back when the Car is facing either North or South. Here's an image of what I mean: (Not sure what that line is, first time to show up) BLUE is the Body, RED is the Sprite, PURPLE is the Box2DDebugRenderer. Also, you probably noticed a purple square in the top right corner. Well that's the Car drawn by the Box2D Debug Renderer. I thought it might be the camera but I've been playing with the Cameras for hours and nothing seems to work. All give me weird results. Here's my code: Screen: public void show() { // --------------------- SETUP ALL THE CAMERA STUFF ------------------------------ // battleStage = new Stage( 720, 480, false ); // Setup the camera. In Box2D we operate on a meter scale, pixels won't do it. So we use // an Orthographic camera with a Viewport of 24 meters in width and 16 meters in height. battleStage.setCamera( new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ) ); battleStage.getCamera().position.set( CAM_METER_WIDTH / 2, CAM_METER_HEIGHT / 2, 0 ); // The Box2D Debug Renderer will handle rendering all physics objects for debugging debugger = new Box2DDebugRenderer( true, true, true, true ); //debugCam = new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ); } public void render(float delta) { // Update the Physics World, use 1/45 for something around 45 Frames/Second for mobile devices physicsWorld.step( 1/45.0f, 8, 3 ); // 1/45 for devices // Set the Camera matrices and clear the screen Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); battleStage.getCamera().update(); // Draw game objects here battleStage.act(delta); battleStage.draw(); // Again update the Camera matrices and call the debug renderer debugCam.update(); debugger.render( physicsWorld, debugCam.combined); // Vehicle handles its own interaction with the HUD // update all Actors movements in the game Stage hudStage.act( delta ); // Draw each Actor onto the Scene at their new positions hudStage.draw(); } Car: (extends Actor) public Car( Texture texture, float posX, float posY, World world ) { super( "Car" ); mSprite = new Sprite( texture ); mSprite.setSize( WIDTH * Consts.PIXEL_METER_RATIO, HEIGHT * Consts.PIXEL_METER_RATIO ); mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2); // set the origin to be at the center of the body mSprite.setPosition( posX * Consts.PIXEL_METER_RATIO, posY * Consts.PIXEL_METER_RATIO ); // place the car in the center of the game map FixtureDef carFixtureDef = new FixtureDef(); mBody = Physics.createBoxBody( BodyType.DynamicBody, carFixtureDef, mSprite ); } public void draw() { mSprite.setPosition( mBody.getPosition().x * Consts.PIXEL_METER_RATIO, mBody.getPosition().y * Consts.PIXEL_METER_RATIO ); mSprite.setRotation( MathUtils.radiansToDegrees * mBody.getAngle() ); // draw the sprite mSprite.draw( batch ); } Physics: (Create the Body) public static Body createBoxBody( final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) { float pRotation = 0; float pWidth = pSprite.getWidth(); float pHeight = pSprite.getHeight(); final BodyDef boxBodyDef = new BodyDef(); boxBodyDef.type = pBodyType; boxBodyDef.position.x = pSprite.getX() / Consts.PIXEL_METER_RATIO; 
boxBodyDef.position.y = pSprite.getY() / Consts.PIXEL_METER_RATIO; // Temporary Box shape of the Body final PolygonShape boxPoly = new PolygonShape(); final float halfWidth = pWidth * 0.5f / Consts.PIXEL_METER_RATIO; final float halfHeight = pHeight * 0.5f / Consts.PIXEL_METER_RATIO; boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite pFixtureDef.shape = boxPoly; final Body boxBody = BattleScreen.getPhysicsWorld().createBody(boxBodyDef); boxBody.createFixture(pFixtureDef); return boxBody; } Sorry for all the code and long description but it's hard to pin down what exactly might be causing the problem.
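    A frequent cause of exactly this kind of drift is mixed anchor points: the Box2D body position refers to the centre of the box fixture, while libGDX's Sprite.setPosition expects the bottom-left corner (setOrigin only affects rotation and scaling, not where the sprite is placed). The helper below is only a sketch of that idea, not a confirmed fix for the code above; the class name and the pixelsPerMeter parameter are made up for illustration.

        import com.badlogic.gdx.graphics.g2d.Sprite;
        import com.badlogic.gdx.math.MathUtils;
        import com.badlogic.gdx.physics.box2d.Body;

        public final class SpriteSync {
            private SpriteSync() { }

            // Copies a Box2D body's transform onto a libGDX sprite, shifting by half the
            // sprite size because the body position is the fixture's centre while
            // Sprite.setPosition places the bottom-left corner.
            public static void apply(Body body, Sprite sprite, float pixelsPerMeter) {
                float centerX = body.getPosition().x * pixelsPerMeter;
                float centerY = body.getPosition().y * pixelsPerMeter;
                sprite.setOrigin(sprite.getWidth() / 2f, sprite.getHeight() / 2f);
                sprite.setPosition(centerX - sprite.getWidth() / 2f,
                                   centerY - sprite.getHeight() / 2f);
                sprite.setRotation(MathUtils.radiansToDegrees * body.getAngle());
            }
        }

    If that is the issue, the same half-size offset would also be needed in createBoxBody when the sprite's bottom-left position is converted into the body's starting position, so the two conversions stay consistent.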

    Read the article

  • O the Agony - Merging Scrum and Waterfall

    - by John K. Hines
    If there's nothing else to know about Scrum (and Agile in general), it's this: You can't force a team to adopt Agile methods.  In all cases, the team must want to change. Well, sure, you could force a team.  But it's going to be a horrible, painful process with a huge learning curve made even steeper by the lack of training and motivation on behalf of the team.  On a completely unrelated note, I've spent the past three months working on a team that was formed by merging three separate teams.  One of these teams has been adopting and using Agile practices like Scrum since 2007, the other was in continuous bug fix mode, releasing on average one new piece of software per year using semi-Waterfall methods.  In particular, one senior developer on the Waterfall team didn't see anything in Agile but overhead. Fast forward through three months of tension, passive resistance, process pushback, and you have seven people who want to change and one who explicitly doesn't.  It took two things to make Scrum happen: The team manager took a class called "Agile Software Development using Scrum". The team lead explained the point of Agile was to reduce the workload of the senior developer, with another senior developer and the manager present. It's incredible to me how a single person can strongly influence the direction of an entire team.  Let alone if Scrum comes down as some managerial decree onto a functioning team who have no idea what it is.  Pity the fool. On the bright side, I am now an expert at drawing Visio process flows.  And I have some gentle advice for any first-level managers: If you preside over a team process change, it's beneficial to start the discussion on how the team will work as early as possible.  You should have a vision for this and guide the discussion, even if decisions are weeks away.  Don't always root for the underdog.  It's been my experience that managers who see themselves as compassionate and caring spend a great deal of time understanding and advocating for the one person on the team who feels left out.  Remember that by focusing on this one person you risk alienating the rest of the team, allow tension to build, and delay the resolution of the problem. My way would have been to decree Scrum, force all of my processes on everyone else, and use the past three months ironing out the kinks.  Which takes us all the way back to point number one. Technorati tags: Scrum Scrum Process Scrum and Waterfall

    Read the article

  • ASP.Net or WPF(C#)?

    - by Rachel
    Our team is divided on this and I wanted to get some third-party opinions. We are building an application and cannot decide if we want to use a .Net WPF Desktop Application with a WCF server, or an ASP.Net web app using jQuery. I thought I'd ask the question here, with some specs, and see what the pros/cons of using either side would be. I have my own favorite and feel I am biased. Ideally we want to build the initial release of the software as fast as we can, then slow down and take time to build in the additional features/components we want later on. Above all we want the software to be fast. Users go through records all day long and delays in loading records or refreshing screens kill their productivity. Application Details: I'm estimating around 100 different screens for the initial version, with plans for a lot of additional screens being added on later after the initial release. We are looking to use two-way communication for reminder and event systems Currently has to support around 100 users, although we've been told to allow for growth up to 500 users We have multiple locations Items to consider (maybe not initially in some cases but in future releases): Room for additional components to be added after initial release (there are a lot of these... perhaps more work here than the initial application) Keyboard navigation Performance is a must Production Speed to initial version Low maintenance overhead Future support Softphone/Scanner integration Our Developers: We have 1 programmer who has been learning WPF the past few months and was the one who suggested we use WPF for this. We have a 2nd programmer who is familiar with ASP.Net and who may help with the project in the future, although he will not be working on it much up until the initial release since his time is spent maintaining our current software. There is me, who has worked with both and am comfortable in either. We have an outside company doing the project management, and they are an ASP.Net company. We plan on hiring 1-2 others; however, we need to know what direction we are going in first. Environment: General users are on a Windows 2003 server with Terminal Services. They connect using WYSE thin-clients over an RDP connection. Admin staff have their own PCs with XP or higher. Users are allowed to specify their own resolution although they are limited to using IE as the web browser. Other locations connect to our network over an MPLS connection. Based on that, what would you choose and why? I am asking here instead of SO because I am looking for opinions and not answers.

    Read the article

  • When should complexity be removed?

    - by ElGringoGrande
    Prematurely introducing complexity by implementing design patterns before they are needed is not good practice. But if you follow all (or even most of) the SOLID principles and use common design patterns you will introduce some complexity as features and requirements are added or changed to keep your design as maintainable and flexible as needed. However, once that complexity is introduced and working like a champ, when do you remove it? Example. I have an application written for a client. When originally created there were several ways to give raises to employees. I used the strategy pattern and a factory to keep the whole process nice and clean. Over time certain raise methods were added or removed by the application owner. Time passes and a new owner takes over. This new owner is hard-nosed, keeps everything simple and only has one single way to give a raise. The complexity needed by the strategy pattern is no longer needed. If I were to code this from the requirements as they are now I would not introduce this extra complexity (but make sure I could introduce it with little or no work should the need arise). So do I remove the strategy implementation now? I don't think this new owner will ever change how raises are given. But the application itself has demonstrated that this could happen. Of course this is just one example in an application where a new owner takes over and has simplified many processes. I could remove dozens of classes, interfaces and factories and make the whole application much simpler. Note that the current implementation does work just fine and the owner is happy with it (and surprised and even happier that I was able to implement her changes so quickly because of the discussed complexity). I admit that a small part of this doubt is because it is highly likely the new owner isn't going to use me any longer. I don't really care that somebody else will take this over since it has not been a big income generator. But I do care about 2 (related) things. I care a bit that the new maintainer will have to think a bit harder when trying to understand the code. Complexity is complexity and I don't want to anger the psycho maniac coming after me. But even more I worry about a competitor seeing this complexity and thinking I just implement design patterns to pad my hours on jobs. Then spreading this rumor to hurt my other business. (I have heard this mentioned.) So... In general should previously needed complexity be removed even though it works and there has been a historically demonstrated need for the complexity but you have no indication that it will be needed in the future? Even if the question above is generally answered "no", is it wise to remove this "un-needed" complexity if handing off the project to a competitor (or stranger)?
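    The strategy-plus-factory arrangement described above, next to the one-method form it would collapse into, looks roughly like the following in Java. This is purely an illustrative sketch; the class names and the 3% rate are hypothetical, not taken from the actual application.

        import java.math.BigDecimal;

        interface RaiseStrategy {
            BigDecimal apply(BigDecimal salary);
        }

        class PercentRaise implements RaiseStrategy {
            private final BigDecimal rate;
            PercentRaise(BigDecimal rate) { this.rate = rate; }
            @Override
            public BigDecimal apply(BigDecimal salary) {
                return salary.add(salary.multiply(rate));
            }
        }

        class RaiseStrategyFactory {
            // Formerly several entries; under the new owner only one is ever requested.
            RaiseStrategy forType(String type) {
                if ("percent".equals(type)) return new PercentRaise(new BigDecimal("0.03"));
                throw new IllegalArgumentException("Unknown raise type: " + type);
            }
        }

        // The simplified form being weighed up: one method, no strategy, no factory.
        class Payroll {
            private static final BigDecimal RATE = new BigDecimal("0.03");
            BigDecimal applyRaise(BigDecimal salary) {
                return salary.add(salary.multiply(RATE));
            }
        }

    Seeing the two side by side makes the trade-off concrete: the collapsed version is easier to read, but reintroducing a second raise type later means recreating the interface and factory rather than just registering one more strategy.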

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. This blog will show a small sample to demonstrate this. In this sample we will create a Pie and a Bar using the same BAM data control, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip. Set-up: Create a BAM data control using the Employees DO (sample). Steps: Right click on the View Controller project and select "New->ADF Task Flow". Check "Create Bounded Task Flow", give a meaningful name (ex: EmpPieTF.xml) to the Task Flow (TF) and click "OK" (screenshot: CreateTF.bmp). From the "Components Palette", drag and drop "View" into the task flow diagram. Give a meaningful name to the view. Double click and click "Ok" for "Create New JSF Page Fragment". From "Data Controls" drag and drop "Employees->Query" into this jsff page as "Graph->Pie" (Pie: Sales_Number and Slices: Salesperson). Repeat steps 1 through 4 for another Task Flow (ex: EmpBarTF). From "Data Controls" drag and drop "Employees->Query" into this jsff page as "Graph->Bar" (Bars: Sales_Number and X-axis: Salesperson). Open the Task Flow created in step 2. In the Structure Pane, right click on "Task Flow Definition - EmpPieTF". Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope". Click "OK" (screenshot: TFDCScope.bmp). For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated. Repeat steps 8 through 11 for the 2nd Task Flow created. Now create a new jspx page, for example Main.jspx. Drag and drop both Task Flows (ex: "EmpPieTF" and "EmpBarTF") as regions. Surround with panel components as needed. Run the page Main.jspx (screenshot: MainPage.bmp). Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component will have a separate instance of the data control.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself. Can be used interactively - there is some environment where programmers can enter commands one by one Requires no more than one file - neither project files nor makefiles are required for running in batch mode Can easily split code across multiple files - files can reference each other, or there is some support for modules Has good support for data structures - supports structures like arrays, lists, and especially classes Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature           C#       Python   shell scripting
    ---------------   ------   ------   ---------------
    Interactive       poor     strong   strong
    One file          poor     strong   strong
    Multiple files    strong   strong   moderate
    Data structures   strong   strong   poor
    Features          strong   strong   strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates. Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data and have been looking for an easy-to-use embedded database.  I really like db4o but I'm not wanting to open source this particular project so it was not an option.  Then I remembered that there was an ADO.NET provider for sqlite.  Being a fan of sqlite in general, I downloaded it and gave it an install.  The installer added tooling support for both Visual Studio 2008 & 2010 which is nice because I am working almost exclusively in 2010 at the moment.  I noticed that the provider also had support for Entity Framework, but not specifically v4.  I created a database using the tools that get installed with Visual Studio and all seemed to work fine.  I went on to create an Entity Framework context and selected the sqlite database and to my surprise it worked without any problems.  The model showed up just like it would for any database and so I started to write a little code to test and then.. BAM!.. Exception. "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information." A quick bit of searching on Bing found the answer.  To get it working, you need to include the following code in your web.config file: <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0" /> </startup> And then everything magically works.  Entity Framework 4 features worked, like lazy loading and even the POCO templates worked.  The only thing that didn't work was the model first development.  The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications. The only other oddity I found was that in order to have an auto incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick.  This translates to an Int64, or a long when working with it in Entity Framework.  Not a big deal, but something you need to be aware of. All in all, I am quite impressed with the Entity Framework support I found with sqlite.  I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/.  If you want to use an embedded database with Entity Framework, give it a look.  It will be well worth your time.

    Read the article

  • How do you structure computer science University notes?

    - by Sai Perchard
    I am completing a year of postgraduate study in CS next semester. I am finishing a law degree this year, and I will use this to briefly explain what I mean when I refer to the 'structure' of University notes. My preferred structure for authoring law notes: Word Two columns 0.5cm margins (top, right, bottom, middle, left) Body text (10pt, regular), 3 levels of headings (14/12/10pt, bold), 3 levels of bulleted lists Color A background for cases Color B background for legislation I find that it's crucial to have a good structure from the outset. My key advice to a law student would be to ensure styles allows cases and legislation to be easily identified from supporting text, and not to include too much detail regarding the facts of cases. More than 3 levels of headings is too deep. More than 3 levels of a bulleted list is too deep. In terms of CS, I am interested in similar advice; for example, any strategies that have been successfully employed regarding structure, and general advice regarding note taking. Has latex proved better than Word? Code would presumably need to be stylistically differentiated, and use a monospaced font - perhaps code could be written in TextMate so that it could be copied to retain syntax highlighting? (Are notes even that useful in a CS degree? I am tempted to simply use a textbook. They are crucial in law.) I understand that different people may employ varying techniques and that people will have personal preferences, however I am interested in what these different techniques are. Update Thank you for the responses so far. To clarify, I am not suggesting that the approach should be comparable to that I employ for law. I could have been clearer. The consensus so far seems to be - just learn it. Structure of notes/notes themselves are not generally relevant. This is what I was alluding to when I said I was just tempted to use a textbook. Re the comment that said textbooks are generally useless - I strongly disagree. Sure, perhaps the recommended textbook is useless. But if I'm going to learn a programming language, I will (1) identify what I believe to be the best textbook, and (2) read it. I was unsure if the combination of theory with code meant that lecture notes may be a more efficient way to study for an exam. I imagine that would depend on the subject. A subject specifically on a programming language, reading a textbook and coding would be my preferred approach. But I was unsure if, given a subject containing substantive theory that may not be covered in a single textbook, people may have preferences regarding note taking and structure.

    Read the article

  • Create Image Maps with GIMP

    - by SGWellens
    Having a clickable image in a web page is not a big deal. Having an image in a web page with clickable hotspots is a big deal. The powerful GIMP editor has a tool to make creating clickable hotspots much easier. GIMP stands for GNU Image Manipulation Program. Its home page and download links are here: http://www.gimp.org/ (it is completely free). Beware: GIMP is an extraordinarily advanced and powerful image editor. If you wish to use it for general image editing tasks, you have a steep learning curve to climb. FYI: I used it to create the shadows you see on the images below. Fortunately, the tool to make Image Maps is separate from the main program. To start, open an image with GIMP or drag and drop an image onto the GIMP main window. I'm using the image of a bar graph. Next, we have to find the Image Map tool and launch it (Filters->Web->Image Map…): Why is the Image Map tool under Filters and not Tools? I don't know. It's a mystery—much like the Loch Ness Monster, the Bermuda Triangle, or why my socks keep disappearing when I do laundry. I swear I've got twenty single unmatched socks. But I digress… Here is what the Image Map tool looks like: If we click the blue 'I' button, we can add information to the Image Map: Now we'll use the rectangle tool to create some clickable hotspots. Select the Blue Rectangle tool, drag a rectangle, click when done and you'll get something like this: You can also make circle/oval and polygon areas. You can edit all the parameters of an image map area after drawing it. Rectangle settings (for fine tweaking): JavaScript functions (it's up to you to write them): Here is a setup with two rectangles and one polygon area: When you hit Save, a map file is generated that looks something like this: Paste the contents into a web page and you are almost there. I made some tweaks before it became usable: Replaced &apos; with apostrophes in the javascript functions. Changed the image path so it would find the image in my images directory. Tweaked the href URLs. Added Title="Some Text" to get tool tips. Cleaned out the comments. Result: The final markup (with JavaScript function): function ImageMapMouseHover(Msg) { $("#Label1").html(Msg); } It may seem like a lot of bother, but the tool does the heavy lifting: i.e. the coordinates. Getting the regions positioned and sized is easy using a visual tool…much better than doing it by hand. This, of course, isn't a full treatise on the tool but it should give you enough information to decide if it's helpful. I hope someone finds this useful. Steve Wellens

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by Bruce Tierney
    The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps April 1, 2012 Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousand of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls. Supporting Quotes “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets”, said Oracle representative Sabrina Twich, “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.” “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong” said the Director of Oracle Exaskeleton, “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don’t want to be in this stupid press release anyway” Releases An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs) since technologies vendors have used up almost all of the 17,576 TLA permutations (TLAPs). About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it. Trademarks Really…you’re still reading this? Cool! Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • 30 Steps to Master ASP.NET MVC Application development

    - by Rajesh Pillai
    Welcome Readers! I am starting out a new series on ASP.NET MVC skill building which will be posted over the next couple of weeks.  Let me know your thoughts on the content I have planned; a couple of the items have been taken from the ASP.NET MVC2 Cookbook. (NOTE: Only the heading has been taken, the content will not be :)).   Do let me know what you would like to see, or any additional inputs or ideas to cover in these topics.  The 30 steps are outlined below for quick reference.  Will start filling this out quickly.   Outlined are the 30 steps to master ASP.NET MVC.   A Peek Into Model What is a model? Different types of model Presentation/ViewModel Model Mapping (AutoMapper)   A Peek into View How view works in ASP.NET MVC? View Engine Design Custom View Engine View Best Practices Templated Helpers Partial Views   A Peek into Controller Introduction Controller Design Controller Best Practices Asynchronous Controller Custom Action Result Action Filters Controller Factory to use with IOC   Routes Explanation Routes from the database Routes from XML More complex routing   Master Pages Basics Setting Master Page Dynamically   Working with data in the view Repeating Views Array of check boxes Array of radio buttons Paged data CRUD Client side action Confirmation Dialog (modal window) jqGrid   Working with Forms   Validation Model Validation with DataAnnotations Using the xVal validation framework Client side validation with jQuery Validation Fluent Validation Model Binders   Templating Create strongly typed helper using T4 Custom View Templates with T4 Create custom MVC project template using T4   IOC AutoFac Ninject Unity Application   Areas   jQuery, Ajax and jQuery Plugins   State Maintenance Application State User state Cookies Webfarm   Error Handling View error handling Controller error handling ELMAH (Error Logging Modules and Handlers)   Authentication and Authorization User Registration form SignOn Process Password Reminder Membership and Roles Windows authentication Restricting access to all pages Restricting access to selected pages Restricting access to pages by role Restricting access to a controller Restricting access to selected area   Profiles and Themes Using Profiles Inheriting a Profile Migrating an anonymous profile Creating custom themes Using themes User personalized themes   Configuration Adding custom application settings in web.config Displaying custom error messages Accessing other web.config configuration elements Adding custom configuration elements to web.config Encrypting web.config sections   Tracing, Debugging and Logging   Caching Caching a whole page Caching pages based on route details Caching pages based on browser type and version Caching pages based on custom strings Caching partial pages Caching application data Object Caching Using Microsoft Velocity Using MemCache Using AppFabric cache   Localization   HTTP Handlers and Modules   Security XSS/CSRF AntiForgery Encoding   HtmlHelpers Strongly typed helpers Writing custom helpers   Repository Pattern (Data access)   WF/WCF   Unit Testing   Mocking Framework   Integration Testing   Load / Performance Testing   Deployment    Once again let me know your thoughts on this.   Till then, Enjoy MVC'ing!!!

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas: Game logic and UI, as I did with a lot of puzzle games: The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess) The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces) The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid, and draws it by converting logical sizes into pixel sizes (that is, multiplies it by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works: Player starts to drag a piece UI asks game logic for the legal movement area of that piece and lets the player drag it within that area Player lets go of a piece UI snaps the piece to the grid (so that it is at a valid logical position) UI tells game logic the new logical position (via mutator methods, which I'd rather avoid) I'm not quite happy with that: I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: Stopping the piece from colliding with others or the boundary and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state, I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI. I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that: Is such a physics layer common, or is it more typical to have the game logic layer do this? Would the snapping to grid and piece dragging code belong to the UI or the physics layer? Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer? I've seen event-based collision detection in a game's code base once, that is, the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call a onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first? [1] layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
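    One common way to get the snapping and legality checks out of the UI is to let the logic (or physics) layer answer questions purely in grid units, so the UI only converts between pixels and cells and never decides legality itself. The sketch below is just one possible shape for that boundary; the interface, the Range class, and the method names are hypothetical rather than taken from any particular engine.

        // Logic layer: works purely in grid units, knows nothing about pixels.
        interface PuzzleLogic {
            // Inclusive range of legal columns for a piece being dragged horizontally.
            Range legalColumns(int pieceId);
            // Commit a move expressed in grid units; returns false if it was illegal.
            boolean movePieceTo(int pieceId, int column, int row);
        }

        final class Range {
            final int min, max;
            Range(int min, int max) { this.min = min; this.max = max; }
            int clamp(int value) { return Math.max(min, Math.min(max, value)); }
        }

        // UI layer: converts pointer pixels to cells and clamps to what the logic allows.
        final class DragController {
            private static final int CELL_PIXELS = 32;   // UI-only constant
            private final PuzzleLogic logic;

            DragController(PuzzleLogic logic) { this.logic = logic; }

            // Called while the player drags; returns the pixel X to draw the piece at.
            int dragTo(int pieceId, int pointerPixelX) {
                int wanted = Math.round(pointerPixelX / (float) CELL_PIXELS);
                return logic.legalColumns(pieceId).clamp(wanted) * CELL_PIXELS;
            }

            // Called when the player releases the piece.
            void drop(int pieceId, int pointerPixelX, int row) {
                int column = logic.legalColumns(pieceId).clamp(
                        Math.round(pointerPixelX / (float) CELL_PIXELS));
                logic.movePieceTo(pieceId, column, row);
            }
        }

    With a boundary like this the snapping arithmetic lives in one small UI class, the legality rules live entirely in the logic layer, and the logic layer can be unit tested by asserting on legalColumns and movePieceTo without any pixels involved.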

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted. It is not just a change in architecture, but a change in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level. State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value, and the notion of "latest" takes a lot of defining in this world of rapidly mutating state, the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level. I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years. Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have been becoming more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is the tooling support that will enable the widespread adoption of functional languages - without this support, people will put off any transition into the functional world for as long as they possibly can. Without tool support it will also be hard to learn these languages. One tool that doesn't currently support F# is Reflector.
The idea of decompiling IL to a functional language is daunting, but F# is potentially so important I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, and of the insights it gave us on ways to improve the tool.

    Read the article

  • Tile engine Texture not updating when numbers in array change

    - by Corey
    I draw my map from a txt file. I am using Java with the Slick2D library. When I print the array, the number changes in the array, but the texture doesn't change.

    public class Tiles {
        public Image[] tiles = new Image[5];
        public int[][] map = new int[64][64];
        public Image grass, dirt, fence, mound;
        private SpriteSheet tileSheet;
        public int tileWidth = 32;
        public int tileHeight = 32;

        public void init() throws IOException, SlickException {
            tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
            grass = tileSheet.getSprite(0, 0);
            dirt = tileSheet.getSprite(7, 7);
            fence = tileSheet.getSprite(2, 0);
            mound = tileSheet.getSprite(2, 6);
            tiles[0] = grass;
            tiles[1] = dirt;
            tiles[2] = fence;
            tiles[3] = mound;

            int x = 0, y = 0;
            BufferedReader in = new BufferedReader(new FileReader("assets/map.dat"));
            String line;
            while ((line = in.readLine()) != null) {
                String[] values = line.split(",");
                x = 0;
                for (String str : values) {
                    int str_int = Integer.parseInt(str);
                    map[x][y] = str_int;
                    //System.out.print(map[x][y] + " ");
                    x++;
                }
                //System.out.println("");
                y++;
            }
            in.close();
        }

        public void update(GameContainer gc) {
        }

        public void render(GameContainer gc) {
            for (int y = 0; y < map.length; y++) {
                for (int x = 0; x < map[0].length; x++) {
                    int textureIndex = map[x][y];
                    Image texture = tiles[textureIndex];
                    texture.draw(x * tileWidth, y * tileHeight);
                }
            }
        }
    }

Mouse picking, where I change the number in the array:

    Input input = gc.getInput();
    gc.getInput().setOffset(cameraX - 400, cameraY - 300);
    float mouseX = input.getMouseX();
    float mouseY = input.getMouseY();
    double mousetileX = Math.floor((double) mouseX / tiles.tileWidth);
    double mousetileY = Math.floor((double) mouseY / tiles.tileHeight);
    double playertileX = Math.floor(playerX / tiles.tileWidth);
    double playertileY = Math.floor(playerY / tiles.tileHeight);
    double lengthX = Math.abs((float) playertileX - mousetileX);
    double lengthY = Math.abs((float) playertileY - mousetileY);
    double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));
    if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
        System.out.println("Clicked");
        if (tiles.map[(int) mousetileX][(int) mousetileY] == 1) {
            tiles.map[(int) mousetileX][(int) mousetileY] = 0;
        }
    }

I never ask a question until I have tried to figure it out myself, and I have been stuck with this problem for two weeks. It's not like this site is made for asking questions or anything, so if you actually try to help me instead of telling me to use a debugger, thank you. You either get told you have too much or too little code - nothing is ever enough for the people on here; it's as bad as something like reddit. I don't know what is wrong: all my textures work when I render them, the tile on screen just doesn't update when the number in the array changes. I am obviously debugging when I say that I was printing the array and the number changes like it should, so it's not a problem with my mouse picking code. It is a problem with my textures, but I don't know what, because they all render correctly. That is why I need help.

    Read the article

  • SharePoint 2010 Hosting - ASPHostPortal :: Installing SSRS 2008 R2 on SharePoint 2010

    - by mbridge
    What do you need first? Please download the SQL Server® 2008 R2 November CTP Reporting Services Add-in for Microsoft SharePoint® Technologies 2010, then follow these steps:

1. Install a SharePoint technology instance. (Already did this when installing PowerPivot with SharePoint.)
2. Install SQL Server 2008 R2 November CTP Reporting Services and specify that the report server use SharePoint Integrated mode.
3. Configure Reporting Services.
4. Download the Reporting Services Add-in by clicking the rsSharePoint.msi link later on this page. To start the installation immediately, click Run.

After installing Reporting Services and the add-in, your report server is ready to be integrated with SharePoint; in SharePoint 2010 we have some new admin screens. To integrate, go to Central Administration, General Application Settings. When you have successfully installed the add-in, a Reporting Services icon will be there. Click Reporting Services Integration. Add the report server web service URL (to get the URL, open the Reporting Services Configuration tool, connect to the report server, and click Web Service URL; click the URL to verify it works, then copy the URL and paste it into Report Server Web Service URL). Select your authentication mode (Windows authentication is preferred), add a username and password of your admin account, and click OK to configure and start the integration. After the installation you can set the Reporting Services defaults.

What has changed in SharePoint 2010 is that there isn't a report library available; you have to add content types to a default library. So go to a site collection, Site Actions, View All Site Content, and create an Asset library. Now we have to make sure we can add reports to the library. To do this we have to add content types: open the library, click Library Tools, Library Settings, and under Content Types click Add from existing site content types. In the Select Content Types section, in Select site content types from, click the arrow to select Reporting Services. In the Available Site Content Types list, click Report Builder, Report Data Source and Report, and then click Add to move the selected content types to the Content types to add list. Now we are ready to upload reports and execute them from within our webparts.

Another interesting post:
- Integrating SharePoint 2010 and SQL 2008 R2

    Read the article

  • MIXing it Up a Bit

    - by andrewbrust
    Another March, another MIX. For the fifth year running now, Microsoft has chosen to put on a conference aimed less at software development, per se, and more at the products, experiences and designs that software development can generate. In all four prior MIX events, the focus of the show, its keynotes and breakout sessions has been on Web products. On day 1 of MIX 2010 that focus shifted to Windows Phone 7 Series (WP7).

What little we had seen of WP7 had been shown to us in a keynote presentation, given by Microsoft's Joe Belfiore, at the Mobile World Congress in Barcelona, Spain last month. And today, Mr. Belfiore reprised his showmanship for the MIX 2010 audience. Joe showed us the ins and outs of WP7 and, in a breakout session, even gave us a sneak peek of Office (specifically, Excel) on WP7. We didn't get to see that one month ago in Barcelona, nor did we get to see email messages opened for reading, which we saw today.

But beyond a tour of the phone itself, impressive though that is, we got to see apps running on it. Those apps included Associated Press news, Seesmic (a major Twitter client) and Foursquare (a social media darling). All three ran, ran well, and looked markedly different from, and better than, their corresponding versions on iPhone and Android. And the games we saw looked even better.

To me, though, the best demos involved the creation of WP7 apps, using Silverlight in Visual Studio and Expression Blend. These demos were so effective because they showed important apps being built in very few steps, and by Microsoft executives to boot. Scott Guthrie showed us how to build a Twitter API app in Visual Studio. Jon Harris showed us how to build a photo management and viewer application in Expression Blend, using virtually no code. Demos of apps built from scratch to F5, without the benefit of a teacher, could be challenging. But they went off fine, without a hitch and without a ton of opaque, generated code. Everything written, be it C# or XAML, was easily understood, and the results were impressive.

That means lots of developers can do this, and I think it means a lot will. What I've seen, thus far, of iPhone and Android development looks very tedious by comparison. Development for those platforms involves a collection of tools that integrate only to a point. Dev work for WP7 involves Visual Studio, Silverlight and the same debugging experience .NET developers already know. This was very exciting for me.

All the demos harkened back to the days of building apps with Visual Basic: design the front end, put in the code-behind, and then hit F5. And that makes sense, because the phone platform and the PC of the early 90s are both, essentially, client OS machines. The Web was minimal and the "device" was everything. The same is true of this phone. It's a client app contraption that fits in your pocket.

And if the platforms are comparable, hopefully so too will be the draw of ease-of-development. WP7 has the potential to make mobile developers want to switch over, and to convince enterprise developers to get into the phone scene. Will this propel the new phone platform to new heights, and restore Microsoft's competitiveness in the mobile arena? I hope so. I think so. And if Microsoft uses developers to build themselves a victory, that would be beneficial and would show that Microsoft has learned from its failures, as well as its successes. Today I saw a few beautiful apps.
Tomorrow I hope I see a slew of others; maybe not as polished, but plentiful, attractive and stable.  That would be a victory for Microsoft, and for developers.  And it would show everyone else that developers are the kingmakers.  They need cheap, efficient dev tools and lots of respect.  Microsoft has always been the company to provide that.  Hopefully, with WP7, they will return to that persona and see how very timeless it is.

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time:

    #version 420

    subroutine vec2 sample_coord_type(int i);
    subroutine uniform sample_coord_type sample_coord;

    in vec2 texcoord;
    out vec3 color;

    uniform sampler2D tex;
    uniform int texture_size;

    const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}});
    const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}});

    subroutine(sample_coord_type) vec2 vertical_coord(int i) {
        return vec2(0.0, offsets[i] / texture_size);
    }

    subroutine(sample_coord_type) vec2 horizontal_coord(int i) {
        //return vec2(offsets[i] / texture_size, 0.0);
        return vec2(0.0, 0.0); // just for testing if this subroutine gets used
    }

    void main(void) {
        color = vec3(0.0);
        for (int i=0; i<{{NUM_SAMPLES}}; i++) {
            color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i];
            color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i];
        }
    }

Here is my code for selecting the subroutine:

    blur_program->start();
    blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER);
    blur_program->set_int("texture_size", width);
    blur_program->set_texture("tex", *deferred_output);
    blur_program->draw(); // draws a quad for the fragment shader to run on

and:

    void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) {
        GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
        GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str());
        glUniformSubroutinesuiv(target, 1, &routine_index);

        // debugging
        int num_subs;
        glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs);
        std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n";
    }

I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because vertical_coord subroutine is the first listed in the shader code...but no matter, maybe the compiler is switching things around). However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more is, whichever subroutine comes first in the shader, is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader blurs vertically always. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to re-iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?
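
For reference, here is a minimal C++ sketch (my own, not from the post) of how GL 4.x subroutine selection is usually structured: the index array passed to glUniformSubroutinesuiv must cover every active subroutine uniform location for the stage, indexed by subroutine uniform location, and it has to be re-issued while the program is bound because glUseProgram resets subroutine uniform state to defaults. The function name select_blur_direction and the surrounding setup are invented for the example.

    // Sketch only: assumes an OpenGL 4.x context and extension loader are already
    // set up, and that `program` is the linked blur program's GL object name.
    #include <GL/glew.h>
    #include <string>
    #include <vector>

    void select_blur_direction(GLuint program, const std::string& routine)
    {
        glUseProgram(program); // subroutine uniform state is reset by this call

        // One entry per active subroutine uniform location in the stage,
        // even if only one of them is being changed.
        GLint num_locations = 0;
        glGetProgramStageiv(program, GL_FRAGMENT_SHADER,
                            GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, &num_locations);
        std::vector<GLuint> indices(num_locations, 0);

        GLuint routine_index =
            glGetSubroutineIndex(program, GL_FRAGMENT_SHADER, routine.c_str());
        GLint uniform_location =
            glGetSubroutineUniformLocation(program, GL_FRAGMENT_SHADER, "sample_coord");

        if (routine_index != GL_INVALID_INDEX && uniform_location >= 0) {
            indices[uniform_location] = routine_index; // slot indexed by uniform location
        }

        // Must be issued while the program is bound, and repeated after every
        // glUseProgram, because the selection is not stored in the program object.
        glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, num_locations, indices.data());
    }

Because the selection is context state rather than program state, switching routines between draws means rebuilding and re-issuing this array each time the program is bound.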

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest.  I had already included money for pre-cons in this year’s training budget, but none of them really stood out to me, so even if the Red-Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year.  However, the topics being presented at the SQL in the City event were of great interest to me.  There promised to be good information on Continuous Integration and automated deployment of database changes, which lately has been a real hot topic at my work.  And indeed, Red-Gate announced the release of a new tool (still in Early Access Program…a.k.a. Beta) which is called the Deployment Manager.  Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS.  But, as I understand it, the primary focus of Deployment Manager is not to be the Build process (Red Gate uses JetBrains’ Team City for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments.  It looks promising and I’ve already downloaded the installer package to play with it later.

Overall, I was quite impressed with the SQL in the City event.  Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did.  And it is quite impressive the amount of money that Red Gate must have spent given that this was a no-charge event to attend, they had a very nice hot lunch, and the after-event drinks celebration.  Well done, folks!

Of course it was great to hear from a variety of speakers.  Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody).  By the way, if you have never seen either Brent or Buck speak, you really should.  Different styles, but both are very entertaining and educational at the same time.  I love Buck’s sense of humor (here’s a tip…don’t be late to Buck’s session or you’ll become part of the presentation) and I praise Brent’s slides.  Brent’s style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book) and I am impressed that he can make a technical presentation so engaging.

It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • Live Webcast for Skire Customers - 8 November

    - by Melissa Centurio Lopes
    Join our Important Customer Briefing live webcast with Oracle Executive Mike Sicilia to learn more about the product strategy for the combined Oracle Primavera and Skire offering. Mike Sicilia, Senior Vice President and General Manager, Oracle Primavera Global Business Unit, invites you to join him for an exclusive update on Oracle’s acquisition of substantially all of Skire’s assets. Don’t miss this special live webcast on November 8th. Attend this online event and listen to Mike Sicilia share with you:

- The strategic reasons behind Oracle’s acquisition of substantially all of Skire’s assets and what it means to you and your organization
- Oracle’s vision to deliver the most comprehensive Enterprise Project Portfolio Management (EPPM) offering to manage the complete project lifecycle, from capital planning and construction to operations and maintenance
- Exciting new product releases to help organizations manage their projects and facilities with more predictability and financial control, improving profitability and operational efficiency
- Oracle’s consistent commitment to customer success and product support

Save your seat: register now to attend this exclusive online event and learn how the combination of Oracle Primavera and Skire can help your organization succeed. For more information about the combination of Oracle and Skire, please visit oracle.com/skire

    Read the article

  • Upcoming presentations by me at Windows Azure Events

    - by ScottGu
    I recently blogged about a big wave of improvements we recently released for Windows Azure.  I also delivered a keynote on June 7th that discussed and demoed the enhancements – you can watch a recorded version of it online. Over the next few weeks I’ll be doing several more speaking events about Windows Azure in North America and Europe.  Below are details on some of the upcoming events and how you can sign up to attend one in person:

Scottsdale, Arizona on June 19th, 2012
Attend this FREE all-day event in Scottsdale, Arizona on Tuesday, June 19th to learn more about Windows Azure, ASP.NET, Web API and SignalR.  I’ll be doing a 2-hour presentation on Windows Azure, followed by Scott Hanselman on ASP.NET and Web API, and Brady Gaster on SignalR.  Learn more about the event and register to attend here.

Cambridge, United Kingdom on June 21st, 2012
Attend this FREE two-hour event in Cambridge (UK) the evening of Thursday, June 21st.  I’ll be covering the new Windows Azure release – expect lots of demos and audience participation. Learn more about the event and register to attend here.

London, United Kingdom on June 22nd, 2012
Attend the FREE all-day Microsoft Cloud Day conference in London (UK) on Friday, June 22nd to learn about Windows Azure and Windows 8.  I’ll be kicking off the event with a two-hour keynote, and will be followed by some other fantastic speakers. Learn more about the conference and register to attend here.

TechEd Europe in Amsterdam, Netherlands on June 26th, 2012
I’ll be at TechEd Europe this year where I’ll be presenting on Windows Azure.  I’ll be in the general session keynote and also have a foundation track session on Windows Azure on Tuesday, June 26th. Learn more about TechEd Europe and register to attend here.

Amsterdam, Netherlands on June 26th, 2012
Not attending TechEd Europe but near Amsterdam and still want to see me talk?  The good news is that the leaders of the Windows Azure User Group NL have set up a FREE event during the evening of Tuesday, June 26th where I’ll be presenting along with Clemens Vasters. Learn more about the event and register to attend here.

Dallas, Texas on July 10th, 2012
I’ll be in Dallas, Texas on Tuesday, July 10th and presenting at a FREE all-day Microsoft Cloud Summit.  I’ll kick off the day with a keynote, which will be followed by a great set of additional Windows Azure talks as well as a “Grill the Gu” Q&A session with me over lunch. Learn more about the event and register to attend here.

Additional Events
I’ll be doing many more events and talks in the months ahead – I’ll blog details of additional conferences/events I’m doing as they are fixed. Hope to see some of you at the above ones!

Scott

    Read the article
