Search Results

Search found 13000 results on 520 pages for 'member functions'.


  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing the cube data. Actions are used in cube browsers such as Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or maybe the cube itself. For example, a user might be able to see a Reporting Services report, open a web page, or drill through to detailed information related to the cube data. Analysis Services supports three types of actions: Report, Drill-through, and Standard. In this blog post, I will explain the Reporting action. The objective of this action is to return a report with details of the product where the sales amount is greater than 10,000 in the cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action.
    1.) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2.) Specify the name of the action and choose the target type as attribute members, since we have to create the action on the members of an attribute.
    3.) Specify the target object of your report action. The target object is the dimension or attribute on which you want the report to appear; in our case it is the product name.
    4.) Next you have to define the condition on which you want the report link to appear. This is an optional feature. In this example we specify a condition which checks whether the sales amount is greater than 10,000, so that the link appears only for those products where the defined condition is met.
    5.) Next you have to specify the server name on which the report is present, the report path, and the report format in which you want the report to appear.
    6.) Additionally you can specify the parameters. As with the conditional expression, the parameters should be valid MDX expressions. Each parameter name should be the same as the one defined in the report.
    7.) Deploy your solution after you are done specifying parameters and go to the cube browser.
    8.) Click on the Analyze in Excel button; this will open your cube in Excel.
    9.) Build an analysis which shows product names and their sales amounts.
    10.) Right-click on a product whose sales amount is greater than 10,000 and you will see the reporting action link.
    11.) Clicking on the link will take you to the URL of the report. I created this report using the Report Project wizard in SQL Server Data Tools.
    So this is how we can launch reports from a cube browser. Similarly you can open web pages, run applications, and perform a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting actions, in great detail. In my next blog post I will talk about drill-through actions. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

    Read the article

  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL-based) and have run across code from Pro Android Games 2 that looks as follows:

    /*
     * Copyright (C) 2007 Google Inc.
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     * http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    package opengl.scenes.cubes;

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.IntBuffer;

    import javax.microedition.khronos.opengles.GL10;

    public class Cube {

        public Cube() {
            int one = 0x10000;
            int vertices[] = {
                -one, -one, -one,
                 one, -one, -one,
                 one,  one, -one,
                -one,  one, -one,
                -one, -one,  one,
                 one, -one,  one,
                 one,  one,  one,
                -one,  one,  one,
            };

            int colors[] = {
                  0,   0,   0, one,
                one,   0,   0, one,
                one, one,   0, one,
                  0, one,   0, one,
                  0,   0, one, one,
                one,   0, one, one,
                one, one, one, one,
                  0, one, one, one,
            };

            byte indices[] = {
                0, 4, 5,  0, 5, 1,
                1, 5, 6,  1, 6, 2,
                2, 6, 7,  2, 7, 3,
                3, 7, 4,  3, 4, 0,
                4, 7, 6,  4, 6, 5,
                3, 0, 1,  3, 1, 2
            };

            // Buffers to be passed to gl*Pointer() functions must be
            // direct, i.e., they must be placed on the native heap
            // where the garbage collector cannot move them.
            //
            // Buffers with multi-byte datatypes (e.g., short, int, float)
            // must have their byte order set to native order.
            ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
            vbb.order(ByteOrder.nativeOrder());
            mVertexBuffer = vbb.asIntBuffer();
            mVertexBuffer.put(vertices);
            mVertexBuffer.position(0);

            ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length * 4);
            cbb.order(ByteOrder.nativeOrder());
            mColorBuffer = cbb.asIntBuffer();
            mColorBuffer.put(colors);
            mColorBuffer.position(0);

            mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
            mIndexBuffer.put(indices);
            mIndexBuffer.position(0);
        }

        public void draw(GL10 gl) {
            gl.glFrontFace(GL10.GL_CW);
            gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
            gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
            gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
        }

        private IntBuffer mVertexBuffer;
        private IntBuffer mColorBuffer;
        private ByteBuffer mIndexBuffer;
    }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?
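    From what I can tell, the GL10 interface used here doesn't define GL_POLYGON at all, so the closest thing I can picture is drawing each quad face as a triangle fan. Here is a rough sketch of what I mean (my own hypothetical addition to the Cube class above, not from the book):

    // Hypothetical sketch: draw one quad face of the cube above with
    // GL_TRIANGLE_FAN instead of indexed GL_TRIANGLES. OpenGL ES 1.x
    // (the GL10 interface) has no GL_POLYGON, so a fan over the four
    // corners of a face is the closest equivalent.
    public void drawFrontFaceAsFan(GL10 gl) {
        // Corners of the front face (vertices 4, 5, 6, 7 in the array
        // above), ordered around the face so the fan covers the quad.
        byte faceIndices[] = { 4, 5, 6, 7 };
        ByteBuffer fb = ByteBuffer.allocateDirect(faceIndices.length);
        fb.put(faceIndices);
        fb.position(0);

        gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
        gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
        // 4 indices -> 2 triangles sharing the first corner
        gl.glDrawElements(GL10.GL_TRIANGLE_FAN, 4, GL10.GL_UNSIGNED_BYTE, fb);
    }

    Doing that for all six faces would mean six draw calls instead of one, which is presumably why the book's single indexed GL_TRIANGLES call is the preferred form.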

    Read the article

  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it's way too big and includes way too many casters (shadows) which can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional info about the scene. I have read some of the related questions here but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly I can't use DX or OpenGL functions directly because this engine has a dedicated math library. From what I've read so far the steps are: transform the view frustum points into light space, find the min/max x and y values (or sometimes the minima and maxima of all three axes), and create an AABB using the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far:

    CVector3 Points[8], MinLight = CVector3(FLT_MAX), MaxLight = CVector3(-FLT_MAX);
    for(int i = 0; i < 8; ++i) {
        Points[i] = Points[i] * WorldToShadowMapMatrix;
        MinLight = Math::Min(Points[i], MinLight);
        MaxLight = Math::Max(Points[i], MaxLight);
    }
    AABox box(MinLight, MaxLight);

    I don't think this is the right way to do it. The near plane probably has to extend in the direction of the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx which also includes some sample code, but they seem to use the scene's AABB to determine the near and far planes, which I can't do, since I can't access this information from the function I'm working in. Could you guys please link some example code which shows the calculation of such a frustum? Thanks in advance! Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
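    To make the steps above concrete, here is a minimal sketch of the light-space AABB construction, written in plain Java purely for illustration (the engine's CVector3/AABox types and the exact matrix convention are stand-ins; a row-major world-to-light matrix built from the normalized light direction is assumed):

    // Hedged sketch of the light-space AABB step described above.
    final class LightFrustumSketch {

        // Transform a point by a row-major 4x4 matrix (w assumed to be 1).
        static float[] transform(float[] m, float[] p) {
            float[] r = new float[3];
            for (int i = 0; i < 3; i++) {
                r[i] = m[i * 4] * p[0] + m[i * 4 + 1] * p[1]
                     + m[i * 4 + 2] * p[2] + m[i * 4 + 3];
            }
            return r;
        }

        // Min/max corners of the light-space box over the 8 world-space
        // view-frustum points; the near side is then pulled back toward
        // the light so casters outside the view frustum still land inside.
        static float[][] lightSpaceBounds(float[][] frustumWorld,
                                          float[] worldToLight,
                                          float casterPullback) {
            float[] min = { Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE };
            float[] max = { -Float.MAX_VALUE, -Float.MAX_VALUE, -Float.MAX_VALUE };
            for (float[] p : frustumWorld) {
                float[] q = transform(worldToLight, p);
                for (int i = 0; i < 3; i++) {
                    min[i] = Math.min(min[i], q[i]);
                    max[i] = Math.max(max[i], q[i]);
                }
            }
            // Extend the box toward the light (here: along -z by convention).
            min[2] -= casterPullback;
            return new float[][] { min, max };
        }
    }

    If this is right, the answer to "what comes after" would presumably be: the box never needs to be moved back to world space as a box; the min/max values become the bounds of an orthographic projection in light space, and the combination of that projection with the world-to-light matrix is the light frustum (the inverse of the world-to-light matrix also yields the box corners in world space if they are needed for debugging).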

    Read the article

  • T-SQL Tuesday #025 – CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author, is hosting this month's T-SQL Tuesday on sharing SQL Server tips and tricks. And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them. You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted. Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway.

    Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, length of strings, allowable characters and such. CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions.

    One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables. The idea was to use a CHECK constraint in a way that didn't depend on the actual data:

    -- create a table that cannot accept data
    CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY,
      CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1))
    INSERT dbo.JustTryIt VALUES(1)

    I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false and therefore will prevent any data from ever being inserted. I can't remember why I used this example, but it was for some vague and esoteric purpose that applies to about, maybe, zero people. I come up with a lot of examples like that.

    However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you can come up with a few that might be useful. I'll let the names describe what they do instead of explaining them all:

    CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa'))
    CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0))
    CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL))
    CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0))
    CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0))
    CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16))
    CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1))
    GO

    -- log in as sa or a sysadmin server role member, and try this:
    INSERT NoSA VALUES(1)
    INSERT NoSysAdmin VALUES(1)
    -- note the difference when using sa vs. non-sa
    -- then try it again with a non-sysadmin login

    -- see if this works:
    INSERT NoAdHoc VALUES(1)
    INSERT NoAdHoc2 VALUES(1)
    GO

    -- then try this:
    CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS
    SET NOCOUNT ON;
    INSERT NoAdHoc VALUES(@val1)
    INSERT NoAdHoc2 VALUES(@val2)
    GO
    EXEC NotAdHoc 2,2
    -- which values got inserted?
    SELECT * FROM NoAdHoc
    SELECT * FROM NoAdHoc2

    -- and this one just makes me happy :)
    INSERT NoCursors VALUES(1)
    DECLARE curs CURSOR FOR SELECT 1
    OPEN curs
    INSERT NoCursors VALUES(2)
    CLOSE curs
    DEALLOCATE curs
    INSERT NoCursors VALUES(3)
    SELECT * FROM NoCursors

    I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own; I think you get the idea. (Also take a look at the NoCursors example; notice anything interesting?) The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects. I'm sure you can see how and why this would come up when discussing Revenge: The SQL. ;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices.

    All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things. There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay

    Read the article

  • Cutting-Edge Demos Coming to Collaborate12

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience. Are you building your Collaborate 2012 agenda? Leave room for a stop at the demogrounds while you're in Las Vegas from April 22-26. In addition to several presentations on the Oracle user experience, the Applications User Experience (UX) team will be on the demo grounds with a new eye-tracking tool, as well as demos that showcase new user experience designs. Check out our cutting-edge technology, which we use to obtain feedback that helps improve the user experience of Oracle applications, and see what our next-generation designs are in the HCM and FIN user experiences.

    Photo by Martin Taylor – Oracle Applications User Experience. An Apps UX team member demonstrates what happens during an eye-tracking test. The dots on the screen show where test participants were looking and how long they spent at each point on the page.

    The UX team will also be staffing an on-site lab at Collaborate. At on-site labs, conference participants can sign up to join customer feedback sessions on several different kinds of workflow designs, from HCM to FIN to CRM to mobile. The feedback UX team members collect helps inform and fine-tune the user experiences being designed for next-generation applications. At Collaborate12, for example, user experience designs around Help and organizational charts will be tested for usability. The Apps UX team brings on-site labs to many major user group conferences, including OpenWorld 2012 in October in San Francisco. Stay tuned to find out when our recruiters are ready to sign up participants, or leave a comment below to find out whether an on-site lab will be at your next conference.

    For information on the following presentations, which will be delivered by Apps UX team members, check the Usable Apps Events page.
    • The Fusion Applications User Experience: Transforming Work into Insight
    • Customizations Under the Covers – Making Fusion Applications Your Own
    • OAUG Fusion Middleware SIG (FMWSIG)
    • 18 Months with Fusion Applications – Stories From The Trenches
    • PeopleTools Tips and Techniques

    Read the article

  • Silverlight Cream for December 08, 2010 -- #1005

    - by Dave Campbell
    In this Issue: Peter Kuhn, David Anson, Jesse Liberty, Mike Taulty(-2-, -3-), Kunal Chowdhury, Jeremy Likness, Martin Krüger, Beth Massi(-2-, -3-).
    Above the Fold:
    Silverlight: "Rebuilding the PDC 2010 Silverlight Application (Part 1)" Mike Taulty
    WP7: "WP7: Glossy text block custom control" Martin Krüger
    LightSwitch: "How to Create a Screen with Multiple Search Parameters in LightSwitch" Beth Massi
    From SilverlightCream.com:
    Requirements of and pitfalls in Windows Phone 7 serialization. Peter Kuhn discusses Data Contract Serializer issues on WP7 and how to work around them.
    Managed implementation of CRC32 and MD5 algorithms updated; new release of ComputeFileHashes for Silverlight, WPF, and the command-line! David Anson ties up some loose ends from a prior post on hash functions, and updates his CRC32 and MD5 algorithms.
    Windows Phone From Scratch #9 – Visual State. Jesse Liberty's latest Windows Phone From Scratch tutorial is up... it's on Visual State... he extends a Button and codes up the state transitions.
    Rebuilding the PDC 2010 Silverlight Application (Part 1). Mike Taulty has taken the time to rebuild the PDC 2010 Silverlight app that folks wanted the source for... and he's taking multiple posts to explain the heck out of it. This first one is mostly infrastructure.
    Rebuilding the PDC 2010 Silverlight Application (Part 2). In the 2nd post of the series, Mike Taulty is handling the in/out-of-browser business because he eventually is going OOB.
    Rebuilding the PDC 2010 Silverlight Application (Part 3). Part 3 finds Mike Taulty delving into WCF Data Services and getting some data on the screen.
    Paginating Records in Silverlight DataGrid using PagedCollectionView. Kunal Chowdhury continues his investigation of the PagedCollectionView with this post on pagination of your data.
    Old School Silverlight Effects. If you haven't seen Jeremy Likness' 'Old School' effects page yet, go just for the entertainment... you'll find yourself hanging around for the code :)
    WP7: Glossy text block custom control. Martin Krüger's latest post is a very cool custom control for WP7 that displays glossy text... it ain't Metro, but it looks pretty nice... some of it almost like etched text.
    How to Create a Screen with Multiple Search Parameters in LightSwitch. Looks like Beth Massi got a few LightSwitch posts in while I wasn't looking. First up is this one on a multiple-parameter search screen.
    Adding Static Images and Logos to LightSwitch Applications. In the 2nd post, Beth Massi shows how to add your own static images and logos to LightSwitch apps... in response to reader questions.
    Getting the Most out of LightSwitch Summary Properties. In her latest post, Beth Massi explains what Summary Properties are in LightSwitch and how to use them to get the best results for your users.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

    public Ray GetPickRay() {
        int mouseX = Mouse.getX();
        int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
        float windowWidth = WORLD.Byte56Game.getWidth();
        float windowHeight = WORLD.Byte56Game.getHeight();

        // get the mouse position in screen-space coords
        double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
        double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
        double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
        screenSpaceX = screenSpaceX * viewRatio;
        screenSpaceY = screenSpaceY * viewRatio;

        // find the far and near camera-space points
        Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane),
                (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
        Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane),
                (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

        // unproject the 2D window into 3D to see where in 3D we're actually clicking
        Matrix4f tmpView = new Matrix4f(view);
        Matrix4f invView = (Matrix4f) tmpView.invert();
        Vector4f worldSpaceNear = new Vector4f();
        Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
        Vector4f worldSpaceFar = new Vector4f();
        Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

        // calculate the ray position and direction
        Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
        Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x,
                worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
        rayDirection.normalise();
        return new Ray(rayPosition, rayDirection);
    }

    All rights reserved to him, of course. This is my DirectX 11 code:

    void GraphicEngine::pickRayVector(float mouseX, float mouseY,
                                      XMVECTOR& pickRayInWorldSpacePos,
                                      XMVECTOR& pickRayInWorldSpaceDir)
    {
        float PRVecX, PRVecY;
        float nearPlane = 0.1f;
        float farPlane = 200.0f;
        float viewAngle = 0.4f * 3.14f;

        PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan(viewAngle / 2);
        PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan(viewAngle / 2);

        XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
        XMVECTOR cameraSpaceFar  = XMVectorSet(PRVecX * farPlane,  PRVecY * farPlane,  -farPlane,  1.0f);

        // Transform the 3D ray from view space to world space.
        // The inverse of the view matrix is the world matrix.
        XMMATRIX invMat;
        XMVECTOR matInvDeter;
        invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView());

        XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
        XMVECTOR worldSpaceFar  = XMVector3TransformCoord(cameraSpaceFar, invMat);

        pickRayInWorldSpacePos = worldSpaceNear;
        pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
        pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
    }

    A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them; to my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game. Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note: my code is in C++.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson. Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.
    SOA, Cloud & Service Technology Symposium 2012 London – Special Oracle Discount. Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well represented in the session schedule. And now you can save on registration with a special Oracle discount code.
    Progress 4GL and DB to Oracle and cloud | Tom Laszewski. "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."
    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget. TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.
    Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache | Ricardo Ferreira. Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.
    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena. The latest chapter in Oracle ACE Yannick Ongena's ongoing series.
    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai. Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.
    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift – And Why It Isn't. This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.
    Worst Practices for Big Data | Dain Hansen. Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.
    Free Virtual Developer Day – Oracle Fusion Development | Grant Ronald. "The online conference will include seminars, hands-on labs and live chats with our technical staff, including me!" says Grant Ronald. "And the best bit: it doesn't cost you a single penny. It's free and available right on your desktop."
    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch. Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.
    Amazon Web Services (AWS) Autoscaling | Frank Munz. "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.
    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin. "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.
    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable. Explaining what the "Big" in Big Data really means, and it's more than a little mind-boggling.
    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article. There's more to say than fits into a commentary.

    Mocks and TDD
    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks. Because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”
    gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse
    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve
    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:
    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation
    Translation is just a look-up. Finding, though, needs some calculation:
    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors
    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check whether an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check whether an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check whether a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check whether an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement
    First I write a test for the acceptance test cases. It's red because there's no implementation, not even the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in. Because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible – right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality-wise.

    Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion
    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: First, encapsulation of the implementation details was not compromised. Naturally, private methods could stay private. I did not need to make them internal or public just to be able to test them. Second, I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
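    The article's code appears as screenshots in the original post and isn't reproduced in this excerpt, so here is a rough sketch of the described design, rendered in Java purely for illustration (the original is C#/.NET; the class and method names follow the article's description, and everything else is an assumption):

    // Hedged Java rendition of the described design: toRoman() composed of
    // findFactors -> translate -> compile, with the factor map pulled out
    // into a class constant, as in the article's cleanup step.
    import java.util.ArrayList;
    import java.util.List;

    class ArabicRomanConversions {
        // The roman "digits" (factors) in descending order, as listed above.
        private static final int[] FACTORS = { 1000, 900, 500, 400, 100, 90,
                                               50, 40, 10, 9, 5, 4, 1 };
        private static final String[] DIGITS = { "M", "CM", "D", "CD", "C", "XC",
                                                 "L", "XL", "X", "IX", "V", "IV", "I" };

        public static String toRoman(int arabic) {
            return compile(translate(findFactors(arabic)));
        }

        // Find the highest remaining factor fitting in the value,
        // subtract it, and repeat with the remaining value.
        private static List<Integer> findFactors(int value) {
            List<Integer> factors = new ArrayList<>();
            int i = 0;
            while (value > 0) {
                if (FACTORS[i] <= value) {
                    factors.add(FACTORS[i]);
                    value -= FACTORS[i];
                } else {
                    i++; // move on to the next smaller factor
                }
            }
            return factors;
        }

        // Translation is just a look-up of each factor's roman "digit".
        private static List<String> translate(List<Integer> factors) {
            List<String> digits = new ArrayList<>();
            for (int f : factors) {
                for (int i = 0; i < FACTORS.length; i++) {
                    if (FACTORS[i] == f) {
                        digits.add(DIGITS[i]);
                        break;
                    }
                }
            }
            return digits;
        }

        // Compilation just concatenates the translated "digits".
        private static String compile(List<String> digits) {
            return String.join("", digits);
        }
    }

    With this sketch, toRoman(1954) walks the factors 1000, 900, 50, 4 and yields "MCMLIV", matching the example worked through in the solution phase above.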

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij. Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij's overall strategies for both the business development and marketing departments.

    Companies looking for new ERP systems spend so much time comparing the features and functions of software products but too often shortchange the value of their own processes. Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster. When asked how, the answer is often "by implementing best practices". But the term 'best practices' is frequently used to mean 'doing things the way everyone else does them' rather than a starting point or benchmark to build upon by adding your own value.

    Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies. But not all companies are alike. Do you ever tell your customers "We are just like our competition and have no competitive differentiation"? Probably not. So why should the implementation of your business processes be just like your competitor's? Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business. These unique aspects, the competitive differentiators that companies use to thrive in a crowded marketplace, can and should be supported by the implementation of business systems like ERP.

    Modern ERP systems like Oracle's JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company's core operations. But how can a company take advantage of this footprint without blowing up its implementation budget? Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with 'best practices'. Too often what they are really saying is that you will have to abandon your key operational differentiators to fit the vendor's template for your business, or extend your implementation and postpone the realization of any benefits.

    Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators. Oracle Accelerate Solutions speed the time it takes to implement a JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We've been a JD Edwards implementation partner since 1986, and we now leverage Oracle Business Accelerators, cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes like many template or "clone and go" approaches do. You retain the ability to reconfigure your applications, without customization, as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings. And if 'better and faster' is your goal for your business performance, shouldn't it be the goal for your ERP implementation as well?

    Established in 1986, brij has been dedicated solely to helping its customers implement Oracle's JD Edwards solutions and to maximizing the value of those customers' IT investments. They are a Gold-level member of Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • OTN ArchBeat Top 10 for September 2012

    - by Bob Rhubart
    The results are in... Listed below are the Top 10 most popular items shared via the OTN ArchBeat Facebook page for the month of September 2012.
    The Real Architects of Los Angeles – OTN Architect Day – Oct 25. No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday, October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.
    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients. "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.
    Oracle 11gR2 RAC on Software Defined Network (SDN) (OpenvSwitch, Floodlight, Beacon) | Gilbert Stan. "The SDN [software defined network] idea is to separate the control plane and the data plane in networking and to virtualize networking the same way we have virtualized servers," explains Gil Standen. "This is an idea whose time has come because VMs and vmotion have created all kinds of problems with how to tell networking equipment that a VM has moved and to preserve connectivity to VPN end points, preserve IP, etc." H/T to Oracle ACE Director Tim Hall for the recommendation.
    Process Oracle OER Events using a simple Web Service | Bob Webster. Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen. "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g: How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"
    OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar. Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Manager, Oracle Access Manager, and Oracle Adaptive Access Manager.
    Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena. Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.
    Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson. "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer. In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    WebCenter Content (WCC) Trace Sections | ECM Architect. ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) trace sections.
    Thought for the Day
    "Eventually everything connects - people, ideas, objects. The quality of the connections is the key to quality per se." — Charles Eames (June 17, 1907 – August 21, 1978)
    Source: SoftwareQuotes.com

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager's extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager's Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5's BIG-IP plug-in and Entuity's Eye of the Storm Network Management plug-in. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented. Two very recent examples of partners with beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp storage plug-in.

    VMware vSphere Plug-in by Blue Medora
    Blue Medora, an Oracle PartnerNetwork (OPN) Gold member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the host, VM, cluster, and resource pool levels. It has minimal performance impact via an "agentless" approach that requires no installation directly on VMware servers. It has discovery capabilities for VMware datacenters, ESX hosts, clusters, virtual machines, and datastores. It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics. It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host, and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5, and 5.0. Supported platforms include Linux 64-bit, Windows, AIX, and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab.

    NetApp Storage Plug-in
    NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems with Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond to faults and I/O performance bottlenecks quickly. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility
    More plug-ins from other partners are soon to come, and I'll be reporting on them here. To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area. The site also lists the plug-ins available with information on how to obtain them. More info about the Oracle Validated Integration program can be found in the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • Strengthening code with possibly useless exception handling

    - by rdurand
    Is it good practice to implement seemingly useless exception handling, just in case another part of the code is not written correctly?

    Basic example
    A simple one, so I don't lose everybody :). Let's say I'm writing an app that will display a person's information (name, address, etc.), the data being extracted from a database. Let's say I'm the one coding the UI part, and someone else is writing the DB query code. Now imagine that the specifications of your app say that if the person's information is incomplete (let's say the name is missing in the database), the person coding the query should handle this by returning "NA" for the missing field. What if the query is poorly coded and doesn't handle this case? What if the guy who wrote the query hands you an incomplete result, and when you try to display the information, everything crashes because your code isn't prepared to display empty values? This example is very basic. I believe most of you will say "it's not your problem, you're not responsible for this crash". But it's still your part of the code which is crashing.

    Another example
    Let's say now I'm the one writing the query. The specifications don't say the same as above, but that the guy writing the "insert" query should make sure all the fields are complete when adding a person to the database, to avoid inserting incomplete information. Should I protect my "select" query to make sure I give the UI guy complete information?

    The questions
    What if the specifications don't explicitly say "this guy is the one in charge of handling this situation"? What if a third person implements another query (similar to the first one, but on another DB) and uses your UI code to display it, but doesn't handle this case in his code? Should I do what's necessary to prevent a possible crash, even if I'm not the one supposed to handle the bad case? I'm not looking for an answer like "(s)he's the one responsible for the crash", as I'm not solving a conflict here. I'd like to know: should I protect my code against situations it's not my responsibility to handle? Here, a simple "if empty do something" would suffice. In general, this question tackles redundant exception handling. I'm asking it because when I work alone on a project, I may code a similar exception handling 2-3 times in successive functions, "just in case" I did something wrong and let a bad case come through.
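    To make the "if empty do something" guard concrete, here is a small Java sketch of the defensive check being discussed (the Person class and its fields are invented for illustration):

    // Hypothetical example of the defensive "if empty do something" guard:
    // the UI returns "NA" for missing fields instead of crashing, even
    // though the query layer is supposed to guarantee completeness.
    final class PersonView {
        private static final String NA = "NA";

        static String displayName(Person p) {
            if (p == null || p.name == null || p.name.isEmpty()) {
                return NA; // redundant with the query layer's contract, by design
            }
            return p.name;
        }

        static final class Person {
            String name;
            String address;
        }
    }

    The cost of the redundant check is a couple of lines; the cost of omitting it is a crash in your part of the code whenever someone else's contract is broken.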

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 9-15, 2012

    - by Bob Rhubart
The Top 10 most-viewed items shared on the OTN ArchBeat Facebook page for the week of September 9-15, 2012. 15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Attend OTN Architect Day – by Architects, for Architects – October 25 You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: Zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles: Thursday, October 25, 2012, 8:00 a.m. – 5:00 p.m., Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions—inside and outside the enterprise." Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle IAM 11g R2 docs are now available "One of the great things about the new doc set is the inclusion of ePub files," says Fusion Middleware A-Team blogger Chris Johnson. "This means that if you have an iPad you can load up the doc library onto that and read the docs on the couch." Setting up a local Yum Server using the Exalogic ZFS Storage Appliance | Donald A concise technical post from the man named Donald. What's New in Oracle VM VirtualBox 4.2? | The Fat Bloke Sings "One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more vm's," says The Fat Bloke. "Some of our users have large libraries of vm's of various vintages, whilst others have groups of vm's that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends." The new VirtualBox release, a year in the making, addresses the needs of these users, he explains. Configuring Oracle Business Intelligence 11g MDS XML Source Control Management with Git Version Control | Christian Screen Oracle ACE Christian Screen developed this tutorial for those interested in learning how to configure the Oracle Business Intelligence 11g (11.1.1.6) metadata repository for development using the new MDS XML source control management functionality. Identity and Access Management at Oracle OpenWorld 2012 | Brian Eidelman Fusion Middleware A-Team blogger Brian Eidelman highlights three Oracle OpenWorld sessions that will put Identity and Access Management in the spotlight, and shares a link to the "Focus On: Identity Management" document, a comprehensive listing of OpenWorld activities also dealing with IM. 
Starting and stopping WebLogic automatically using Upstart | Chris Johnson "In Ubuntu, RedHat and Oracle Linux there's a new flavor of init called Upstart that all the kids are using," says Oracle Fusion Middleware A-Team member Chris Johnson. "It's the new hotness when it comes to making programs into daemons and wiring them to start and stop at appropriate times." Thought for the Day "The purpose of software engineering is to control complexity, not to create it." — Pamela Zave Source: SoftwareQuotes.com

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
Registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps, August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, as in the past, the conference will be booked out soon! If you can't make it to Lisbon, attend our free on-demand SOA Suite 11c Bootcamp or the Managing the Complexity of IoT online training. With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series, we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going? Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet, use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get new IoT presentations for Location Based Offers for Banking & Whitepaper, plus an online Webcast & Utility presentation. In this newsletter you will find many articles about OSB: OSB 11g – A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite. The highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It's simple: Mobile BPM is Not An Option. It's a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules. In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction, so it is worth spending some time reading the Simplified User Experience Design Patterns eBook. A great blueprint for your project! See you in Lisbon! To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Get to Know a Candidate (8 of 25): Rocky Anderson–Justice Party

    - by Brian Lanham
DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia.

Ross Carl “Rocky” Anderson served two terms as the 33rd mayor of Salt Lake City, Utah, between 2000 and 2008.  He is the Executive Director of High Road for Human Rights.  Prior to serving as Mayor, he practiced law for 21 years in Salt Lake City, during which time he was listed in Best Lawyers in America, was rated A-V (highest rating) by Martindale-Hubbell, served as Chair of the Utah State Bar Litigation Section and was Editor-in-Chief of, and a contributor to, Voir Dire legal journal.

As mayor, Anderson rose to nationwide prominence as a champion of several national and international causes, including climate protection, immigration reform, restorative criminal justice, LGBT rights, and an end to the "war on drugs". Before and after the U.S. invasion of Iraq in 2003, Anderson was a leading opponent of the invasion and occupation of Iraq and related human rights abuses. Anderson was the only mayor of a major U.S. city who advocated for the impeachment of President George W. Bush, which he did in many venues throughout the United States.

Anderson's work and advocacy led to local, national, and international recognition in numerous spheres, including being named by Business Week as one of the top twenty activists in the world on climate change, serving on the Newsweek Global Environmental Leadership Advisory Board, and being recognized by the Human Rights Campaign as one of the top ten straight advocates in the United States for LGBT equality. He has also received numerous awards for his work, including the EPA Climate Protection Award, the Sierra Club Distinguished Service Award, the Respect the Earth Planet Defender Award, the National Association of Hispanic Publications Presidential Award, The Drug Policy Alliance Richard J. Dennis Drugpeace Award, the Progressive Democrats of America Spine Award, the League of United Latin American Citizens Profile in Courage Award, the Bill of Rights Defense Committee Patriot Award, the Code Pink (Salt Lake City) Pink Star honor, the Morehouse University Gandhi, King, Ikeda Award, and the World Leadership Award for environmental programs.

Formerly a member of the Democratic Party, Anderson expressed his disappointment with that Party in 2011, stating, “The Constitution has been eviscerated while Democrats have stood by with nary a whimper. It is a gutless, unprincipled party, bought and paid for by the same interests that buy and pay for the Republican Party." Anderson announced his intention to run for President in 2012 as a candidate for the newly-formed Justice Party. Although founded by Rocky Anderson of Utah, the Justice Party was first recognized by Mississippi and describes itself as advocating economic justice through measures such as green jobs and a right to organize, environmental justice through enforcing employee safeguards in trade agreements, and social and civic justice through universal health care. In its first press release, the Utah Justice Party set forth its goals for justice in the economic, environmental, social and civic realms, along with a call to rid the corrupting influence of big money from government, to reverse the erosion of rights guaranteed by the Constitution, and to stop draining American resources to support illegal wars of aggression. 
Its press release says its grassroots supporters believe that now is the time for all to "shed their skeptical view that their voices don't matter", that "our 2-party system is a 'duopoly' controlled by the same corporate and military interests", and that the people must act to ensure "that our nation will achieve a brighter, sustainable future.” Anderson has ballot access in CO, CT, FL, ID, LA, MI, MN, MS, NJ, NM, OR, RI, TN, UT, VT, and WA (152 electoral votes) and has write-in access in AL, AK, DE, GA, IL, IA, KS, MD, MO, NE, NH, NY, PA, and TX. Learn more about Rocky Anderson and the Justice Party on Wikipedia.

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards on this monthly column. Author: Phil Hunt

On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's ‘gotofail’ bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others. Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF.

Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of “Internet Hardening” was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response of ‘Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions, each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

Since the meeting, the IETF has followed through with the recent publication of a new “best practices” document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical issues. Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise. The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM. 
The draft goes on to further qualify what it means by “attack”, clarifying that: The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.  The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: the implementations of security and protocol specifications have to be of high quality and backed by superior testing. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices. 

    Read the article

  • TypeScript first impressions

    - by Bertrand Le Roy
Anders published a video of his new project today, which aims at creating a superset of JavaScript that compiles down to regular, current JavaScript. Anders is a tremendously clever guy, and it always shows in his work. There is much to like in the enterprise (good code completion, refactoring, and adoption of the module pattern instead of namespaces, to name three), but a few things made me raise an eyebrow. First, there is no mention of CoffeeScript or Dart, but he does talk briefly about Script# and GWT. This is probably because the target audience seems to be the same as the audience for the latter two, i.e. developers who are more comfortable with statically-typed languages such as C# and Java than dynamic languages such as JavaScript. I don't think he's aiming at JavaScript developers. Classes and interfaces, although well executed, are not especially appealing. Second, as with any code generation tool (and this is true of CoffeeScript as well), you'd better like the generated code. I didn't, unfortunately. The code that I saw is not the code I would have written. What's more, I didn't always find the TypeScript code especially more expressive than what it gets compiled to. I also have a few questions. Is it possible to duck-type interfaces? For example, if I have an IPoint2D interface with x and y coordinates, can I pass any object that has x and y into a function that expects IPoint2D, or do I necessarily need to create a class that implements that interface and new up an instance that explicitly declares its contract? The appeal of dynamic languages is the ability to make objects as you go. This needs to be kept intact. More technical: why are generated variables and functions prefixed with _ rather than the $ that the ECMAScript spec recommends for machine-generated variables? In conclusion, while this is a good contribution to the set of ideas around JavaScript evolution, I don't expect a lot of adoption outside of devoted Microsoft developers, but maybe some influence on the language itself. But I'm often wrong. I would certainly not use it because I disagree with the central motivation for doing this: Anders explicitly says he built this because "writing application-scale JavaScript is hard". I would restate that "writing application-scale JavaScript is hard for people who are used to statically-typed languages". The community has built a set of good practices over the last few years that do scale quite well, and many people are successfully developing and maintaining impressive applications directly in JavaScript. You can play with TypeScript here: http://www.typescriptlang.org
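
As it turns out, the duck-typing question has a clear answer: TypeScript's type system is structural, so any object with the right members satisfies an interface, with no implements clause and no class required. A quick sketch, using the IPoint2D name from the question (distance is a made-up function for illustration):

interface IPoint2D {
    x: number;
    y: number;
}

function distance(a: IPoint2D, b: IPoint2D): number {
    return Math.hypot(a.x - b.x, a.y - b.y);
}

// No class, no "implements" clause: plain object literals with x and y
// are structurally compatible with IPoint2D.
const origin = { x: 0, y: 0 };
const p = { x: 3, y: 4 };
console.log(distance(origin, p)); // 5

So the ability to make objects as you go is, at least for interfaces, kept intact.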

    Read the article

  • New features in TFS Demo Setup 1.0.0.2

    - by Tarun Arora
Release Notes – http://tfsdemosetup.codeplex.com/ | Download | Source Code | Report a Bug | Ideas

Just pushed out the 2nd release of the TFS Demo Setup on CodePlex; below is a quick look at some of the new features and improvements in the tool… Details of the existing features can be found here.

Feature 1 – Set up Work Item Queries as Team Favorites
The task board looks cooler when the team's favourite work item queries show up on it. The demo setup console application now has the ability to set up the work item queries as team favorites for you. If you want to see how you can add Team Favorites programmatically, refer to this blogpost here.

Image 1 – Task board without Team Favorites

Let's see how the TFS Demo Setup application sets up team favorites as part of the run… Open up DemoDictionary.xml and you should be able to see the new node <TeamFavorites>, which accepts multiple <TeamFavorite> elements. You simply need to specify the <Type> as Query and, in the <Name>, specify the name of the work item query that you would like added as a favorite.

Image 2 – Highlighting the TeamFavorites block in DemoDictionary.xml

So, when the demo setup application is run with the above config, the work item queries "Blocked Tasks" and "Open Impediments" are added as team favorites. They then show up on the task board, as highlighted in the screenshot below.

Image 3 – Team Favorites setup during the TFS demo setup app execution

Feature 2 – Choose what you want to set up and exclude the rest
I had a great feature request come in asking for the ability to exclude parts of the setup at the sole discretion of the executioner. To accommodate this, I have added an attribute to each block: the attribute "Run" accepts "true" or "false". If you set the flag to true, then at the time of execution that block will be considered for setup; if you set the flag to false, the block will be ignored during the setup. So, let's look at an example below… The attribute "Run" is set to true for TeamSettings, TeamFavorites, TeamMembers and WorkItems, so all of these will be set up as part of the demo setup application execution.

Image 4 – New attribute Run added to all blocks in DemoDictionary.xml

If I did not want to recreate the team and did not want to add new work items, but only wanted to add favorites and team members to the existing team "AgileChamps1", then I could simply run the application with the below DemoDictionary.xml. Note – TeamSettings Run="false" and WorkItems Run="false".

Image 5 – TeamFavorites and TeamMembers set to true and others set to false

Feature 3 – Usability improvement
If you try to assign a work item to a team member that does not exist, the application throws a nasty exception. This behaviour has now been changed: upon adding such a work item, the work item will be created but not assigned to any user. The work item id will be printed to the console, making it simple for you to assign the work item manually. As you can see in the screenshot below, I am trying to assign work items to a user "Tarun" and a user "v2", both of which are *not valid users in my team project collection*, so the tool creates the work items, gives me the work item ids, and lets me know that since the user is invalid the work item could not be assigned to that user. Better user experience!

Image 6 – Behaviour if a work item is assigned to invalid users in the team project

That's about it for the current release. I have some new features planned for the next release. 
Meanwhile, if you have any ideas or comments, please feel free to leave them below. Stay tuned for more… Enjoy! Other posts on TFS Demo Setup can be found here.
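
As a footnote, here is a hedged sketch of what a DemoDictionary.xml combining both features might look like. The element names and overall shape are inferred from the screenshots described above, so treat this as illustrative rather than authoritative:

<Demo>
    <!-- Run="false": skip recreating the team -->
    <TeamSettings Run="false">...</TeamSettings>
    <!-- Run="true": set up work item queries as team favorites -->
    <TeamFavorites Run="true">
        <TeamFavorite>
            <Type>Query</Type>
            <Name>Blocked Tasks</Name>
        </TeamFavorite>
        <TeamFavorite>
            <Type>Query</Type>
            <Name>Open Impediments</Name>
        </TeamFavorite>
    </TeamFavorites>
    <TeamMembers Run="true">...</TeamMembers>
    <!-- Run="false": don't add new work items -->
    <WorkItems Run="false">...</WorkItems>
</Demo>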

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
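
For contrast, the parameterized version of that query is a one-liner in most stacks. Their code is PHP, but here is a minimal sketch in TypeScript using the Node.js mysql2 library purely for illustration; the connection details are placeholders, and the table and column names come from the sanitized snippet above:

import mysql from "mysql2/promise";

async function getProfile(profileId: string) {
    // Placeholder connection settings, for illustration only.
    const conn = await mysql.createConnection({
        host: "localhost",
        user: "app",
        database: "appdb",
    });
    try {
        // The driver sends the value separately from the SQL text, so a
        // malicious profile_id cannot change the structure of the query.
        const [rows] = await conn.execute(
            "select * from profile where profile_id = ?",
            [profileId]
        );
        return rows;
    } finally {
        await conn.end();
    }
}

The PHP equivalent with PDO prepared statements is just as short, which is part of what makes the "conceptual vs. practical" defense so weak: the correct version costs almost nothing extra.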

    Read the article

  • Your interesting code tricks/conventions? [closed]

    - by Paul
What interesting conventions, rules, or tricks do you use in your code? Preferably some that are not so popular, so that the rest of us would find them novel. :) Here are some of mine...

Input and output parameters
This applies to C++ and other languages that have both references and pointers. This is the convention: input parameters are always passed by value or const reference; output parameters are always passed by pointer. This way I'm able to see at a glance, directly from the function call, which parameters might get modified by the function. Inspiration: old C code.

int a = 6, b = 7, sum = 0;
calculateSum(a, b, &sum);

Ordering of headers
My typical source file begins like this (see the code below). The reason I put the matching header first is that, in case the header is not self-sufficient (I forgot to include some necessary library, or forgot to forward declare some type or function), a compiler error will occur.

// Matching header
#include "example.h"

// Standard libraries
#include <string>
...

Setter functions
Sometimes I find that I need to set multiple properties of an object all at once (like when I've just constructed it and need to initialize it). To reduce the amount of typing and, in some cases, improve readability, I decided to make my setters chainable. Inspiration: the Builder pattern.

class Employee {
public:
    Employee& name(const std::string& name);
    Employee& salary(double salary);

private:
    std::string name_;
    double salary_;
};

Employee bob;
bob.name("William Smith").salary(500.00);

Maybe in this particular case it could just as well have been done in the constructor. But for Real World™ applications, classes would have many more fields that should be set to appropriate values, and it becomes unmaintainable to do it all in the constructor. So what about you? What personal tips and tricks would you like to share?
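
As a side note, the chainable-setter trick ports directly to other languages; the only requirement is that each setter return the object itself. A hedged TypeScript sketch mirroring the C++ Employee above (names invented to match the example):

class Employee {
    private name_ = "";
    private salary_ = 0;

    // Each setter returns this, so calls can be chained.
    name(name: string): this {
        this.name_ = name;
        return this;
    }

    salary(salary: number): this {
        this.salary_ = salary;
        return this;
    }
}

const bob = new Employee().name("William Smith").salary(500.0);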

    Read the article

  • Helper method to Replace/Remove characters that do not match the Regular Expression

    - by Michael Freidgeim
I have a few fields that use regex for validation. In case a provided field has unaccepted characters, I don't want to reject the whole field, as most validators do, but just remove the invalid characters. I am expecting to keep only character classes for allowed characters, and I created a helper method to strip unaccepted characters. The allowed pattern should be in regex format and is expected to be wrapped in square brackets. The function inserts a caret (^) after the opening square bracket, following http://stackoverflow.com/questions/4460290/replace-chars-if-not-match: [^ ] at the start of a character class negates it - it matches characters not in the class. I anticipate that it may not work for every regex describing a valid character set, but it works for the relatively simple sets that we are using.

/// <summary>
/// Replaces not expected characters.
/// </summary>
/// <param name="text">The text.</param>
/// <param name="allowedPattern">The allowed pattern in Regex format; expected to be wrapped in brackets</param>
/// <param name="replacement">The replacement.</param>
/// <returns></returns>
// http://stackoverflow.com/questions/4460290/replace-chars-if-not-match
// http://stackoverflow.com/questions/6154426/replace-remove-characters-that-do-not-match-the-regular-expression-net
// [^ ] at the start of a character class negates it - it matches characters not in the class.
// Replace/Remove characters that do not match the regular expression.
static public string ReplaceNotExpectedCharacters(this string text, string allowedPattern, string replacement)
{
    allowedPattern = allowedPattern.StripBrackets("[", "]");
    // [^ ] at the start of a character class negates it - it matches characters not in the class.
    var result = Regex.Replace(text, @"[^" + allowedPattern + "]", replacement);
    return result;
}

static public string RemoveNonAlphanumericCharacters(this string text)
{
    var result = text.ReplaceNotExpectedCharacters(NonAlphaNumericCharacters, "");
    return result;
}

public const string NonAlphaNumericCharacters = "[a-zA-Z0-9]";

There are a couple of functions from my StringHelper class, http://geekswithblogs.net/mnf/archive/2006/07/13/84942.aspx, that are used here.

/// <summary>
/// StripBrackets checks that the string starts with sStart and ends with sEnd (case sensitive).
/// If yes, it removes sStart and sEnd.
/// Otherwise it returns the full string unchanged.
/// See also MidBetween.
/// </summary>
/// <param name="str"></param>
/// <param name="sStart"></param>
/// <param name="sEnd"></param>
/// <returns></returns>
public static string StripBrackets(this string str, string sStart, string sEnd)
{
    if (CheckBrackets(str, sStart, sEnd))
    {
        str = str.Substring(sStart.Length, (str.Length - sStart.Length) - sEnd.Length);
    }
    return str;
}

public static bool CheckBrackets(string str, string sStart, string sEnd)
{
    bool flag1 = (str != null) && (str.StartsWith(sStart) && str.EndsWith(sEnd));
    return flag1;
}

public static string WrapBrackets(string str, string sStartBracket, string sEndBracket)
{
    StringBuilder builder1 = new StringBuilder(sStartBracket);
    builder1.Append(str);
    builder1.Append(sEndBracket);
    return builder1.ToString();
}
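
For readers outside .NET, the same negated-character-class trick works in most regex engines. A hedged TypeScript sketch of the two helpers (names mirror the C# versions above; like the original, it does not escape regex metacharacters inside the class, so it only suits simple character sets):

// Strip surrounding brackets if present, mirroring StripBrackets.
function stripBrackets(s: string, start: string, end: string): string {
    return s.startsWith(start) && s.endsWith(end)
        ? s.substring(start.length, s.length - end.length)
        : s;
}

// Keep only characters matching the allowed class; replace the rest.
function replaceNotExpectedCharacters(
    text: string,
    allowedPattern: string, // e.g. "[a-zA-Z0-9]"
    replacement: string
): string {
    const inner = stripBrackets(allowedPattern, "[", "]");
    // [^...] negates the class: match everything that is NOT allowed.
    return text.replace(new RegExp("[^" + inner + "]", "g"), replacement);
}

console.log(replaceNotExpectedCharacters("ab-c1!2", "[a-zA-Z0-9]", "")); // "abc12"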

    Read the article

  • JavaScript: Code Folding

    - by Petr
Today I would like to mention code folding in the new JavaScript editor support, which is available in the continual builds from our server. It's a basic feature, but it was asked about in a comment under a previous post. With the new support you can fold comments and every code block between { and }; the current support allows only methods to be folded. The difference is shown below: in the picture, the current folding is on the left side and the new one is on the right side.

The code folding can be switched off in the Editor Options (Tools main menu -> Options -> Editor category -> General tab). In this dialog you can also define which folds should be collapsed by default when you open a file. These options more closely fit the Java editor's needs, but you can see in the next picture how the options are mapped for JavaScript code. The Method option folds all functions in the code. Other code blocks are folded through the option Tags and Other Code Blocks. Documentation comments (starting with /**) are folded through Javadoc Comments, and when you check Initial Comment, all comments that start with /* are folded by default.

The new JavaScript editor also supports custom folds. To add your custom fold, type in two special comments as shown in this example:

// <editor-fold>
Your code goes here...
// </editor-fold>

You can define the default description of a collapsed fold by adding a "desc" attribute:

// <editor-fold desc="This is my super secret genius code.">
Your code goes here...
// </editor-fold>

You can set a fold to be collapsed by default by adding a "defaultstate" attribute:

// <editor-fold defaultstate="collapsed">
Your code goes here...
// </editor-fold>

There is a code template that helps with writing custom fold comments. The abbreviation for the template is fcom. As I wrote, the new JS support is available in the continual builds. Go here for more info.
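
The post shows desc and defaultstate separately; presumably the two attributes can also be combined on a single fold, as they can in the Java editor. A small sketch, not taken from the post, so verify it in your build:

// <editor-fold desc="Initialization helpers" defaultstate="collapsed">
function init() {
    // setup code...
}
// </editor-fold>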

    Read the article

  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
There is an old quote: “A picture is worth a thousand words.” I believe this quote immensely. Quite often I get phone calls saying that something is not working and asking if I can help. My reaction in most of the cases is that I need to know more - send me the exact error or a screenshot. Until and unless I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend, who was not sure what was going on. Here is what he said.

“When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?”

My immediate reaction was that he was facing a security and permission issue. However, before making that recommendation I suggested that he send me a screenshot of his own SSMS and his friend's SSMS. After carefully looking at both screenshots, I was very confident about the issue and we were able to resolve it. Let us reproduce the same scenario; maybe there is some learning in it for us.

Issue: User not able to see user created objects

First let us see the image of my friend's SSMS screen. (Recreated on my machine.)

Now let us see my friend's colleague's SSMS screen. (Recreated on my machine.)

You can see that my friend could not see the user tables, but his colleague certainly was able to. Now I believed it was a permissions issue. Further to this, I asked him to send me another image where I could see the various permissions of the user in the database.

My friend's screen

My friend's colleague's screen

This indeed proved that my friend did not have access to the AdventureWorks database, and because of that he was not able to access it. He did have public access, which means he had similar rights to guest access. However, their SQL Server had followed my earlier advice on having limited access for guests, which means he was not able to see any user created objects.

My next question was to validate what kind of access my friend's colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request admin access from his colleague. My friend promptly asked, and on the following screen the colleague added him as an admin.

You can do the same using the following T-SQL script as well.

USE [AdventureWorks2012]
GO
ALTER ROLE [db_owner] ADD MEMBER [testguest]
GO

Once my friend was admin, he was able to access all the user objects just as he expected. Please note, this complete exercise was done on a development server. One should not play around with security on a live or production server. Security is such an issue that it should be left only with the senior administrators of the server.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
Let me start today's blog post with a direct question. How many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I am talking about SQL Server, someone comes up with the question of why there is no step-by-step procedure included in SSMS for features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? I think the answer, most of the time, is either "no" or "we have not heard of the feature." One person asked me back - have you ever written about it on your blog? I had not yet written about it. Basically there is not much to write about it. It is a pretty straightforward feature, like any other feature, and it is indeed difficult to elaborate on. However, I will try to give a quick introduction to it.

Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code; in the script, the parameters appear in the form <parameter_name, data_type, value>. You can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates as well, with their own folder structure.

To open a template from Template Explorer, go to the View menu >> Template Explorer, or type CTRL+ALT+L. You will find a list of categories; click on any category and expand the folder structure. For our sample example, let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double click, or can be dragged to the editor area and modified as needed.

The sample template is now available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by typing CTRL+SHIFT+M or by going to the Query menu >> Specify Values for Template Parameters. This will show the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This will get your script ready to go. Check it one more time and change the script to fit your requirements.

I personally use Template Explorer for two things. The first one is obviously for templates, but the hidden and important one is for learning new features and T-SQL commands. There is so much to learn and so little time.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >