Search Results

Search found 1781 results on 72 pages for 'steve rivera'.

Page 13/72 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Physical address and contact details in Meta tags

    - by Steve
    I remember someone once saying it was useful (for SEO purposes) to include your address and contact details in the site's meta tags. I don't recall if he meant in the meta description or in another type of meta tag. Have you heard of this strategy before? I believe the person implied that Google would tie all of your Local Listings together for a better SEO score if you listed matching contact details in your meta tags.

    Read the article

  • Application qos involving priority and bandwidth

    - by Steve Peng
    Our manager wants us to do application QoS, which is quite different from the well-known system QoS. We have many services of three types, each with a priority. The manager wants to suspend low-priority service requests when there is not enough bandwidth for the high-priority services; but if the high-priority service requests decrease, the bandwidth for low-priority services should increase and low-priority service requests should be allowed again. There should be an algorithm involving priority and bandwidth. I don't know how to design the algorithm; is there any example on the internet? Can somebody give a suggestion? Thanks. UPDATE: All these services are within the same process. We are setting the maximum bandwidth for the three types of services, via the services' ports, using TC (the Linux QoS tool whose name stands for traffic control).
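
    One possible starting point, sketched below in JavaScript purely as illustration (the class names, total budget, and demand figures are made up, not taken from the question): serve classes strictly in priority order from a fixed budget, and suspend any class that ends up with nothing.

        // Hypothetical sketch: re-divide a fixed bandwidth budget by priority.
        // Higher-priority classes are served first; whatever is left over is
        // offered to lower-priority classes, which are suspended when nothing remains.
        const TOTAL_BANDWIDTH = 100; // illustrative budget, e.g. Mbit/s

        // Each class reports the bandwidth it currently wants (demand).
        const serviceClasses = [
          { name: "high",   priority: 0, demand: 60 },
          { name: "medium", priority: 1, demand: 30 },
          { name: "low",    priority: 2, demand: 40 },
        ];

        function allocate(classes, total) {
          let remaining = total;
          // Serve classes strictly in priority order (0 = highest).
          return classes
            .slice()
            .sort((a, b) => a.priority - b.priority)
            .map((svc) => {
              const granted = Math.min(svc.demand, remaining);
              remaining -= granted;
              // A class granted no bandwidth stops accepting new requests.
              return { name: svc.name, granted, suspended: granted === 0 };
            });
        }

        console.log(allocate(serviceClasses, TOTAL_BANDWIDTH));

    Re-running the allocation whenever measured demand changes (or on a timer) gives the suspend/resume behaviour described above, and the granted figures could then be fed to TC as per-port rate limits.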

    Read the article

  • Setting jQuery after ASP.net AJAX partial post back

    - by Steve Clements
    OK, so for some reason you have a mega mashup solution with ASP.NET AJAX, jQuery and Web Forms.  Perhaps you are just on the migration from the AjaxControlToolkit to the jQuery UI framework – who knows!! Anyway, the problem is that when you post back with something like an UpdatePanel, you will find that your nicely set up jQuery stuff, like the datepicker for example, will no longer work. You may have something like this…

        $(document).ready(function () {
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
        });

    When your ASP.NET UpdatePanel posts back, you will find that your datepicker has gone.  Bugger! Well, you need to add this little gem to set it back up again once the UpdatePanel comes back to the page:

        var prm = Sys.WebForms.PageRequestManager.getInstance();
        prm.add_endRequest(function () {
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
        });

    Or, like me, you could have a JavaScript function, something like InitPage(), do all your work in there and call that on document.ready and endRequest. Your choice… you have the power.
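
    A minimal sketch of that InitPage() pattern, using only the selector and options from the snippets above (everything else is illustrative):

        // Do all client-side setup in one place...
        function InitPage() {
            $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
            // ...any other widget wiring goes here too.
        }

        // ...and call it both on the initial load and after every partial postback.
        $(document).ready(InitPage);
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(InitPage);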

    Read the article

  • Conditionally Auto-Executing af:query Search Form Based on User Input

    - by steve.muench
    Due to an extreme lack of time caused by other work priorities -- working hard on some interesting new ADF features for a future major release -- 2010 has not been a banner year for my production of samples to post to my blog, but to show my heart is in the right place I wanted to close out the year by posting example #160: 160. Conditionally Auto-Executing af:query Search Form Based on User Input. Enjoy. Happy New Year.

    Read the article

  • Visual Studio 2010 Pro Power Tools Screencast

    - by Steve Michelotti
    Microsoft just released the Visual Studio 2010 Pro Power Tools extension and it is awesome. A summary of all the features can be found here and it is available in the Visual Studio Gallery here. There are a bunch of great features but, in my opinion, the best one is the replacement for the Add Reference dialog. This gives sub-string search capabilities as well as the ability to add multiple references without having to continually re-open the dialog. For this feature alone, you should install the Pro Power Tools right now. There are a few blog posts that do a good job of describing all the features, but what I wanted to do here was post a quick screencast (7 minutes) that shows the features I think are really cool. I show most (but not all) of the features, focusing on the ones I think are the best. The features I cover are:
    - Installation with the Extension Manager
    - Add Reference Dialog replacement
    - Tab Well, including pinned tabs, pinned tabs in second row, fixed close button, colorized tabs, dirty indicator
    - Highlight current line
    - Triple Click for full-line selection
    - Ctrl + Click for Go To Definition
    - Colorized Parameter Help
    Enjoy! (Right-click and Zoom to view in full screen)

    Read the article

  • Why isn't Startup Disk Creator working in 12.04?

    - by Steve Kelem
    I'm trying to create a bootable USB stick (7.5G) for Ubuntu 12.04 (x86_64) from another Ubuntu 12.04 x86_64 PC. I downloaded the Ubuntu 12.04 LTS "Precise Pangolin" - Release amd64 (20120425). When I ran Make Startup Disk, I selected the downloaded release. The drive shows up with a capacity of 7.5GB and a blank space under "Free Space". I have tried using the "Erase Disk" button, which seems to erase the disk. The problem is that the options below the "Disk to use" section are grayed out. The "Make Startup Disk" button is colored dull orange, while the source disc image and device to use are bright orange. The "Make Startup Disk" button doesn't do anything when I click it. The only working buttons are "Other...", "Erase Disk", and "Close". Using the "Other..." button I can select the ISO, but it doesn't load and the "Source Disk Image" field remains empty.

    Read the article

  • Mobile Apps: An Ongoing Revolution

    - by Steve Walker
    A guest post from Suhas Uliyar, VP Mobile Strategy, Product Management, Oracle

    The rise of smartphone apps has proved transformational for businesses, increasing the productivity of employees while simultaneously creating some seriously cool end user experiences. But this is a revolution that is only just beginning. Over the next few years, apps will change everything about the way enterprises work as well as overhauling the experiences of customers.

    The spark for this revolution is simplicity. Simplicity has already proved important for the front-end of apps, which are now often as compelling and intuitive as consumer apps. Businesses will encourage this trend, both to further increase employee productivity and to attract ‘digital natives’ (as employees and customers). With the variety of front-end development tools available already, this should be a simple mission for developers to accomplish – but front-end simplicity alone is not enough for the enterprise mobile revolution. Without the right content even the most user-friendly app is useless. Yet when it comes to integrating apps with ‘back-end’ systems to enable this content, developers often face a complex, costly and time-consuming task. Then there is security: how can developers strike a balance between complying with enterprise security policies and keeping the user experience simple?

    Complexity has acted as a brake on innovation, with integration and security compliance swallowing enterprise resources. This is why the simplification of integration, security and scalability is so important: it frees time and money for revolutionary innovation. The key is to put in place a complete and unified SOA integration platform that runs across the entire enterprise and enables organizations to easily integrate and connect applications across IT environments. The platform must also be capable of abstracting apps from the underlying OS and enabling a ‘write-once, run-anywhere’ capability for mobile devices - essential for BYOD environments and integrating third-party apps. Mobile Back-end-as-a-Service can also be very important in streamlining back-end integration. Mobile services offered through the cloud can simplify mobile application development with a standard approach to dealing with complex server-side programming and integration issues. This allows the business to innovate at its own pace while providing developers with a choice of tools to speed development and integration.

    Finally, there is security, which must be done in a way that encourages users to make the most of their mobile devices and applications. As mobile users, we want convenience and that is why we generally approve of businesses that adopt BYOD policies. Enterprises can safely encourage BYOD as they can separate, protect, and wipe corporate applications by installing a secure ‘container’ around corporate applications on any mobile device. BYOD management also means users’ personal applications and data can be kept separate from the enterprise information – giving them the confidence they need to embrace the use of their devices for corporate apps.

    Enterprises that place mobility at the heart of what they do will fundamentally transform their businesses and leap ahead of the competition. As businesses take to mobile platforms that simplify integration, security and scalability, we will see a blossoming of innovation that will drive new levels of user convenience and create new ways of working that we are only beginning to imagine.

    Read the article

  • AdWords Keyword Planner CPC completely different to real CPC?

    - by steve
    I'm new to AdWords and trying to figure out the best keywords to use. I went to the AdWords Keyword Planner and typed in an example keyword. It gives me an average CPC of $0.94. But when I go to set up a real campaign and type in the same keyword, I get an error saying 'below first page bid estimate', which is $8.75. What gives? Is there a better way to get more accurate feedback on how much this will cost?

    Read the article

  • [OT] : Tagged : SQL Dream Cars

    - by AaronBertrand
    Steve Jones ( blog | twitter ) posted an entry today called "SQL Dream Cars," where he talks briefly about 5 of the cars he would love to own. He then tagged a few of us to share our lists. Before I get to mine, I wanted to reflect a bit on one of Steve's choices, the Ferrari 308 GTS. I remember when I was a kid, maybe 10 or 11 years old - after Magnum PI made that Ferrari 308 so popular - that a local doctor in North Bay had one. His name was Dr. Fazzarri (and I apologize if I've spelled that wrong);...(read more)

    Read the article

  • My error with upgrading 4.0 to 4.2- What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had its own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2.

    Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

    So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up to the same issue, where it came up on 4.0 instead of 4.2. See the picture below....

    HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code.

    Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.

    So....
    If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time, and it is now great and stable. Lesson learned.

    By the way, our buddy Ryan Matthews wanted to point out the best-practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade.
    1) Upload software to both controllers and wait for it to unpack
    2) On controller "A" navigate to configuration/cluster and click "takeover"
    3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
    4) Wait for controller "B" to apply the update and finish rebooting
    5) Login to controller "B", navigate to configuration/cluster and click "takeover"
    6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
    7) Wait for controller "A" to apply the update and finish rebooting
    8) Login to controller "B", navigate to configuration/cluster and click "failback"

    Read the article

  • Local Business Listing Dashboard

    - by Steve
    I operate a website for a West Australian company, and the company has listings in local business directory websites. Currently we are listed in:
    - www.HotFrog.com.au
    - www.Google.com/Places
    - www.TrueLocal.com.au
    - www.StartLocal.com.au
    - www.localstore.com.au
    - www.communityguide.com.au
    - www.yelp.com.au
    - www.aussieweb.com.au
    Do you know of any method for examining account stats (profile views, profile clicks etc) within a dashboard, so I can see at a glance how each of our listings is going? I'd be happy to build a dashboard if necessary, but I'm not confident I currently have the skills to accurately display only the correct concise information. Would I use iframes? See if they have APIs? Is there a dashboard framework I could use?

    Read the article

  • Handy Generic JQuery Functions

    - by Steve Wilkes
    I was a bit of a late-comer to the JQuery party, but now I've been using it for a while it's given me a host of options for adding extra flair to the client side of my applications. Here are a few generic JQuery functions I've written which can be used to add some neat little features to a page. Just call any of them from a document ready function.

    Apply JQuery Themeroller Styles to all Page Buttons

    The JQuery Themeroller is a great tool for creating a theme for a site based on colours and styles for particular page elements. The JQuery.UI library then provides a set of functions which allow you to apply styles to page elements. This function applies a JQuery Themeroller style to all the buttons on a page - as well as any elements which have a button class applied to them - and then makes the mouse pointer turn into a cursor when you mouse over them:

        function addCursorPointerToButtons() {
            $("button, input[type='submit'], input[type='button'], .button")
                .button().css("cursor", "pointer");
        }

    Automatically Remove the Default Value from a Select Box

    Required drop-down select boxes often have a default option which reads 'Please select...' (or something like that), but once someone has selected a value, there's no need to retain that. This function removes the default option from any select boxes on the page which have a data-val-remove-default attribute once one of the non-default options has been chosen:

        function removeDefaultSelectOptionOnSelect() {
            $("select[data-val-remove-default='']").change(function () {
                var sel = $(this);
                if (sel.val() != "") { sel.children("option[value='']:first").remove(); }
            });
        }

    Automatically add a Required Label and Stars to a Form

    It's pretty standard to have a little * next to required form field elements. This function adds the text * Required to the top of the first form on the page, and adds *s to any element within the form with the class editor-label and a data-val-required attribute:

        function addRequiredFieldLabels() {
            var elements = $(".editor-label[data-val-required='']");
            if (!elements.length) { return; }
            var requiredString = "<div class='editor-required-key'>* Required</div>";
            var prependString = "<span class='editor-required-label'> * </span>";
            var firstFormOnThePage = $("form:first");
            if (!firstFormOnThePage.children('div.editor-required-key').length) {
                firstFormOnThePage.prepend(requiredString);
            }
            elements.each(function (index, value) {
                var formElement = $(this);
                if (!formElement.children('span.editor-required-label').length) {
                    formElement.prepend(prependString);
                }
            });
        }

    I hope those come in handy :)
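
    As the post says, these just need to be called from a document ready handler. A minimal sketch of that wiring, assuming the three functions above are already loaded on the page:

        $(document).ready(function () {
            // Call each helper once the DOM is ready.
            addCursorPointerToButtons();
            removeDefaultSelectOptionOnSelect();
            addRequiredFieldLabels();
        });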

    Read the article

  • Recent update killed unity 3d launcher

    - by Steve
    I am scratching my head on this one, a lot of things are still new to me. I updated 126 packages just now through the update manager, and upon reboot everything works fine except the unity launcher. It's just a dark space. The dash still works, as does the top panel and docky. When I try: unity --replace I end up with this and then an indefinite hang:

        (compiz:3689): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        WARN 2012-09-23 02:18:29 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubiquity-gtkui.desktop'
        WARN 2012-09-23 02:18:30 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubuntuone-installer.desktop'
        ERROR 2012-09-23 02:18:30 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-writer.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-calc.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-impress.desktop' is using a deprecated format for its actions that will be dropped soon.
        Setting Update "main_menu_key"
        Setting Update "run_key"

    Unfortunately I cannot make heads or tails of this. Anyone, please help?

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the river's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used.

    This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            // Initialize Map_Data Array of Tile Objects
            map_data = new Tile[size_x, size_y];

            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data [i, j] = new Tile ();
                    map_data [i, j].coordinate.x = (int)i;
                    map_data [i, j].coordinate.y = (int)j;
                    map_data [i, j].vertices [0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [1] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [3] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices [0].y = 0f;
                t.vertices [1].y = 0f;
                t.vertices [2].y = 0f;
                t.vertices [3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);

            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals [z * vsize_x + x] = Vector3.up;
                    uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3];
                    } else if (z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2];
                    } else if (x == vsize_x - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1];
                    } else {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0];
                        vertices [z * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [1];
                        vertices [(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2];
                        vertices [(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles [triOffset + 0] = z * vsize_x + x + 0;
                    triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 3] = z * vsize_x + x + 0;
                    triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;

            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • 'Invalid or corrupt kernel image' error while booting from USB stick

    - by steve
    I got the Ubuntu 11.10 32-bit edition and I used Unetbootin to write the ISO to a USB stick. After that I tried to boot from the stick, having changed the BIOS boot settings, and when the first interface shows up with the installation choices, whatever choice I select I get the message "invalid or corrupt kernel image". I use a netbook (an ASUS 1000H) with Windows 7 on it. I tried different USB sticks, but the same thing happened. I also tried Universal USB Installer instead of Unetbootin, with the same result. Any idea what is happening? The USB stick doesn't work in another PC I tried either, and I also tried a second USB stick in both computers with the same result. I downloaded the ISO image once again and followed the same procedure, but the same thing happened again.

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open – anyone can do anything. As soon as you add one policy for a permission, then all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.

    There are three use cases:

        API                          | Use database credentials | User for permission checking
        getConnection()              | True or false            | Current WLS user
        getConnection(user,password) | False                    | User/password from API
        getConnection(user,password) | True                     | Current WLS user

    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured:
    - Database users scott; jdbcqa; jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic Data Source objects are configured:
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    Use DB Credentials = true, Identity based = true:
    - getConnection(scott,***): Identity scott, Client weblogic, Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    Use DB Credentials = false, Identity based = true:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott, Client scott, Proxy null

    Use DB Credentials = true, Identity based = false:
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    Use DB Credentials = false, Identity based = false:
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa, Client scott, Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this was not an Oracle thin driver, the one case with the non-null Proxy in the results above would throw an exception because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.

    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following.
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and Database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • What Is The Relationship Between Software Architect and Team Member

    - by Steve Peng
    I work for a small company which has fewer than 100 people. Several months ago, this company offered me the position of SA and I accepted. There are three teams in this company, and I work for one of them. This is the first time I have worked as a SA. During the past months, I have found that I don't have any management authority; I can't even get team members to do coding-related things in the way that is correct and more efficient. The team members argue with me over very basic technical questions and I have to explain things to them again and again. Though some members did take my advice, other members stubbornly program in their own way, which frequently proved wrong in the end. Recently I feel a little tired and confused. I wonder, what is the correct relationship between a Software Architect and team members, including the team leader? Besides, is the software architect also led by the Team Leader?

    Read the article

  • How to keep a generic process unique?

    - by Steve Van Opstal
    I'm currently working on a project that makes connections between different banks, which send us information to which that project replies. A part of that project configures the different protocols that are used (not every bank uses the same protocol); this runs on a separate server. These processes all have unique IDs which are stored in a database. But to save time and money on configuration and new processes, we want to make a generic protocol that banks can use. Because of PCI requirements we have to make a separate process for every bank we connect to. But the generic process has only 1 unique identifier and therefore we cannot keep them apart. Giving every copy of that process a different identifier is, as I see it, impossible because they run entirely separately. So how do I keep my generic process unique?

    Read the article

  • OS8- AK8- The bad news...

    - by Steve Tunstall
    OK, I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really, just things you need to be aware of. First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8.

    AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck. All 7x20 series, all of them regardless of age, are supported with AK8.

    Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that, it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4u, 24 drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives. The SAS HBA in the 7x20 series was called a "Thebe" card, with a part # of 7105394. The 7420, for example, came standard with two of these "Thebe" cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only. It did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server. These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320, no problem, as-is. The much older Riverwalk 1 trays or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series was not supported.

    Here's where it gets tricky. Since last January, we have been selling the new style disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it. Different manufacturer and different firmware. They are not new. Like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them.

    So what's the problem? The problem is going to be for people who have to mix drive trays. Remember, your older 7x20 series has Thebe SAS2 HBAs. These have 2 SAS ports per card. The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs. These Thebe2 cards have 4 ports per card. This is very cool, as we can now do more SAS channels with fewer cards. Instead of needing 4 SAS cards to grow to 24 trays like we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like Infiniband and 10G. So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards. Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2.
    You can then plug your older DS2 trays right back in, and also now get newer DE2 trays going forward. However, it's important that the trays are on different SAS channels. You can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout.

    By the way, the new ZS3-2 and ZS3-4 systems also include a new IO card called the "Erie" card. These are for INTERNAL SAS to the OS drives and the Readzillas. So those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use fewer SAS HBAs to grow the system, right?

    That's it. Not too much bad news, and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress.

    Next up.... More good news and cool AK8 tricks. Such as virtual NICs.

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not. When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but a real alternative to hosting applications on-premise. For years we have had the power to host our web sites on remote systems. Sure, challenges existed. Mostly web sites. I could, with a few clicks, create an account at any of a myriad of web host sites, put my site in the hands of a remote hosting company, and boom, I was a site on the internet. But choices, power, and management were limited.

    Now we have a set of services that give us the power and control we love, but with the scalability of the data center. My personal web site is hosted on a laptop running Hyper-V in my basement. I have to manage the machine, patch it, and make sure it is powered up. This is fine for the "hello, this is my dog skippy" site that I maintain. If the football pool I run has an issue, one of the 10 users I have calls or emails me and I go check it out. All is well. But this falls well below the needs of even the simplest of enterprises. A business needs a stronger datacenter and a better pipe to the world. Do I really want to base my business on a dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a db of my own, updating my site from Visual Studio).

    Come learn how this technology can transform your business. If you are a Microsoft shop, or are interested in Microsoft in the cloud, a 2-day free Azure training class is being conducted in Kansas City on April 8 and 9. http://www.azurebootcamp.com/city/kansascity Hope to see you there. If you come, make sure you look me up.

    Read the article
