Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site- and location-specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfields, service centers, inventory sites, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project sites, areas, wells, basins, pipelines, critical infrastructure, offshore platforms, compressor stations, gas stations, etc. Any site can be classified into multiple hierarchies, such as an organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated with multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities that are structured or non-structured collections of nodes, for example routes, pipelines and more.

    The User Defined Attribute Framework provides the infrastructure needed to add single-row attribute groups, such as well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups, such as well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated with a site. This could be stored directly or in an external content management solution, such as Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.
    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed, with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, such as cadastral, ownership, jurisdictional, geological, and seismic data, and any site-centric, area-specific information, such as economic, political, risk, weather, logistics, and traffic information.

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, was recently acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer.

    Well, the first thing to understand is that virtual networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or intranet. It is also, for a large part, about connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the operating system level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high-density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Here is a typical view of a rack back-end when no virtual networking is used. Consider that today:

    - 75% of users have ten or more Ethernet ports per server
    - 85% of users have two or more SAN ports per server
    - 58% have had to add connectivity to a server specifically for VMs
    - 65% consider cable reduction a priority

    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software-based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, the picture on the left shows what one wants to get for the back-end of customers' racks: just a couple of connections on each physical server to provide a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high performance and agile solution for the data center.

    Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNICs and vHBAs) that will be seen by the servers as standard NICs and HBAs, and associate them with networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of networking infrastructure.
    So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and the future will tell us which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • Who owns the IP rights of the software without written employment contract? Employer or employee? [closed]

    - by P T
    I am a software engineer who got an idea and developed, alone, an integrated ERP software solution over the past 2 years. I got the idea and coded much of the software in my personal time, utilizing my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and the company would have the "shop rights" to use "a copy" of the software without restrictions. Part of this agreement was that I was heavily underpaid to keep the rights.

    Recently things started to take a downturn in company A as the company grew fairly large, new head management was formed, and new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP ownership into 50% co-ownership, completely disregarding the initial verbal agreements. As of now there is not a single written job description, agreement, contract, or policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods.

    My core question is: "Who owns the code without a contract? Me or company A? (in FL, US)"

    Detailed questions: I am familiar with "shop rights"; I don't have any problem leaving a copy of the code with the company for them to use/enhance to run their wholesale business. What worries me is: Can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B? Can applying for a copyright on the code at http://www.copyright.gov in my name prevent any legal disputes in the future? Can I use it as evidence for legal defense? Could adding a note specifying company A as exclusive license holder clarify the arrangements? If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created/stored on my personal computer with proper documentation, including a copyright notice with my credentials (name/email/address/phone). It's also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student. If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards the ownership, what can I do to settle the matter legally? I would like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment. Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?

    My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But looking online it may be completely backwards, and this really worries me. I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinion from those who were in a similar situation or are familiar with the topic. Thank you, PT

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) I am co-authoring, I am digging deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology. A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users against all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three database calls to run sequentially. Then, imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:

    Parallel Processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    Deleting/Updating Data: That's not too bad either, as long as you have a breadcrumb. However, it becomes more difficult if you need to update a single record and you don't know in which database it is.
    Inserting Data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    Shard Databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    Extension Methods: In order to make it look like the shard commands are part of the SqlClient class, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
    Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using a ConcurrentQueue to store my exceptions.
    Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values.
    DataTable: The shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on twitter to know when it will be available (@hroggero). Or check www.bluesyntax.net for updates on the shard. Thanks!
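    The author's C# implementation is not shown here, so the following is only a minimal sketch, in TypeScript, of the fan-out, breadcrumb and round-robin ideas described above; the ShardDb interface and its query/execute functions are hypothetical stand-ins for whatever data access layer is actually used.

        // Hypothetical shard sketch: fan-out queries, breadcrumb column, round-robin inserts.
        type Row = Record<string, unknown>;

        interface ShardDb {
          id: string;                                    // e.g. a GUID derived from the connection string
          query(sql: string): Promise<Row[]>;            // assumed async query function
          execute(sql: string, row: Row): Promise<void>; // assumed async non-query function
        }

        class Shard {
          private nextInsert = 0;
          constructor(private readonly databases: ShardDb[]) {}

          // Query every database in parallel, tag each row with a breadcrumb
          // identifying its source database, then merge the results.
          async executeShardQuery(sql: string): Promise<Row[]> {
            const perDb = await Promise.all(
              this.databases.map(async (db) => {
                const rows = await db.query(sql);
                return rows.map((r) => ({ ...r, __shardId: db.id })); // breadcrumb column
              })
            );
            return perDb.flat();
          }

          // Round-robin inserts: each new insert goes to the next database in the shard.
          async insertRow(sql: string, row: Row): Promise<void> {
            const db = this.databases[this.nextInsert];
            this.nextInsert = (this.nextInsert + 1) % this.databases.length;
            await db.execute(sql, row);
          }
        }

    With a breadcrumb column like __shardId, a later update or delete can be routed straight to the database the row came from instead of being broadcast to the whole shard.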

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all… such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma. Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question

    SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up:

    I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600x10800.jpg) The image is 21,600 x 10,800 pixels in size. When I right click and select "Copy Image" in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1.

    Why would a simple image freeze Joban's computer up after copying it to the clipboard?

    The Answer

    SuperUser contributor Mokubai has the answer for us:

    "Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB.

    The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out.

    In short: Do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

    Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees.

    Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users?
    Check out the full discussion thread here.
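    For the curious, the back-of-the-envelope numbers in Mokubai's answer are easy to reproduce (a quick sketch, assuming an uncompressed 24-bit RGB bitmap with no row padding):

        // Raw size of a 21,600 x 10,800 pixel image at 3 bytes per pixel
        const width = 21600;
        const height = 10800;
        const bytesPerPixel = 3;                          // 24-bit RGB
        const rawBytes = width * height * bytesPerPixel;  // 699,840,000 bytes
        const rawMB = rawBytes / (1024 * 1024);           // ≈ 667 MB, i.e. "approximately 700 MB"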

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11, however I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();
            //get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;
            //Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);
            //Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);
            //calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();
            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4 * 3.14;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform 3D ray from view space to 3D ray in world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // Inverse of view space matrix is world space matrix
            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). I hadn't used any far or near plane before, so I just made some arbitrary numbers up for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++.

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site.  Last Saturday night was one of those days.  There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring.  It takes only a few seconds to fail over, but some databases have a bit more involved work, such as setting up replication.  Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.

    After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases.  We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix.  Microsoft did fix it in both SQL Server 2005 Service Pack 3 Cumulative Update 13 and Service Pack 4 Cumulative Update 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix.  As the suspended state was causing us issues with replication, we dropped mirroring.  We then noticed we had 10 MB of free disk space on the mount point where the principal’s data files are stored.  I knew something had gone amiss, as this system should have at least 150 GB free on that mount point.  I immediately checked the main database’s data file and was shocked to see an autogrowth size of 65536%.  The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn’t have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today.  A quick Google search yielded no results, but emphasis on “quick”.  I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512 MB.  So this autogrowth bug was encountered sometime in the last two weeks.  On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft).  I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights.  It seems several people have either heard of this bug or encountered it.  Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007.  Back on SQL Server 2000, we were using the default file growth setting, which was a percentage.  Sometime after the 2005 upgrade is when we changed it to 512 MB.  Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004).  My affected system is on SP3 CU8, so I was initially confused about why we had encountered the bug.  Because I don’t read things fully, I had missed that there are additional steps you have to follow after applying the bug fix.  Amit set me straight.  Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won’t have read this far down my blog post):

    This hotfix will prevent only future occurrences of this problem. For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:

    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
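    A growth increment expressed as a percentage is relative to the current file size, so a 65536% setting asks SQL Server to grow the file by roughly 655 times its current size in a single growth event. A quick hypothetical illustration (the post does not state the actual size of the affected data file):

        // Hypothetical numbers, for illustration only
        const currentFileGB = 100;                                        // assumed current data file size
        const growthPercent = 65536;
        const requestedGrowthGB = currentFileGB * (growthPercent / 100);  // 65,536 GB requested in one grow
        // With only ~150 GB free on the mount point, any such growth event fills the disk.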

    Read the article

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-to-large Windows Azure project it's likely it will involve more than one role, for example separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the meantime I've put together a cunning workaround (OK, a dirty hack) that creates a fake role containing your shared configuration settings, and copies it to all roles as part of the build process. Here's how you set it up:

    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).

    2. Edit your csdef and cscfg files to include the placeholder project.

    ServiceDefinition.csdef:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" />
            </ConfigurationSettings>
          </WorkerRole>
          <WorkerRole name="MyWorker">
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WorkerRole>
          <WebRole name="MyWeb">
            <Sites>
              <Site name="Web">
                <Bindings>
                  <Binding name="WebEndpoint" endpointName="WebEndpoint" />
                </Bindings>
              </Site>
            </Sites>
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WebRole>
        </ServiceDefinition>

    ServiceConfiguration.cscfg:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" value="Hello World" />
            </ConfigurationSettings>
            <Instances count="1" />
          </Role>
          <Role name="MyWorker">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
          <Role name="MyWeb">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    It is important that all your roles contain a ConfigurationSettings entry in both cscfg and csdef files, even if it's empty; otherwise the shared configuration settings will not be inserted.

    3. Open your Azure deployment (.ccproj) project in Notepad, and add the highlighted line below:

        ...
        <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
        <Import Project="globalsettings/globalsettings.targets" />
        </Project>

    It is important that you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes.

    At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management, and progressively as part of other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it is on a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store. They then configured the J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not auto-generate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time, and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to auto-generate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework.

    The Login Id is case sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the Writer: Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of the iPad application IdentityCert for Oracle Identity Analytics. This post explores the business use cases and application of IdentityCert.

    Hub City Media (HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:

    1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification.
    2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization.
    3. Then the certifiers, typically managers and supervisors, ask, “Is there any easier way to do these certifications offline?”

    The current version of OIA has a way to export certification data to a spreadsheet.  For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports.  Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution, but it wasn’t reliable.

    Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise.  As our customers were asking for offline certification, we noticed that the same population of users traditionally responsible for access certification were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That’s why we created IdentityCert™.

    IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads. The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases.

    The main function of IdentityCert is user certification, which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification and use simple gestures to claim users and certify their access.  Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network.

    Oracle Identity Analytics can generate policy violation notifications based on detective scans of the identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations and resolve or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces a more human-friendly description of the policy violation, which improves the ability of users to resolve the violation.

    IdentityCert can be deployed quickly into a customer's environment. It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.

    Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform.

    You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.

    Resources:
    Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity Analytics
    App Store Download: http://bit.ly/IdentityCert
    Oracle Identity Governance Suite

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij

    Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies for both the business development and marketing departments.

    Companies looking for new ERP systems spend so much time comparing the features and functions of software products, but too often shortchange the value of their own processes.  Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster.  When asked how, the answer is often “by implementing best practices”.  But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than a starting point or benchmark to build upon by adding your own value.

    Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies.  But not all companies are alike.  Do you ever tell your customers “We are just like our competition and have no competitive differentiation”?  Probably not.  So why should the implementation of your business processes be just like your competitor’s?  Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business.  These unique aspects—the competitive differentiators that companies use to thrive in a crowded marketplace—can and should be supported by the implementation of business systems like ERP.

    Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations.  But how can a company take advantage of this footprint without blowing up their implementation budget?  Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’.  Too often what they are really saying is that you will have to abandon your key operational differentiators to fit the vendor’s template for your business—or extend your implementation and postpone the realization of any benefits.

    Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators.  Oracle Accelerate Solutions speed the time it takes to implement a JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986, and we now leverage Oracle Business Accelerators—cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications—without customization—as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings.
    And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well?

    Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and to maximizing the value of those customers’ IT investments. They are a Gold level member in Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my Ruby and Rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the Rails built-in testing support but I will get there.

    Interesting Things I Learned Tonight

    RubyMine IDE
    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens.

    Haml
    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml.

    Multiple CSS Classes
    To define a div with more than one css class haml lets you chain them together with a ‘.’, such as:

        .span-6.search_result
          contents of the div go here

    Indent Consistency
    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.

    Sorting Arrays
    I guessed that Ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:

        Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

        Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id
    Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method:

        Application.find('4c19e8facfbfb01794000002')

    Rails And CoffeeScript
    I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1: Install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now.

    Quicksearch
    The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jQuery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

        $('#app_search').quicksearch('.search_result');

    Summary
    I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • Webcast Q&A: Demystifying External Authorization

    - by B Shashikumar
    Thanks to everyone who joined us on our webcast with the SANS Institute on "Demystifying External Authorization". Also a special thanks to Tanya Baccam from SANS for sharing her experiences reviewing Oracle Entitlements Server. If you missed the webcast, you can catch a replay of it here.  Here is a compilation of the slides that were used in today's webcast: SANS Institute Product Review: Oracle Entitlements Server. We have captured the Q&A from the webcast for those who couldn't attend.

    Q: Is Oracle ADF integrated with Oracle Entitlements Server (OES)?
    A: In Oracle Fusion Middleware 11g and later, Oracle ADF, Oracle WebCenter, Oracle SOA Suite and other middleware products are all built on Oracle Platform Security Services (OPSS). OPSS provides many security functions like authentication, audit, credential stores, token validation, etc. OES is the authorization solution underlying OPSS. And OES 11g unifies different authorization mechanisms including Java2/ABAC/RBAC.

    Q: Which portal frameworks support the use of OES policies for portal entitlement decisions?
    A: Many portals, including Oracle WebCenter 11g, run natively on top of OES. The authorization engine in WebCenter is OES. Besides that, OES offers out-of-the-box integration with Microsoft SharePoint, so SharePoint sites, sub-sites, web parts, navigation items and document access control can all be secured with OES. Several other portals have also been secured with OES, e.g. IBM WebSphere Portal.

    Q: How do we enforce Separation of Duties (SoD) rules using OES (and how does that integrate with a product like OIA)?
    A: A product like OIM or OIA can be used to set up and govern SoD policies. OES enforces these policies at run time. Role mapping policies in OES can assign roles dynamically to users under certain conditions. This makes it simple to enforce SoD policies inside an application at runtime.

    Q: Our web application has objects like buttons, text fields, drop-down lists, etc. Is there any "autodiscovery" capability that allows me to use/see those web page objects so I can start building policies over those objects? How does it work?
    A: There are a few different options with OES. When you build an app and make authorization calls with the app in the test environment, you can put OES in discovery mode and have OES register those authorization calls and decisions. Instead of doing this after the fact, an application like Oracle iFlex has built-in UI controls where, when the app is running, a script can intercept authorization calls and migrate those over to OES. And in Oracle ADF, a lot of resources are protected, so pages, task flows and other resources can be registered without OES knowing about them.

    Q: Does the current Oracle Fusion application use OES? The documentation does not seem to indicate it.
    A: The current version of Fusion Apps is using a preview version of OES. Soon it will be replaced with OES 11g.

    Q: Can OES secure mobile apps?
    A: Absolutely. Nowadays users are bringing their own devices, such as a smartphone or tablet, to work. With the Oracle IDM platform, we can tie identity context into the access management stack. With OES we can make use of context to enforce authorization for users accessing apps from mobile devices. For example, we can take into account different elements like authentication scheme, location, device type, etc. and tie all that information into an authorization decision.

    Q: Does Oracle Entitlements Server (OES) have an ESAPI implementation?
    A: OES is an authorization solution. ESAPI/OWASP is something we include in our platform security solution for all Oracle products, not specifically in OES.

    Q: ESAPI has an authorization API. Can I use that API to access OES?
    A: If the API supports an interface/SSPI model that can be configured to invoke an external authorization system through some mechanism, then yes.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of Key-Value Pair Databases and Document Databases in the Big Data story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data story. Now we will see a few examples of the operational databases.

    Relational Databases (The day before yesterday’s post)
    NoSQL Databases (The day before yesterday’s post)
    Key-Value Pair Databases (Yesterday’s post)
    Document Databases (Yesterday’s post)
    Columnar Databases (Today’s post)
    Graph Databases (Today’s post)
    Spatial Databases (Today’s post)

    Columnar Databases

    A relational database is a row store database, or a row-oriented database. Columnar databases are column-oriented, or column store, databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store those different kinds of data in the database. With a columnar database this is very easy to do, as we can just add a new column to the columnar database. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application. It is particularly good for databases where a high volume of incremental data is gathered and processed.

    Graph Databases

    For highly interconnected data it is suitable to use a graph database. This database has a node-relationship structure. Nodes and relationships contain key-value pairs where data is stored. The major advantage of this database is that it supports faster navigation among various relationships. For example, Facebook uses a graph database to list and demonstrate various relationships between users. Neo4J is one of the most popular open source graph databases. One major disadvantage of graph databases is that it is not possible to self-reference (self joins, in RDBMS terms), and there might be real-world scenarios where this is required and the graph database does not support it.

    Spatial Databases

    We all use Foursquare, Google+, and Facebook check-ins for location-aware check-ins. All the location-aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature rich and users are demanding more and more information from them, for example about movies, coffee shops or places to see. They all run with the help of spatial databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions like “What is the distance between two locations?” or “What is the area of an interesting place?” When we think about it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implemented on top of the RDBMS PostgreSQL. This makes it totally unique, as it offers the best of both worlds.

    Courtesy: mushroom network

    Tomorrow

    In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Hive.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
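    To make the "distance between two locations" question concrete, here is a minimal sketch (plain TypeScript, not PostGIS) of the haversine great-circle distance that a spatial database computes, in effect, for every such query; coordinates are assumed to be in decimal degrees:

        // Great-circle distance between two lat/lon points, in kilometres.
        const EARTH_RADIUS_KM = 6371;

        function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
          const toRad = (deg: number) => (deg * Math.PI) / 180;
          const dLat = toRad(lat2 - lat1);
          const dLon = toRad(lon2 - lon1);
          const a =
            Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
          return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
        }

        // Example: roughly 4,100 km between San Francisco and New York City.
        console.log(haversineKm(37.77, -122.42, 40.71, -74.01));

    A production spatial database layers spatial indexes (R-tree style structures, for example) on top of calculations like this so that a query such as "find every coffee shop within 2 km" does not have to touch every row.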

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:

    - authenticate my users – login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
    - run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
    - schedule a JavaScript job to run every 15 or more minutes

    I had no idea of the magic that was happening inside… Where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those JavaScripts on the table CRUD actions – are they interpreted, and what are they exactly? After working for some time with Azure Mobile Services I became a lot wiser:

    - Those tables are just normal tables in an Azure SQL Server 2012 database.
    - Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string-type column is created instead of a datetime column.
    - You can connect to the Azure SQL Server with SQL Management Studio, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices.
    - When you create a table through a SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer.
    - You can also go to the SQL database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data.
    - The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance.
    - The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JSHint is used for script validation.
    - The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded and which modules are available on Azure Mobile Services.

    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named "query" to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features like creating web APIs, using shared code from your scripts, command-line tools for managing Azure Mobile Services (uploading and downloading scripts, for example), support for node modules, and git support:

    http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx

    In the meantime I rewrote all my "service-like" table scripts as API scripts (see the sketch below), which works like a breeze. A bad thing about the current state of Azure Mobile Services is that the git support does not work if you are a co-administrator of your Azure subscription and not an administrator (as in my case).
    Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so it is a no-go from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013 Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that's the place where they are listening (I hope!).
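    For context, a custom API script in Azure Mobile Services is a node.js module that exports one handler per HTTP verb. The sketch below is only a minimal illustration of the kind of "service-like" endpoint described above; the table name ('orders') and the route are assumptions, not taken from the original post, and it assumes an authenticated caller.

    // api/ordersummary.js – hypothetical custom API script (table name is an assumption)
    exports.get = function (request, response) {
        // request.service exposes the Mobile Services server objects (tables, mssql, push)
        var ordersTable = request.service.tables.getTable('orders');

        ordersTable.where({ userId: request.user.userId }).read({
            success: function (results) {
                // return a small summary instead of the raw rows
                response.send(200, { count: results.length, items: results });
            },
            error: function (err) {
                response.send(500, { error: err });
            }
        });
    };

    Exposing this as an API (instead of abusing a table's read script) also lets you set permissions per HTTP verb in the portal.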

    Read the article

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up.  When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part…  Little did I know that my fun was just beginning…

    The Problem
    I'll be honest, I had never really secured a site with SSL before.  This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure.  I already understood a bit about SSL and the mechanisms behind how it works – the secure handshake, CAs, chains, etc…  What I didn't realize was the importance of the CSR in the whole process.  Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request.  When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store.

    Since the online interface had been used to generate the CSR when the certificate was ordered for our site, the certificate came back to us in 5 separate files:

    - A root certificate – (*.crt file)
    - An intermediate certificate – (*.crt file)
    - Another intermediate certificate – (*.crt file)
    - The SSL certificate for our site – (*.crt file)
    - The private key for our certificate – (*.key file)

    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password.  Also, in the case of our SSL certificate, you need to include the private key in the file.  As you can see, we didn't have a PFX file to upload. If you don't get a simple PFX from your hosting provider, but rather multiple files, you will soon find out that the process has turned from something that should be simple into one that borders on a circle of hell… probably between the fifth and seventh somewhere…

    The Solution
    The solution is to take the files that make up the certificate chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure.  I cannot take the credit for this information, as I simply researched a while before finding out how to do this.

    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command:

    openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
    You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion
    When I first looked around for a solution to this problem, there were not many places online that had the information I was looking for.  While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!
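    As a quick local sanity check before uploading, you can verify that the generated PFX and its export password actually load by pointing a throwaway Node.js HTTPS server at the file. This is only a verification trick, not part of the IIS or Azure deployment; the file name and password below are placeholders.

    // verify-pfx.js – load the PFX locally to confirm the password and key/cert pairing
    var https = require('https');
    var fs = require('fs');

    var options = {
        pfx: fs.readFileSync('outcert.pfx'),   // the file produced by the openssl command above
        passphrase: 'your-export-password'     // placeholder – the password chosen during export
    };

    https.createServer(options, function (req, res) {
        res.end('PFX loaded OK');
    }).listen(8443, function () {
        console.log('Test server listening on https://localhost:8443');
    });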

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have prefaced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:

    using "html5"
    using "extratags"

    html {
        head {
            title "Ordering Notice"
            jsinclude "jquery.js"
        }
        body {
            h1 "Ordering Notice"
            p "Dear @name,"
            p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
            p "Here are the items you've ordered:"
            table {
                tr {
                    th "name"
                    th "price"
                }
                for(@item in @item_list) {
                    tr {
                        td @item.name
                        td @item.price
                    }
                }
            }
            if(@ordered_warranty)
                p "Your warranty information will be included in the packaging."
            p(class="footer") {
                "Sincerely,"
                br
                @company
            }
        }
    }

    The "using" keyword indicates which tags to use. "html5" might include all the standard HTML5 tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script>.

    Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single quotes to indicate "no variable substitution", like PHP does. Filter functions can be applied to variables like @variable|filter, and arguments can be passed to the filter as @variable|filter:@arg1,arg2="y" (a small sketch of how such a filter chain could be evaluated follows below). Attributes can be passed to tags by including them in (), like p(class="classname"). You will also be able to include partial templates like:

    for(@item in @item_list)
        include("item_partial", item=@item)

    Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments, where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :) Some questions:

    1. Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    2. Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    3. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it clearer that the "tag" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name reuse.
    4. Do you like the attribute syntax? (round brackets)
    5. How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller.view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... I'm trying to decide which approach to take.
    6. Filtered variables inside quotes: "xxx {@var|filter} yyy", "xxx @{var|filter} yyy", or "xxx @var|filter yyy" – i.e., @ inside the braces, @ outside the braces, or no braces at all.
    I think no braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x"; then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, the first or the second? I think the first one might be better because then we're consistent: the @ is always nudged up against the variable. I'll add more questions in a few minutes, once I get some feedback.
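    For what it's worth, here is a tiny JavaScript sketch of how the filter syntax described above (@variable|filter:arg) could be evaluated at render time. The filter registry and the dateformat implementation are purely illustrative assumptions about how such an engine might work, not part of any existing library.

    // Hypothetical filter registry for the proposed template language
    var filters = {
        // dateformat(value, pattern) – trivial stand-in implementation
        dateformat: function (value, pattern) {
            var d = new Date(value);
            return pattern === 'short' ? d.toLocaleDateString() : d.toISOString();
        },
        upper: function (value) { return String(value).toUpperCase(); }
    };

    // Apply an "@ship_date|dateformat:'short'" style expression against a context object
    function applyFilterExpression(context, expression) {
        var parts = expression.split('|');                 // ["@ship_date", "dateformat:'short'"]
        var value = context[parts[0].replace('@', '')];    // look up the variable in the context
        for (var i = 1; i < parts.length; i++) {
            var pieces = parts[i].split(':');
            var name = pieces[0];
            var args = pieces[1] ? [pieces[1].replace(/'/g, '')] : [];
            value = filters[name].apply(null, [value].concat(args));
        }
        return value;
    }

    console.log(applyFilterExpression({ ship_date: '2013-07-01' }, "@ship_date|dateformat:'short'"));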

    Read the article

  • Move a sphere along the swipe?

    - by gameOne
    I am trying to get a sphere to curl based on the swipe. I know this has been asked many times, but it's still yearning to be answered. I have managed to add force in the direction of the swipe and it works near perfectly. I also have all the swipe positions stored in a list. Now I would like to know how the curl can be achieved. I believe the curve in the swipe can be calculated with the vector dot product: if theta is 0, then there is no need to add the curl; if it is not, then add the curl. Maybe this condition is redundant if I manage to find out how to curl the sphere along the swipe positions. The code that adds the force to the sphere based on the swipe direction is below:

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    public class SwipeControl : MonoBehaviour {
        //First establish some variables
        private Vector3 fp;   //First finger position
        private Vector3 lp;   //Last finger position
        private Vector3 ip;   //some intermediate finger position
        private float dragDistance;   //Distance needed for a swipe to register
        public float power;
        private Vector3 footballPos;
        private bool canShoot = true;
        private bool isKicked = false;   //added: referenced in ReturnBall but missing from the original listing
        private float factor = 40f;
        private List<Vector3> touchPositions = new List<Vector3>();

        void Start() {
            dragDistance = Screen.height * 20 / 100;
            Physics.gravity = new Vector3(0, -20, 0);
            footballPos = transform.position;
        }

        // Update is called once per frame
        void Update() {
            //Examine the touch inputs
            foreach (Touch touch in Input.touches) {
                /*if (touch.phase == TouchPhase.Began) {
                    fp = touch.position;
                    lp = touch.position;
                }*/

                if (touch.phase == TouchPhase.Moved) {
                    touchPositions.Add(touch.position);
                }

                if (touch.phase == TouchPhase.Ended) {
                    fp = touchPositions[0];
                    lp = touchPositions[touchPositions.Count - 1];
                    ip = touchPositions[touchPositions.Count / 2];

                    //First check if it's actually a drag
                    if (Mathf.Abs(lp.x - fp.x) > dragDistance || Mathf.Abs(lp.y - fp.y) > dragDistance) {
                        //It's a drag. Now check what direction the drag was; first check which axis
                        if (Mathf.Abs(lp.x - fp.x) > Mathf.Abs(lp.y - fp.y)) {
                            //If the horizontal movement is greater than the vertical movement...
                            if ((lp.x > fp.x) && canShoot) { //If the movement was to the right
                                //Right move
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                                Debug.Log("right " + (lp.x - fp.x)); //MOVE RIGHT CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce((new Vector3((lp.x-fp.x)/30,10,16))*power);
                                StartCoroutine(ReturnBall());
                            } else {
                                //Left move
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, 10, 16)) * power);
                                Debug.Log("left " + (lp.x - fp.x)); //MOVE LEFT CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                                StartCoroutine(ReturnBall());
                            }
                        } else {
                            //the vertical movement is greater than the horizontal movement
                            if (lp.y > fp.y) { //If the movement was up
                                //Up move
                                float y = (lp.y - fp.y) / Screen.height * factor;
                                float x = (lp.x - fp.x) / Screen.height * factor;
                                rigidbody.AddForce((new Vector3(x, y, 16)) * power);
                                Debug.Log("up " + (lp.x - fp.x)); //MOVE UP CODE HERE
                                canShoot = false;
                                //rigidbody.AddForce(new Vector3((lp.x-fp.x)/30,10,16)*power);
                                StartCoroutine(ReturnBall());
                            } else {
                                //Down move
                                Debug.Log("down " + lp + " " + fp); //MOVE DOWN CODE HERE
                            }
                        }
                    } else {
                        //It's a tap
                        Debug.Log("none"); //TAP CODE HERE
                    }
                }
            }
        }

        IEnumerator ReturnBall() {
            yield return new WaitForSeconds(5.0f);
            rigidbody.velocity = Vector3.zero;
            rigidbody.angularVelocity = Vector3.zero;
            transform.position = footballPos;
            canShoot = true;
            isKicked = false;
        }
    }

    Read the article

  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code again and again, but I probably keep missing the obvious reason why this is happening. Here is an image of my game: it's simply a 6-sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen:

    void C_MediaLoader::display(void)
    {
        float tmp;

        glTranslatef(0, 0, 0);

        // rotate it around the y axis
        glRotatef(angle, 0.f, 0.f, 1.f);
        glColor4f(1, 1, 1, 1);

        // scale the whole asset to fit into our view frustum
        tmp = scene_max.x - scene_min.x;
        tmp = aisgl_max(scene_max.y - scene_min.y, tmp);
        tmp = aisgl_max(scene_max.z - scene_min.z, tmp);
        tmp = (1.f / tmp);
        glScalef(tmp / 5, tmp / 5, tmp / 5);

        // center the model
        //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

        // if the display list has not been made yet, create a new one and
        // fill it with scene contents
        if (scene_list == 0) {
            scene_list = glGenLists(1);
            glNewList(scene_list, GL_COMPILE);
            // now begin at the root node of the imported data and traverse
            // the scenegraph by multiplying subsequent local transforms
            // together on GL's matrix stack.
            recursive_render(scene, scene->mRootNode);
            glEndList();
        }

        glCallList(scene_list);
    }

    void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
    {
        unsigned int i;
        unsigned int n = 0, t;
        struct aiMatrix4x4 m = nd->mTransformation;

        // update transform
        aiTransposeMatrix4(&m);
        glPushMatrix();
        glMultMatrixf((float*)&m);

        // draw all meshes assigned to this node
        for (; n < nd->mNumMeshes; ++n) {
            const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];

            apply_material(sc->mMaterials[mesh->mMaterialIndex]);

            if (mesh->mNormals == NULL) {
                glDisable(GL_LIGHTING);
            } else {
                glEnable(GL_LIGHTING);
            }

            for (t = 0; t < mesh->mNumFaces; ++t) {
                const struct aiFace* face = &mesh->mFaces[t];
                GLenum face_mode;

                switch (face->mNumIndices) {
                    case 1: face_mode = GL_POINTS; break;
                    case 2: face_mode = GL_LINES; break;
                    case 3: face_mode = GL_TRIANGLES; break;
                    default: face_mode = GL_POLYGON; break;
                }

                glBegin(face_mode);
                for (i = 0; i < face->mNumIndices; i++) {
                    int index = face->mIndices[i];
                    if (mesh->mColors[0] != NULL)
                        glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                    if (mesh->mNormals != NULL)
                        glNormal3fv(&mesh->mNormals[index].x);
                    glVertex3fv(&mesh->mVertices[index].x);
                }
                glEnd();
            }
        }

        // draw all children
        for (n = 0; n < nd->mNumChildren; ++n) {
            recursive_render(sc, nd->mChildren[n]);
        }

        glPopMatrix();
    }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have some help.

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does, start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I'll be speaking to several notable points about our product.

    Graceful Escalation via Conversations: On Track addresses the "Collaboration Problem" through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a "Conversation") that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis – all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need, so you only incur the complexity you need.

    Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes.  By association with specific processes or business objects, conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value.

    Multi Client, Multi Modal: This On Track 1.0 product release includes the same-day availability of multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device – our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi-modal experience, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice.

    Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.
    Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately, without refreshes or moving back and forth between pages.  We've leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  As well, these capabilities are based on open standards and protocols, and are fully extensible by anyone – enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications.

    Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally, we have been working with early-access customers who are adopting our technology and providing us valuable feedback – which our process has rapidly turned into improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as to check back here for the latest product information.

    Read the article

  • how to assign javascript variable value to the google analytics script? [migrated]

    - by Vinoth Prakash
    I have assigned two values to two hidden fields on the server side and accessed those values on the client side using script. I have written the Google Analytics code and set up custom variables. I need to pass the two values stored in the JavaScript variables to the "value" of the custom variables. I have assigned the variables, but the values are not displaying. Please tell me what error I made in the script.

    My aspx code:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head runat="server">
        <title></title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <br />
            Total Pirce&nbsp; &nbsp;: <asp:Label ID="Label1" runat="server" Text="10"></asp:Label><br />
            &nbsp;Ship Price&nbsp; &nbsp; : <asp:Label ID="Label2" runat="server" Text="5"></asp:Label> <br />
            ------------------<br />
            Grand Total : <asp:Label ID="Label3" runat="server" Text="15"></asp:Label><br />
            ------------------
        </div>
        <asp:HiddenField ID="HiddenField1" runat="server" />
        <asp:HiddenField ID="HiddenField2" runat="server" />
        </form>
        <script type="text/javascript">
            var serverhid1 = document.getElementById('HiddenField1').value;
            var serverhid2 = document.getElementById('HiddenField2').value;
            alert(serverhid1)
            alert(serverhid2)

            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-35156990-1']);

            //Set Custom Variable
            _gaq.push(['_setCustomVar', 1, 'TotalPirce', serverhid1, 3]);
            _gaq.push(['_setCustomVar', 2, 'Shipping', 'yes', 3]);
            _gaq.push(['_setCustomVar', 3, 'GrandTotal', check(), 3]);
            _gaq.push(['_setDomainName', 'none']);
            _gaq.push(['_trackPageview']);

            (function () {
                var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
            })();
        </script>
    </body>
    </html>

    cs code:

    protected void Page_Load(object sender, EventArgs e)
    {
        HiddenField1.Value = Label1.Text;
        HiddenField2.Value = Label2.Text;
    }
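    Two things in the posted script stand out, and a hedged sketch of a corrected block follows: check() is never defined, so the script throws a ReferenceError and none of the _gaq.push calls after it run, and _setCustomVar expects string values. The sketch assumes the hidden fields really do render with those exact client IDs (which depends on the page's ClientIDMode), and the grand-total computation shown is just one possible way to derive the third value.

    // Sketch of a corrected custom-variable block (assumes the hidden field IDs resolve as written)
    var serverhid1 = document.getElementById('HiddenField1').value;   // total price as a string
    var serverhid2 = document.getElementById('HiddenField2').value;   // ship price as a string
    var grandTotal = (parseFloat(serverhid1) + parseFloat(serverhid2)).toString();

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-35156990-1']);

    // custom variable values must be strings; avoid calling undefined functions like check()
    _gaq.push(['_setCustomVar', 1, 'TotalPrice', serverhid1, 3]);
    _gaq.push(['_setCustomVar', 2, 'Shipping', 'yes', 3]);
    _gaq.push(['_setCustomVar', 3, 'GrandTotal', grandTotal, 3]);
    _gaq.push(['_setDomainName', 'none']);
    _gaq.push(['_trackPageview']);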

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation. Now, whilst the majority of the installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of the templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly) then you may lose all your custom settings the next time you run initialSetup. To counter this, customers can either override templates with their own, or use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you become responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits as they seem to represent the least-effort solution. The way to find the available user exits is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form:

    #ouaf_user_exit <user exit name>

    Where <user exit name> is the name of the user exit. User exits are not always present, but they are in the places we feel are most likely to be changed. If a user exit does not exist, you can always use a custom template instead. Now let's look at an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file has the basic settings contained in it to manage the product. If you want to take advantage of the Oracle WebLogic advanced settings, you can use the console to make those changes and they will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml, or use user exits. The technique is this: make the change in the console, and when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old and new versions of the config.xml, determine what to add, and then find the user exit to put it in by examining the base template. For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):

    <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

    Now run initialSetup to reflect the change, and if you check the splapp/config/config.xml file you will see the change applied for you.
    Now, how did I know which include file? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include filename with "CM_" to denote it as a custom user exit. This tells the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products built on the Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code). You are simply taking advantage of the same mechanism.

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way of defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions

    The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform:

    - Modular pluggability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease.
    - Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC.
    - Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and find errors in.
    - Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it.
    - Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files.
    - Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.

    Bray 2.0 provides a lot of key features for developing valid, robust DFC files:

    - XML validation. A DFC can be validated against the DFC schema specification.
    - DFC Check List. An interactive, minimal guide for creating a complete DFC.
    - Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, and the window provides the ability to quickly find and fix errors.
    - Change Log. Bray audits every change to a DFC and places the changes in a change log for users to peruse.
    - Comments. Users can optionally add comments for other users to see.
    - Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray.
    - Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.

    ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more.

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast – Focus on Oracle Data Profiling and Data Quality 11g
    February 24th, 12am CET

    Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough.

    Agenda
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      - Oracle Data Profiling
      - Oracle Data Quality for Data Integrator
      - Live demo
      - Q&A

    Delivery Format
    This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article
