Search Results

Search found 13228 results on 530 pages for 'covering index'.


  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek tape specialist, Christian Vanden Balck, visited Vienna and agreed to visit customers to give tech talks and update them on the technology boom going on around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:
    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquired StorageTek, and Oracle acquired Sun, the brand lives on: all Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html
    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited; they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1,000 m of tape. There you go: you have your physical space and don't need to cram all that data together. You can write in a reliable pattern, and have space to grow too.
    III. Oracle has a worldwide market share of 62% in recording-head manufacturing. That's right: if you are running LTO drives, chances are you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.
    IV. You can store 1 Exabyte of data in a single tape library. Yes, an Exabyte. That is 1,000 Petabytes. Or a million Terabytes. A thousand million Gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10 TB of data (compressed). You can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (n.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)
    V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 tape technology had a major presence. EMC, in a sudden burst of sanity, is embracing tape again.
    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers.
    The cartridge:
    - Native 5 TB capacity, 10 TB with compression.
    - Over a kilometer of tape within the cartridge, and it's locked when unmounted: no rattling of your data.
    - Replaced the metal-particle data layer with BaFe (barium ferrite); metal particles lose around 7% of their magnetism within 30 days, BaFe does not. Yes, we employ solid-state physicists doing R&D on demagnetisation in our labs.
    - Can be partitioned: storage tiering within the cartridge!
    The drive:
    - 2 GB cache.
    - Encryption implemented in hardware, with no performance hit.
    - 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput.
    - Leads the tape while never touching the data side of it, protecting your data physically too.
    - Data integrity checking (CRC recalculation) on tape within the drive, without having to read it back to the server.
    - Reorders data from tape order, delivering it back in application order.
    - Writes 32 tracks at once, and reads them back for CRC checking at once.
    VII. You only use 20% of your data on a regular basis. The other 80% just lies around for years, on continuously spinning disks, doubly consuming energy (power and cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand and transparently for clients between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10 TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html
    VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries are, just like the data-carrying tape cartridges, built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.
    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor and analyze the dataflow, health and performance of drives and libraries. See: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html
    X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is the T10000C.
    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)
    X++. The TallBots, the robots moving the cartridges around in the StorageTek library from tape slots to the drives, are cableless. Cables, belts and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data.
    (X++)++: Tape beats SSDs and disks. In terms of throughput (252 MB/s), in terms of TCO (disks consume around 290x more power and cooling), and in terms of capacity (10 TB on a single medium, and soon more).
    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in your growing datacenters? Do you find EMC's 180° turn on tape interesting, and appreciate it at the same time? With all that, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • Virtual Trade Show Available On Demand

    - by Theresa Hickman
    If you missed the Oracle Applications Virtual Trade Show on Feb. 3rd, 2011, you can still view all the recordings now and for the next three months. There are 36 sessions at 30 minutes each, covering 5 tracks: Oracle E-Business Suite, PeopleSoft, JD Edwards, Fusion, and Hyperion. Multiple product areas are covered, from Financials and Procurement to Supply Chain, CRM, and Performance Management. The following lists the Financials sessions for the various product lines:
    Planning Your Successful Upgrade to Oracle E-Business Suite Financials 12.1. In this session, Bryant and Stratton College talk about their upgrade.
    Planning Your Successful Upgrade to PeopleSoft Financials 9.1. In this session, the University of Central Florida shares its upgrade story.
    Fusion Financials: The New Standard for Finance. In this session, Terrance Wampler, VP of Financial Application Strategy, discusses the business value of Oracle's next-generation financial applications and how customers can take advantage of Fusion Financials alongside their existing investments.
    Click here to register and view any session recording at your convenience!

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (the current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize the response to a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC container and my StructureMap initialization code looks like this:

        1: using StructureMap;
        2: using System.Net.Http;
        3:
        4: namespace FakeChannelExample
        5: {
        6:     public static class IoC
        7:     {
        8:         public static IContainer Initialize()
        9:         {
        10:             ObjectFactory.Initialize(x =>
        11:             {
        12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
        13:             });
        14:             return ObjectFactory.Container;
        15:         }
        16:     }
        17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in it, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios. If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just "following the links" that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel that can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response of my "fake" HTTP request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        1: using System;
        2: using System.IO;
        3: using System.Net.Http;
        4: using System.Runtime.Serialization;
        5: using System.Threading;
        6: using FakeChannelExample.Models;
        7:
        8: namespace FakeChannelExample.Tests
        9: {
        10:     public class FakeHttpChannel<T> : HttpClientChannel
        11:     {
        12:         private T responseObject;
        13:
        14:         public FakeHttpChannel(T responseObject)
        15:         {
        16:             this.responseObject = responseObject;
        17:         }
        18:
        19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        20:         {
        21:             return new HttpResponseMessage()
        22:             {
        23:                 RequestMessage = request,
        24:                 Content = new StreamContent(this.GetContentStream())
        25:             };
        26:         }
        27:
        28:         private Stream GetContentStream()
        29:         {
        30:             var serializer = new DataContractSerializer(typeof(T));
        31:             Stream stream = new MemoryStream();
        32:             serializer.WriteObject(stream, this.responseObject);
        33:             stream.Position = 0;
        34:             return stream;
        35:         }
        36:     }
        37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a serializer other than the DataContractSerializer. You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.

    Read the article

  • Why do non-working websites look like search indexes?

    - by Asaf
    An example of this can be seen on every .tk domain that doesn't exist: just type www.givemeacookie.tk or something similar, and you land on a fake search engine/index. Now these things are all over. I used to always ignore them because they were obviously just fake templates, but they're everywhere. I was wondering: where did this originate (here it's searchmagnified, but on their website it's also something generic)? Is there a proper name for this type of website?

    Read the article

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find even a single discussion about it on the internet, nowhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the back and forward arrows are in their unescaped format. This isn't just true for botsvsbrowsers, but several other user-agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e., are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot decide or understand the process by which this change is made (I know how to change a user agent); I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8-or-9-point-something browser version. I've run some statistical tests, parsing entries from all over the place, writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent, normally when a crawler or bot provides its service URL. That is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".
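    For the parsing side, here is a small sketch of how such entries could be flagged and split apart during analysis, so they can be excluded or studied as their own bucket (Java; the regex is an illustration built from the example above, not a canonical spam signature):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class UserAgentSpamCheck {
            // Matches an HTML anchor embedded in a user-agent string, e.g.
            // Mozilla/4.0 <a href="http://example.com/x">anchor text</a> (Windows NT 5.1; ...)
            private static final Pattern EMBEDDED_ANCHOR = Pattern.compile(
                    "<a\\s+href=[\"']([^\"']+)[\"']\\s*>(.*?)</a>", Pattern.CASE_INSENSITIVE);

            public static boolean isSpammy(String userAgent) {
                return EMBEDDED_ANCHOR.matcher(userAgent).find();
            }

            public static void main(String[] args) {
                String ua = "Mozilla/4.0 <a href=\"http://osis-uk.co.uk/disabled-equipment\">"
                        + "disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60";
                Matcher m = EMBEDDED_ANCHOR.matcher(ua);
                if (m.find()) {
                    // Report link target and anchor text separately, so flagged rows
                    // can be excluded from browser stats or analyzed on their own.
                    System.out.println("link: " + m.group(1) + ", text: " + m.group(2));
                }
            }
        }

    Note that this deliberately does not match the plain +http://www.someSite.com form that crawlers append, matching the distinction drawn above.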

    Read the article

  • Oracle Solaris at OpenWorld Tokyo 2012

    - by Markus Weber
    Oracle OpenWorld Tokyo will open its doors on Wednesday, April 4, 2012, and run until Friday, April 6, 2012, in Roppongi. If you've been in Tokyo as a gaijin, or foreigner, you know exactly where that is. Many of Oracle's top executives will be there, including Larry Ellison, Mark Hurd, and John Fowler. The keynotes that they are covering will be very interesting, for sure. Now, whether you will actually be there or not, you might still find it interesting that several great Solaris-related sessions will be held there, especially as part of the "Oracle Develop" track, such as:
    "Oracle Solaris 11 - Developers Need To Know"
    "How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris"
    "Oracle Solaris Tuning Contest"
    "IT Assets preservation and constructive migration with Oracle Solaris virtualization"
    And of course John Fowler's keynote "Server and Storage Systems Strategy". The complete schedule in English can be found here. We hope you can make it. If not, there will always be the San Francisco one.

    Read the article

  • Java developer webcasts for customers and partners

    - by Jürgen Kress
    Accelerate Your Development with Oracle WebLogic Suite: Many organisations are reducing travel, conference, and training budgets for their developers without any change to the results expected of those developers. So how can you keep up with the latest developments? By receiving training, delivered free of charge, at your desk! Join us during February and March for a series of online events designed and run by the development team at Oracle. Learn how Oracle WebLogic Suite enables a whole new level of productivity for enterprise developers.
    Virtual Developer Day – 10th February: Starting with our Virtual Developer Day on 10th February, join us for a blend of hands-on labs, live chat and presentations covering the latest on WebLogic, Java EE 6 and the programming tenets that have made it a true platform breakthrough.
    Weekly WebLogic Webcasts from 17th February to 17th March: Afterwards, join us every week from 17th February to 17th March for our weekly one-hour webcasts, where we will show you how to build an application from the ground up using Java and Java EE technologies. Presented by the engineering team for WebLogic, these webcasts will be of great value to developers and architects, not just those already using WebLogic. For registration, full session abstracts and the schedule, please click here. Don't miss out! Register now to join our virtual events and keep up with all the latest developments. Find out more and register now. For more information on SOA Specialization and the SOA Partner Community, please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • How to Configure Name Servers using Webmin on an Unmanaged VPS on CentOS

    - by John
    I want to configure my site's name servers and all related stuff. I'm not able to find any good documentation with steps to do it straightforwardly, without getting into the nitty-gritty of it. I wish I could afford a managed VPS. I feel that I'm the odd one out looking for this documentation. I've followed the docs at these places: http://www.webtop.com.au/blog/how-to-setup-dns-using-webmin-2009052848 , http://www.beer.org.uk/bsacdns and https://www.virtacoresupport.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=134

    Read the article

  • Subdomains on WampServer

    - by MohamedKadri
    Hello, I'm working on WampServer for development. I've set up the domain tuniguide.local and it works fine with this configuration:

        DocumentRoot "D:\www\tuniguide"
        ServerName tuniguide.local

    But when I wanted to add a subdomain fr.tuniguide.local, I get a 404 Not Found with this configuration:

        DocumentRoot "D:\www\tuniguide\fr"
        ServerName fr.tuniguide.local

    It gives me this message: The requested URL /www/tuniguide/index.php was not found on this server. Is there something that I missed? Thanks.
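    For comparison, here is a minimal sketch of how the two name-based vhosts are often declared in full (Apache 2.2-style syntax; the NameVirtualHost line and file layout are assumptions, not the poster's actual config):

        # httpd-vhosts.conf (sketch)
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName tuniguide.local
            DocumentRoot "D:/www/tuniguide"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName fr.tuniguide.local
            DocumentRoot "D:/www/tuniguide/fr"
        </VirtualHost>

    Both hostnames would also need entries in the Windows hosts file (C:\Windows\System32\drivers\etc\hosts), since .local names are not resolved automatically:

        127.0.0.1 tuniguide.local
        127.0.0.1 fr.tuniguide.local

    One relevant Apache behavior: a request whose Host header matches no ServerName is served by the first vhost, which would explain a path like /www/tuniguide/index.php being tried for the subdomain.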

    Read the article

  • adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea how to modify it so that it does (or how to use it correctly)? I've been beating my head against this problem for a couple of hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls, etc.? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.
    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:
    - Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-Generated/Sensor Data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
    - Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.
    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:
    - Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise, including live data from packaged and custom applications – for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
    - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
    - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization or a similar peer organization on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.
    Big Challenges - Big Opportunities: As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.
    The Right GRC & Big Data Partnership Becomes Key: The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.
    A Blueprint and Roadmap Service for Big Data: Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.
    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and Information Architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    - Provide initial guidance on big data candidate selection for migrations or implementation
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    - Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps
    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big data is here to stay, and risk management certainly is not going anywhere; ultimately, financial services organizations that embrace big data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • OWB 11gR2 – JDBC Helper Utility

    - by David Allan
    One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn't display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas: to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (this is used in the DatabaseMetaData.getTables API call within the basic JDBC metadata import); to figure out the data types returned from the JDBC driver when you see columns skipped because of "no datatype supported" messages; and also... I can do it via scripting and don't need to recompile classes and stuff :-) Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script); the script then calls a basic tcl procedure which writes the schemas, tables and columns with various properties to standard out. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes out the corresponding metadata. You can add more details as you need and execute it from the OMBPlus panel within OWB. Download the sample tcl jdbc script here. There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK, covering everything from platform definitions, custom metadata importers, application adapters, code templates, etc. You can find a bunch of goodies on the OWB SDK here.
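    As a cross-check outside OMBPlus, the same dump can be produced with a few lines of plain JDBC, using the very DatabaseMetaData.getTables call mentioned above. A minimal sketch (the thin-driver URL and scott/tiger credentials are placeholders to substitute for your own driver and connection):

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class JdbcMetadataDump {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details: substitute your JDBC URL and credentials.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger")) {
                    DatabaseMetaData md = conn.getMetaData();
                    // The same call the import wizard makes: list tables per schema.
                    try (ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                        while (tables.next()) {
                            String schema = tables.getString("TABLE_SCHEM");
                            String table = tables.getString("TABLE_NAME");
                            System.out.println(schema + "." + table);
                            // Columns and their driver-reported type names, to spot
                            // the ones skipped as "no datatype supported".
                            try (ResultSet cols = md.getColumns(null, schema, table, "%")) {
                                while (cols.next()) {
                                    System.out.println("  " + cols.getString("COLUMN_NAME")
                                            + " : " + cols.getString("TYPE_NAME"));
                                }
                            }
                        }
                    }
                }
            }
        }

    The schema names printed here are exactly what should go into the location definition, which is usually where the mismatch hides.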

    Read the article

  • Where can I download list of all .com domains registered in the world

    - by John
    I just need registered .com domain names. I know this list is available at: http://www.verisigninc.com/en_US/products-and-services/domain-name-services/grow-your-domain-name-business/tld-zone-access/index.xhtml (looks like it could take 4 weeks for approval) and http://www.premiumdrops.com/zones.html . I can also extract domain names using the domain search API at domaintools.com. Is there any other source where I can find this list?

    Read the article

  • Register now for the UK Windows Azure Self-paced Interactive Learning Course starting May 10th

    - by Eric Nelson
    [Suggested twitter tag #selfpacedazure] We (myself and David Gristwood) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a six-week period starting May 10th – without ever needing to leave the comfort of your home/office. The course is derived from the internal training Microsoft gives on Azure, which is both fun and challenging in equal parts – and we felt it was just too good to keep to ourselves! We will be releasing more details nearer the date, but hopefully the following is enough to convince you to register and … recommend it to a colleague or three :-) What we have produced is the "Microsoft Azure Self-paced Learning Course". This is a free, interactive, self-paced, technical training course covering the Windows Azure platform – Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six-week period finishing on June 18th. During the course you will work from your own home or workplace, get involved via interactive Live Meeting sessions, watch online videos, work through hands-on labs, and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them. This is a technical course, aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform; hence a prerequisite for this course is at least six months' programming in the .NET framework and Visual Studio. Check out the full details of the event or go straight to registration.
    The course outline is:
    Week 1 - Windows Azure Platform
    Week 2 - Windows Azure Storage
    Week 3 - Windows Azure Deep Dive and Codename "Dallas"
    Week 4 - SQL Azure
    Week 5 - Windows Azure Platform AppFabric Access Control
    Week 6 - Windows Azure Platform AppFabric Service Bus
    If you have any questions about the course and its suitability, please email [email protected].

    Read the article

  • links for 2011-02-04

    - by Bob Rhubart
    Oracle WebCenter Suite - Giving Users a Modern Experience: Webcast Q&A (Oracle Enterprise 2.0 Blog) - Kellsey Ruppel shares a summary of the viewer Q&A from the recent Oracle WebCenter Suite webcast. (tags: oracle otn enterprise2.0 webcenter)
    Oracle Fusion Middleware Security: Oracle Access Manager 11g Academy: The Policy Model (Part 1) - Brian Eidelman kicks off a series of posts covering Oracle Access Manager. (tags: oracle otn fusionmiddleware security)
    The Tom Kyte Blog: A short podcast... - Oracle senior technical architect Tom Kyte shares information on a series of upcoming live, in-person events in which he will participate. (tags: oracle otn ioug)
    Oracle and AIIM - Putting Enterprise 2.0 to Work (Oracle Enterprise 2.0 Blog) - Brian Dirking shares a recap of the recent online Enterprise 2.0 presentation by Andy MacMillan (Oracle) and Doug Miles (AIIM). (tags: oracle otn enterprise2.0)
    Arun Gupta: WebLogic Developer/Production Web Profile, Full Java EE 6 Platform - Chat Transcript and Slides from OTN Virtual Developer Day - Arun Gupta shares chat transcripts and more from the recent OTN Virtual Developer Day focused on WebLogic. (tags: weblogic java)
    Andrejus Baranovskis's Blog: How to Install Oracle ECM 11g PS3 - Domain Configuration Hint - Concise instructions from Oracle ACE Director Andrejus Baranovski. (tags: oracle otn oracleace enterprise2.0 weblogic)
    Oracle BI EE 11g & Oracle ADF - Part 1 - Understanding Security Integration - Rittman Mead's Venkatakrishnan J explores "how much Oracle ADF or the Oracle Fusion Middleware has influenced most of the features in BI EE 11g." (tags: oracle oracleace businessintelligence obiee)
    Gone With the Wind: Where Have All the Composites Gone? - SOA author Antony Reynolds solves a mystery. (tags: oracle otn soa)
    Playing with Oracle 11gR2, OEL 5.6 and VirtualBox 4.0.2 (1st Part) - "This installation should never be used for Production or Development purposes. This installation was created for educational purpose only, and is extremely helpful to learn and understand how Oracle works if you do not have access to a traditional hardware resource." - Oracle ACE Director Francisco Munoz Alvarez (tags: oracle otn virtualbox virtualization)

    Read the article

  • Trouble upgrading from 12.04 to 14.04.1 lts

    - by Stanley Hlatshwayo
    After running these commands from the command line:
    sudo apt-get update
    sudo apt-get dist-upgrade
    sudo do-release-upgrade -d
    I get the following error message:
    W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/oneiric/main/source/Sources 404 Not Found , E:Some index files failed to download. They have been ignored, or old ones used instead.
    Please help, anyone. How can I fix this bug?

    Read the article

  • Card deck and sparse matrix interview questions

    - by MrDatabase
    I just had a technical phone screen with a start-up. Here are the technical questions I was asked... and my answers. What do you think of these answers? Feel free to post better answers :-)
    Question 1: How would you represent a standard 52-card deck in (basically any language)? How would you shuffle the deck?
    Answer: Use an array containing a "Card" struct or class. Each instance of Card has some unique identifier: either its position in the array or a unique integer member variable in the range [0, 51]. Shuffle the cards by traversing the array once from index zero to index 51, randomly swapping the ith card with "another card" (I didn't remember exactly how this shuffle algorithm works). Watch out for using the same probability for each card... that's a gotcha in this algorithm. I mentioned the algorithm is from Programming Pearls.
    Question 2: How would you represent a large sparse matrix? The matrix can be very large, like 1000x1000, but only a relatively small number (~20) of the entries are non-zero.
    Answer: Condense the array into a list of the non-zero entries. For a given entry (i,j) in the array, "map" (i,j) to a single integer k, then use k as a key into a dictionary or hashtable. For the 1000x1000 sparse array, map (i,j) to k using something like f(i, j) = i + j * 1001; the multiplier just needs to exceed the largest possible i (with i in [0, 999], anything from 1000 up works, so 1001 is safe). I didn't recall exactly how this mapping worked, but the interviewer got the idea (I think).
    Are these good answers? I'm wondering because after I finished the second question the interviewer said the dreaded "well, that's all the questions I have for now." Cheers!
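    For what it's worth, here is a minimal sketch of both answers in Java (the question allowed any language; the names and the zero default for absent entries are illustrative choices). The shuffle is the Fisher-Yates algorithm alluded to above, where drawing j uniformly from 0..i is exactly the equal-probability "gotcha":

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Random;

        public class InterviewAnswers {
            // Question 1: Fisher-Yates shuffle over a deck encoded as ints 0..51.
            static void shuffle(int[] deck, Random rng) {
                for (int i = deck.length - 1; i > 0; i--) {
                    int j = rng.nextInt(i + 1); // uniform over 0..i inclusive
                    int tmp = deck[i];
                    deck[i] = deck[j];
                    deck[j] = tmp;
                }
            }

            // Question 2: sparse 1000x1000 matrix; fold (i, j) into one key.
            static final int WIDTH = 1000; // any multiplier > max i is collision-free

            static Map<Integer, Double> sparse = new HashMap<>();

            static void set(int i, int j, double value) {
                sparse.put(i + j * WIDTH, value);
            }

            static double get(int i, int j) {
                return sparse.getOrDefault(i + j * WIDTH, 0.0); // absent entries read as zero
            }
        }

    With ~20 non-zero entries, the map holds 20 keys instead of a million cells, which is the whole point of the representation.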

    Read the article

  • DRDA protocol specs

    - by Alon Rew
    When connecting to a server using the DRDA protocol, is it true that the first client-to-server command MUST be EXCSAT chained with ACCSEC? I found 2 different answers when I googled it. If you look at The Open Group web site (https://collaboration.opengroup.org/dbiop/), it can be understood that the answer is NO. However, if you look at the IBM website (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.ims11.doc.apr%2Fims_ddm_excsat.htm), it can be understood that the answer is YES. So which is it?

    Read the article

  • Discount Multilingual Day in the Life of User Experience

    - by ultan o'broin
    Super article by the WikiMedia Foundation engineering folks about Designing for the Multilingual Web, using the Wikipedia Universal Language Selector user interface as an example. Great ideas about tools that are available, as well as covering the basics of wireframing (mockups), prototyping, and user testing. Lots of inspiration there for developers and builders of apps who want to ensure their user experience (UX) really delivers for a global audience. Check out the use of the Firefox-based Pencil, how to translate your mockups, and how to perform remote user testing using Google+ Hangouts. Paul Giner demonstrates how to translate mockups. It's a little clunky and homespun in parts (I would prefer it if tools such as Pencil or Balsamiq Mockups could round-trip directly from SVG to XLIFF, for example, and Pencil doesn't work yet with the latest versions of Firefox), and I am not sure how it really scales to enterprise-level use. However, the UX methodology is basically sound, and it reinforces the importance of designing and testing in more than one language. The most powerful message for me is that you do not need special resources, training or expensive tools to deliver great-looking usable apps if you're a developer. Definitely worth considering if you're building apps out there in the community.

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards - Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices.
    Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford - Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, Snowflake Schema, and the Connected Replication pattern.
    Upgrading ALSB services to OSB | John Chin-a-Woeng - John Chin-a-Woeng walks you through the upgrade from AquaLogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3).
    SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey - Commiskey illustrates a scenario.
    JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema - The "Test WebService" button in the WSDL Editor in JDeveloper 11gR2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same."
    Enterprise Business Intelligence 11g Seminar with Mark Rittman - Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University in Dublin, IE, July 4-5, 2011.
    Data Integration Webcast Series - Join Oracle experts for a series covering our data integration solutions. You'll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • Ubuntu 12.10 not updating. Failed to download repository information

    - by vinay
    I recently tried to update using the Update Manager. The update stopped and I received the error message:
    Failed to download repository information
    W:Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/source/Sources 404 Not Found , W:Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found , E:Some index files failed to download. They have been ignored, or old ones used instead.
    What is the problem?

    Read the article

  • How to handle non-existent subdirectories?

    - by Question Overflow
    I have a dynamic website with friendly URLs. For example: instead of /user.php?id=123, I have /user/123; instead of /index.php?category=fishes, I have /fishes. But how do I handle non-existent subdirectories such as /about/123? Currently it gives a 200 success instead of a 404 Not Found error. Is there a way to deal with non-existent subdirectories in the Apache config and at the same time allow for friendly URLs? Or do I have to handle this individually for each PHP script?
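    One common arrangement (a sketch, assuming mod_rewrite and a single front controller; the file name index.php is an assumption) is to route every request that isn't a real file or directory through one script, which then owns the 404 decision for the whole site:

        # .htaccess (sketch)
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]

    The front controller matches $_SERVER['REQUEST_URI'] against its known routes and, for anything unrecognized (such as /about/123), sends the status itself with http_response_code(404) before rendering an error page. Apache then never needs to know which friendly URLs are valid, and no per-script handling is required.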

    Read the article

  • FileStream and FileTable in SQL Server 2012

    SQL Server 2012 enhanced the SQL Server 2008 FileStream data type by introducing FileTable, which lets an application integrate its storage and data management components to allow non-transactional access and provide integrated SQL Server services. Arshad Ali explains how.

    Read the article

  • Improving server security [closed]

    - by Vicenç Gascó
    I've been developing webapps for a while ... and I always had a sysadmin who made the environment perfect to run my apps with no worries. But now I am starting a project on my own, and I need to set up a server while knowing next to nothing about it. All I need is a Linux box with a web server (I usually used Apache), PHP and MySQL. I'll also need SSH, SSL to run https://, and FTP to transfer files. I know how to install almost everything (I need advice about SSL) with Ubuntu Server, but I am concerned about the security topic: firewall, open/closed ports, PHP security, etc. Where can I find a good guide covering these topics? Everything else on the server... I don't need it, and I want to know how to remove it, to avoid resource consumption. Final note: I'll be running the webapp at Amazon EC2 or Rackspace cloud servers. Thanks in advance!!
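    As one illustration of the firewall piece only (a sketch using ufw, which ships with Ubuntu Server; the ports are assumptions based on the services listed above):

        # default-deny inbound, then open just what the stack needs
        sudo ufw default deny incoming
        sudo ufw default allow outgoing
        sudo ufw allow 22/tcp     # SSH
        sudo ufw allow 80/tcp     # HTTP
        sudo ufw allow 443/tcp    # HTTPS
        sudo ufw enable

    Plain FTP would additionally need port 21 (plus a passive port range), which is one reason SFTP over the already-open SSH port is often preferred instead.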

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* path finding to decide the course of a sprite through multiple waypoints. I have done this for point-A-to-point-B movement but am having trouble with multiple waypoints, because on slower devices, when the FPS drops and the sprite travels PAST a waypoint, I am lost as to the math to switch directions at the proper place.
    EDIT: To clarify, my path-finding code is separate, in a game thread; this onUpdate method lives in a sprite-like class, and sprite updating happens in the UI thread. To be even more clear, the path is only updated when objects block the map; at any given point the current path could change, but that should not affect the design of the algorithm, if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :-) Here is the scenario:

        public void onUpdate(float pSecondsElapsed) {
            // this could be 4x speed, so on slow devices the travel moved between
            // frames could be very large. What happens with my original algorithm
            // is it will start actually doing circles around the next waypoint..
            pSecondsElapsed *= SomeSpeedModificationValue;

            final int spriteCurrentX = this.getX();
            final int spriteCurrentY = this.getY();

            // getCoords contains a large array of the coordinates to each waypoint.
            // A waypoint is a destination on the map, defined by tile column/row. The
            // path finder converts these waypoints to X,Y coords.
            //
            // I.E:
            // Given a set of waypoints of 0,0 to 12,23 to 23,0 on a 23x23 tile map,
            // each tile being 32x32 pixels, this would translate in the path finder to:
            // -> 0,0 to 12,23
            // Coord : x=16 y=16
            // Coord : x=16 y=48
            // Coord : x=16 y=80
            // ...
            // Coord : x=336 y=688
            // Coord : x=336 y=720
            // Coord : x=368 y=720
            //
            // -> 12,23 to 23,0  - NOTE: this direction change gives me trouble specifically
            // Coord : x=400 y=752
            // Coord : x=400 y=720
            // Coord : x=400 y=688
            // ...
            // Coord : x=688 y=16
            // Coord : x=688 y=0
            // Coord : x=720 y=0
            //
            // The current update index; the index specifies the coordinate above.
            // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80
            final int[] coords = getCoords( ... );

            // now I have the coords, how do I detect where to set the position? The tricky part
            // for me is when a direction changes, how do I calculate based on the elapsed time
            // how far to go up the new direction... I just can't wrap my head around this.
            this.setPosition(newX, newY);
        }
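    A sketch of one standard answer, under assumptions the post doesn't state (a speedPixelsPerSecond value, a nextIndex counter for the next waypoint, a coordCount total, and float sprite positions; only getCoords() and setPosition() come from the code above): treat the frame's travel as a distance budget and spend it segment by segment, so whatever is left after reaching a waypoint carries into the next leg instead of overshooting along the old direction.

        public void onUpdate(float pSecondsElapsed) {
            float remaining = speedPixelsPerSecond * pSecondsElapsed; // distance budget this frame
            float x = this.getX();
            float y = this.getY();

            while (remaining > 0f && nextIndex < coordCount) {
                final int[] target = getCoords(nextIndex); // next waypoint coordinate
                float dx = target[0] - x;
                float dy = target[1] - y;
                float segment = (float) Math.sqrt(dx * dx + dy * dy);

                if (segment <= remaining) {
                    // Reached (or would pass) this waypoint: snap to it, deduct that
                    // much of the budget, and loop so the remainder travels up the
                    // NEXT leg in its own direction.
                    x = target[0];
                    y = target[1];
                    remaining -= segment;
                    nextIndex++;
                } else {
                    // Partway along the current leg: move proportionally and stop.
                    x += dx / segment * remaining;
                    y += dy / segment * remaining;
                    remaining = 0f;
                }
            }
            this.setPosition(x, y); // cast to int here if your sprite API stores integer positions
        }

    Because the loop snaps to each waypoint before continuing, a large pSecondsElapsed on a slow device simply consumes several short legs in one frame, and the circling behavior disappears: the sprite can never travel past a waypoint along the old direction.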

    Read the article
