Search Results

Search found 20573 results on 823 pages for 'pretty big scheme'.


  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes provide a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence: analyses that allow analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at implementing such programmes, there are many similar GRC “big data” challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity; early or retrospective violations of regulations, laws or corporate policies and procedures; emerging risks; weakening controls, and so on? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

    Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data.
    Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

    Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by the IT systems that power the enterprise. It includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.

    Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.

    Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change.
    Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resulting information.

    Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails contain millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints.

    Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive action: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policies and procedures, and minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012)

    As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed.

    GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information which, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal, as well as a technical or operational, standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping to kick-start, or even overseeing a big data project. To make a big data GRC initiative work and deliver the desired value, partnering with companies that have a long history of delivering successful GRC solutions, as well as being at the very forefront of technology innovation, becomes key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data.
    Conduct a big data readiness and Information Architecture maturity assessment.
    Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology.
    Provide initial guidance on big data candidate selection for migrations or implementation.
    Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business.
    Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment.
    Conduct an executive workshop with recommendations and next steps.

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
    Big data is here to stay and risk management certainly is not going anywhere; ultimately, the financial services organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • How do I write Push and Pop in Scheme?

    - by kunjaan
    Right now I have

        (define (push x a-list)
          (set! a-list (cons a-list x)))

        (define (pop a-list)
          (let ((result (first a-list)))
            (set! a-list (rest a-list))
            result))

    But I get this result:

        Welcome to DrScheme, version 4.2 [3m].
        Language: Module; memory limit: 256 megabytes.
        > (define my-list (list 1 2 3))
        > (push 4 my-list)
        > my-list
        (1 2 3)
        > (pop my-list)
        1
        > my-list
        (1 2 3)

    What am I doing wrong? Is there a better way to write push so that the element is added at the end and pop so that the element gets deleted from the first?
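    A minimal sketch of one possible fix, for context (not from the original post): set! inside push only rebinds the procedure's local parameter, so the caller's my-list is never changed, and (cons a-list x) also has its arguments reversed. Written as syntax-rules macros (available in R5RS and DrScheme), push and pop can mutate the caller's variable directly; this sketch uses the conventional add-at-the-front stack behaviour, and the names push!/pop! are illustrative.

        ;; Hypothetical names push!/pop!; assumes a Scheme with syntax-rules macros.
        (define-syntax push!
          (syntax-rules ()
            ((_ x lst) (set! lst (cons x lst)))))   ; cons the new element onto the front

        (define-syntax pop!
          (syntax-rules ()
            ((_ lst)
             (let ((head (car lst)))   ; remember the first element
               (set! lst (cdr lst))    ; drop it from the caller's variable
               head))))                ; return it

        ;; (define my-list (list 1 2 3))
        ;; (push! 4 my-list)  ; my-list is now (4 1 2 3)
        ;; (pop! my-list)     ; => 4, and my-list is (1 2 3) again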

    Read the article

  • How do you perform arithmetic calculations on symbols in Scheme/Lisp?

    - by kunjaan
    I need to perform calculations with a symbol. I need to convert the time, which is of hh:mm form, to the minutes passed.

        ;; (get-minutes symbol) -> number
        ;; convert the time in hh:mm to minutes
        ;; (get-minutes 6:19) -> 6 * 60 + 19
        (define (get-minutes time)
          (let* ((a-time (string->list (symbol->string time)))
                 (hour (first a-time))
                 (minutes (third a-time)))
            (+ (* hour 60) minutes)))

    This code is incorrect: I get a character after all that conversion and cannot perform a correct calculation. Do you guys have any suggestions? I can't change the input type. Context: the input is a flight schedule, so I cannot alter the data structure.

    Edit: Figured out an ugly solution. Please suggest something better.

        (define (get-minutes time)
          (let* ((a-time (symbol->string time))
                 (hour (string->number (substring a-time 0 1)))
                 (minutes (string->number (substring a-time 2 4))))
            (+ (* hour 60) minutes)))
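    A minimal sketch of a less position-dependent version, for context (not from the original post): instead of hard-coding substring positions, it scans for the colon, so one- and two-digit hours both work. It assumes the symbol always contains exactly one ':' and reads as a symbol, as in the question.

        (define (get-minutes time)
          (let* ((s (symbol->string time))
                 ;; find the index of the colon
                 (i (let loop ((k 0))
                      (if (char=? (string-ref s k) #\:)
                          k
                          (loop (+ k 1)))))
                 (hour    (string->number (substring s 0 i)))
                 (minutes (string->number (substring s (+ i 1) (string-length s)))))
            (+ (* hour 60) minutes)))

        ;; (get-minutes '6:19)   ; => 379
        ;; (get-minutes '12:05)  ; => 725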

    Read the article

  • Pretty-print HTML5

    - by blinry
    Is there a command line program that pretty-prints (that is, indents, adds line breaks to) HTML5 code? tidy does too much for me (heck, it alters my doctype!), vim too little. What do you use to make your HTML5 code look beautiful?

    Read the article

  • Is there a pretty print for PHP?

    - by Aaron Lee
    I'm fixing some PHP scripts and I'm missing Ruby's pretty printer. i.e.

        require 'pp'
        arr = {:one => 1}
        pp arr

    will output {:one=>1}. This even works with fairly complex objects and makes digging into an unknown script much easier. Is there some way to duplicate this functionality in PHP?

    Read the article

  • scheme basic loop

    - by utku
    I'm trying to write a Scheme function that behaves in a way similar to a loop:

        (loop min max func)

    This loop should apply func over the range from min to max (integers), for example like this:

        (loop 3 6 (lambda (x) (display (* x x)) (newline)))
        9
        16
        25
        36

    and I define the function as

        (define (loop min max fn)
          (cond ((>= max min)
                 ((fn min) (loop (+ min 1) max fn)))))

    When I run the code I get the result and then an error occurs, which I couldn't handle:

        (loop 3 6 (lambda (x) (display (* x x)) (newline)))
        9
        16
        25
        36
        Backtrace:
        In standard input:
        41: 0* [loop 3 6 #]
        In utku1.scheme:
        9: 1 (cond ((>= max min) ((fn min) (loop # max fn))))
        10: 2 [# ...
        10: 3* [loop 4 6 #]
        9: 4 (cond ((>= max min) ((fn min) (loop # max fn))))
        10: 5 [# ...
        10: 6* [loop 5 6 #]
        9: 7 (cond ((>= max min) ((fn min) (loop # max fn))))
        10: 8 [# ...
        10: 9* [loop 6 6 #]
        9: 10 (cond ((>= max min) ((fn min) (loop # max fn))))
        10: 11 [# #]
        utku1.scheme:10:31: In expression ((fn min) (loop # max ...)):
        utku1.scheme:10:31: Wrong type to apply: #<unspecified>
        ABORT: (misc-error)
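    A minimal sketch of one possible fix, for context (not from the original post): the extra pair of parentheses around (fn min) and the recursive call makes Scheme try to apply the value returned by (fn min), which is unspecified here, as a procedure, hence the "Wrong type to apply" error. A cond clause may simply contain both expressions in sequence:

        (define (loop min max fn)
          (cond ((<= min max)
                 (fn min)                     ; run the body for the current value
                 (loop (+ min 1) max fn))))   ; then recurse over the rest of the range

        ;; (loop 3 6 (lambda (x) (display (* x x)) (newline)))
        ;; prints 9, 16, 25 and 36, one per line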

    Read the article

  • BIG IP - HTTPS Health Monitor setup

    - by djo
    I have a Web site for which we have set up health monitoring pages, so we can take our servers in and out of the BIG-IP as we see fit. We have just moved onto BIG-IP, and the issue I have hit is that you set up Health Monitors for ports 80 and 443; the port 80 check works fine, but when I try to get the 443 check to look at our file, it fails. I am aware that hitting this page on the IP address over HTTPS is going to cause a cert error, but I would have guessed that BIG-IP would have been set up just to accept the cert and carry on with the check. Is what I am wanting to do possible? Also, is there a way of just using an HTTP monitor for HTTPS? Because if port 80 has stopped sending traffic and I use the same monitor for 443, it will stop traffic to that too. Any help would be great! Thanks

    Read the article

  • Python: Pretty printing a xml file directly from a tar.gz package

    - by EddyR
    This is the first Python script I've tried to create. I'm reading an XML file from a tar.gz package and then I want to pretty print it. However, I can't seem to turn it from a file-like object into a string. I've tried to do it a few different ways, including str(), tostring(), etc., but nothing is working for me. For testing I just tried to print the string with "print myfile[0:200]" and it always generates "<tarfile.ExFileObject object at 0x10053df10>".

        import os
        import sys
        import tarfile
        from xml.dom.minidom import parseString

        tar = tarfile.open("data/ucd.all.flat.tar.gz", "r")
        getfile = tar.extractfile("ucd.all.flat.xml")

        myfile = str(getfile)
        print myfile[0:200]

        output = parseString(getfile).toprettyxml()
        print output

        tar.close()

    Read the article

  • Why isn't there a good scheme/lisp on llvm?

    - by anon
    There is Gambit Scheme, MIT Scheme, PLT Scheme, Chicken Scheme, Bigloo, Larceny, ...; then there are all the Lisps. Yet, there's not (to my knowledge) a single popular Scheme/Lisp on LLVM, even though LLVM provides lots of nice things like:

    easier to generate code for than x86
    easy to make C FFI calls
    ...

    So why is it that there isn't a good Scheme/Lisp on LLVM?

    Read the article

  • Is Reading the Spec Enough?

    - by jozefg
    This question is centered around Scheme but really could be applied to any Lisp or programming language in general.

    Background: So I recently picked up Scheme again, having toyed with it once or twice before. In order to solidify my understanding of the language, I found the Revised^5 Report on the Algorithmic Language Scheme and have been reading through that, along with my compiler/interpreter's (Chicken Scheme) listed extensions/implementations. Additionally, in order to see this applied, I have been actively seeking out Scheme code in open source projects and trying to read and understand it. This has been sufficient so far for me to understand the syntax of Scheme, and I've completed almost all of the Ninety-nine Scheme problems (see here) as well as a decent number of Project Euler problems.

    Question: While so far this hasn't been an issue and my solutions closely match those provided, am I missing out on a great part of Scheme? Or, to phrase my question more generally, is reading the specification of a language, along with well-written code in that language, sufficient to learn from? Or are other resources (books, lectures, videos, blogs, etc.) necessary for the learning process as well?

    Read the article

  • Adding an element to a list in Scheme

    - by user272483
    I'm using R5RS Scheme and I just want to implement a function that returns the intersection of two given lists, but I can't do that because I cannot add an element to a list. Here is my code. How can I fix it? I'm really a beginner in Scheme; this is my first work using Scheme. Thanks in advance.

        (define list3 '())

        (define (E7 list1 list2)
          (cond ((null? list1) list3)
                ((member (car list1) list2)
                 (append list3 (list (car list1)))))
          (cond ((null? list1) list3)
                ((not (null? list1))
                 (E7 (cdr list1) list2))))

        (E7 '(4 5) '(3 4))
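    A minimal sketch of one way this is usually written, for context (not from the original post): append never modifies list3 in place, it just returns a new list that is immediately discarded, so the common approach is to build the result recursively rather than mutating a global. The name intersection below is illustrative.

        ;; Plain R5RS, no mutation needed.
        (define (intersection list1 list2)
          (cond ((null? list1) '())
                ((member (car list1) list2)
                 (cons (car list1) (intersection (cdr list1) list2)))
                (else (intersection (cdr list1) list2))))

        ;; (intersection '(4 5) '(3 4))  ; => (4)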

    Read the article

  • Inheritance classes in Scheme

    - by DreamWalker
    I'm currently looking into the OOP side of Scheme. I can define a class in Scheme like this:

        (define (create-queue)
          (let ((mpty #t)
                (the-list '()))
            (define (enque value)
              (set! the-list (append the-list (list value)))
              (set! mpty #f)
              the-list)
            (define (deque)
              (set! the-list (cdr the-list))
              (if (= (length the-list) 0)
                  (set! mpty #t))
              the-list)
            (define (isEmpty) mpty)
            (define (ptl) the-list)
            (define (dispatch method)
              (cond ((eq? method 'enque) enque)
                    ((eq? method 'deque) deque)
                    ((eq? method 'isEmpty) isEmpty)
                    ((eq? method 'print) ptl)))
            dispatch))

    (Example from css.freetonik.com)

    Can I implement class inheritance in Scheme?
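    A minimal sketch of one common answer, for context (not from the original post): with message-passing closures, "inheritance" can be simulated by delegation. The child object keeps an instance of the parent and forwards any message it does not handle itself; the names below (create-counting-queue, 'size) are illustrative only.

        (define (create-counting-queue)
          (let ((parent (create-queue))  ; instance of the "superclass" above
                (count 0))
            (define (enque value)        ; overridden method
              (set! count (+ count 1))
              ((parent 'enque) value))
            (define (size) count)        ; new method added by the "subclass"
            (define (dispatch method)
              (cond ((eq? method 'enque) enque)
                    ((eq? method 'size) size)
                    (else (parent method))))  ; delegate everything else to the parent
            dispatch))

        ;; (define q (create-counting-queue))
        ;; ((q 'enque) 'a)   ; goes through the overridden enque
        ;; ((q 'isEmpty))    ; => #f, handled by the parent queue
        ;; ((q 'size))       ; => 1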

    Read the article

  • Suggest resources for learning Scheme.

    - by EmFi
    I'll be starting a new job soon where Scheme is heavily used. I currently do not know Scheme, but my employer assures me that is not a problem. Regardless, I'd like to hit the ground running and have a working knowledge of the language before my start date. So I'm looking for good resources from which to learn Scheme. I have had minimal exposure to functional languages (really only a small chunk of a course devoted to Haskell), but I have a strong background in OO and procedural languages. Before it gets requested by a commenter: I am competent with the following languages: C, C++, C#, Java, Perl, Python, and Ruby.

    Read the article

  • XML Pretty Printer Missing 2 Key Edge Cases

    - by viatropos
    Using this XSLT file found on this blog to pretty print XML using Nokogiri, everything almost works, but not to the point where I can use it for HTML. First, if a node is empty, it turns it into a self-closing node, so:

        <textarea></textarea>

    gets converted to

        <textarea/>

    But that messes up the HTML tree when rendered. Second, if the node just has text, the text isn't indented and the closing node isn't indented, so:

        <li>
          <label>some text</label>
        </li>

    becomes:

        <li>
          <label>some text
        </label>
        </li>

    ...but it would ideally be:

        <li>
          <label>
            some text
          </label>
        </li>

    Does anyone who's pro at XSLT know a quick fix for this?

    Read the article

  • Are there people using scheme out there?

    - by Nick
    Hey, I have just started to study computer science at university, where they teach us programming in Scheme. Since I have learned C++ for the last 6 years, Scheme appears a little odd to me. But they tell me you can write any program you can write in C or Java with it. Is anybody really using this language?

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of HIVE in the Big Data story. In this article we will understand what PIG and PIG Latin are in the Big Data story.

    Yahoo started working on Pig for deploying their applications on Hadoop. Yahoo's goal was to manage its unstructured data.

    What is Pig and What is Pig Latin?

    Pig is a high-level platform for creating MapReduce programs used with Hadoop, and the language we use for this platform is called PIG Latin. Pig was designed to make Hadoop more user-friendly and approachable by power users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing of input data with a series of transformations to produce the desired results.

    PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine. 2) Hadoop – in this case all the scripts run on a Hadoop cluster.

    Pig Latin vs SQL

    Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build solutions for Big Data themselves. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, filtering data, as well as various data operations related to strings.

    The major difference between SQL and Pig Latin is that Pig is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science.

    Tomorrow

    In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Zookeeper.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Pretty print code to PDF

    - by Joel
    I'm searching for a tool that will take a source directory and produce a single PDF containing the source code, preferably with syntax highlighting. I would like to read the PDF on my phone, in order to get familiar with a code-base, or just to see what I can learn by reading a lot of code. I will most often be reading Ruby. I would prefer if the tool ran on Linux. I don't mind paying for a tool if it is particularly good. Any suggestions?

    Read the article

  • how to pretty print xml from Java

    - by Steve McLeod
    I have a Java String that contains XML, with no line feeds or indentation. I would like to turn it into a String with nicely formatted XML. How do I do this?

        String unformattedXml = "<tag><nested>hello</nested></tag>";
        String formattedXml = new [UnknownClass]().format(unformattedXml);

    Note: My input is a String. My output is a String.

    Read the article
