Search Results

Search found 27233 results on 1090 pages for 'information quality'.

Page 236/1090 | < Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:

      - More easily absorb shocks due to various forms of financial-economic stress
      - Improve risk management and governance
      - Enhance regulatory reporting and transparency

    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry. Some of the key features of Basel III include:

      - A stronger capital base: more stringent capital standards, higher capital requirements, and the introduction of capital buffers
      - Additional risk coverage: enhanced quantification of counterparty credit risk, credit valuation adjustments, wrong-way risk, and an asset value correlation multiplier for large financial institutions
      - Liquidity management and monitoring, including the introduction of a leverage ratio
      - Even more rigorous data requirements

    To implement these features, banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models, and compliance.

    Data challenges:

      - Data quality: all standard dimensions of data quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome, and automation has become the norm.
      - Data lineage: lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation quickly outdated.
      - Data dictionaries: a strong and clean business glossary is needed, with proper identification of business owners for the data.
      - Data integrity: a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
      - Data relevance/coverage: data must be relevant to all portfolios, and storage must allow for sufficient data retention. Coverage of both on- and off-balance-sheet exposures is critical.

    Model challenges:

      - Model development: requires highly trained resources with both quantitative and subject matter expertise.
      - Model validation: all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
      - Model documentation: all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
      - Risk and finance integration: this integration is necessary for Basel because the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management, and the two need to somehow be equal. This is tricky at best from an implementation perspective. (See the toy illustration at the end of this article.)

    Compliance challenges:

      - Rules interpretation: some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
      - Gap identification and remediation: internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities that arise from regulatory compliance and business-as-usual work.
      - Qualification readiness: providing internal and external auditors with robust evidence of a thorough examination of readiness to proceed to parallel run and Basel qualification.

    In light of new regulations like Basel III, and local variations such as the Dodd-Frank Act (DFA) and the Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:

      - How will Basel III play into their risk appetite?
      - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
      - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?

    After all, new regulations often lead to diminished profitability, as well as an assortment of implementation problems, as discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:

      - Increases consumer confidence, which in turn supports banking activity
      - Ensures that adequate funding is available for individuals and companies
      - Puts regulators at ease, allowing bankers to focus on banking

    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor, and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today: some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace the new rules for the benefits highlighted above. Do share with me how your institution is coping with and embracing these new regulations.

    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas spanning enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analysis.
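
    A footnote on the risk/finance reconciliation point above: at its simplest, Expected Loss is the product of probability of default (PD), loss given default (LGD), and exposure at default (EAD). A toy sketch in Python (all figures invented; real Basel models are far more involved):

        # Toy expected-loss roll-up: EL = PD * LGD * EAD per exposure.
        # Illustrative only; the numbers are invented.
        portfolio = [
            # (PD, LGD, EAD)
            (0.02, 0.45, 1000000),
            (0.01, 0.40, 2500000),
        ]
        expected_loss = sum(pd * lgd * ead for pd, lgd, ead in portfolio)
        print("Portfolio EL: %.0f" % expected_loss)  # the figure Finance's ALLL must reconcile with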

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app, and I've come to a fork in the road with respect to database structure. I have a database with user information that I can structure in one of two ways: the first is to create a schema and a set of tables for each user (duplicating the structure for each user); the second is to create a single set of tables and query information based on user id. Suppose 100,000 users. Here are my questions:

      - Considering security, performance, scalability, and administration, where does each choice lie? Would the answers change for 1,000,000 users, or for 10,000?
      - Is there a set of best practices that leads to one choice or the other?

    It seems to me that multiple schemas are more secure, since it's trivial to restrict user privileges, but what about performance and scalability? Administration seems like a wash, since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.
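
    For reference, a minimal sketch of the single-set-of-tables approach (illustrative Python/psycopg2; the table and column names are invented for the example):

        import psycopg2

        conn = psycopg2.connect("dbname=webapp")
        cur = conn.cursor()

        # One shared table, scoped by user_id; the index keeps per-user
        # queries fast even with 100,000+ users.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS documents (
                id      serial PRIMARY KEY,
                user_id integer NOT NULL,
                body    text
            )""")
        # IF NOT EXISTS on indexes needs PostgreSQL 9.5+.
        cur.execute("CREATE INDEX IF NOT EXISTS documents_user_idx ON documents (user_id)")

        # Every query filters on the authenticated user's id (parameterized).
        cur.execute("SELECT id, body FROM documents WHERE user_id = %s", (42,))
        rows = cur.fetchall()
        conn.commit()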

    Read the article

  • Little Wheel Is An Atmospheric and Engaging Point-and-Click Adventure

    - by Jason Fitzpatrick
    If you’re a fan of the resurgence of highly stylized and atmospheric adventure games, such as Spirit, World of Goo, and the like, you’ll definitely want to check out this well-executed, free, and more than a little bit charming browser-based game. Little Wheel is set in a world of robots where, 10,000 years ago, a terrible accident at the central power plant left all the robots without power. The entire robot world went into a deep sleep and now, thanks to a freak lightning strike, one little robot has woken up. Your job, as that little robot, is to navigate the world of Little Wheel and help bring it back to life. Hit up the link below to play the game for free; the quality of the visual and audio design makes going full screen and turning the speakers on a must. Little Wheel [via Freeware Genius]

    Read the article

  • Blank space after file extension -> weird FileInfo behaviour

    - by Axarydax
    Somehow a file has appeared in one of my directories with a space at the end of its extension: its name is "test.txt ". The weird thing is that Directory.GetFiles() returns the path of this file, but I'm unable to retrieve file information with the FileInfo class. The error manifests here:

        DirectoryInfo di = new DirectoryInfo("c:\\somedir");
        FileInfo fi = di.GetFileSystemInfos("test*")[0] as FileInfo;
        // fi.FullName is correctly "c:\somedir\test.txt "
        // but fi.Exists == false (!)

    Is the FileInfo class broken? Can I somehow retrieve information about this file? I really don't know how that file appeared on my file system, and I am unable to create more of them: all of my attempts to create a new file with this type of extension have failed. But now my program crashes when encountering it. I can easily handle the exception when finding the file, but boy am I curious about this!
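
    A plausible explanation (an assumption, but consistent with the symptoms): the Win32 path normalizer strips trailing spaces and dots, so Exists ends up checking for "test.txt" rather than "test.txt ", while the directory enumeration reports the real on-disk name. Paths prefixed with \\?\ bypass that normalization; a quick probe from Python illustrates the idea:

        import os

        # The \\?\ prefix tells Windows to skip path normalization,
        # so the trailing space in the name is preserved.
        print(os.stat("\\\\?\\c:\\somedir\\test.txt "))   # reaches the real file
        print(os.path.exists("c:\\somedir\\test.txt "))   # False: the space is stripped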

    Read the article

  • Do search engines directly penalize bad grammar?

    - by Nicolas Raoul
    Let's say I have a web page with user-contributed content, which is good content but with bad grammar, slang terms, and an inappropriate tone. I know that bad grammar is also a problem because it drives away visitors and scares people off from linking to the page, but let's put that aside. Let's also put aside the fact that incorrectly spelt terms might be ignored by a crawler, potentially leading to fewer text-comparison hits. QUESTION: Do search engines like Google directly recognize and penalize bad grammar, for instance because they might consider bad grammar a sign of low-quality content?

    Read the article

  • Java: JAX-WS passing authentication info to a call to webservice

    - by agnieszka
    I am using JAX-WS to connect to a .NET web service that requires authentication. I first call Authentication.asmx so that I can be authenticated. The call returns a LoginResult that contains a cookie name. Then I call another web service, and I need to somehow pass this cookie or cookie name along, but I don't know how. Here is the code:

        // The first service, which returns login information:
        Authentication auth = new Authentication(new URL("the_url"),
                new QName("http://schemas.microsoft.com/sharepoint/soap/", "Authentication"));
        LoginResult result = auth.getAuthenticationSoap().login(HTTPuserName, HTTPpassword);

        // I need to pass the cookie, cookie name, or other login information
        // into the call to this second service:
        Copy copyService = new Copy(new URL("service_url"),
                new QName("http://schemas.microsoft.com/sharepoint/soap/", "Copy"));
        BindingProvider p = (BindingProvider) copyService.getCopySoap();
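
    A minimal sketch of one common approach, continuing from the snippet above: set the Cookie header on the second port's request context. This assumes the login flow supplies a cookie value (getCookieName() and the cookieValue variable are placeholders for whatever the generated stubs and the login response actually provide):

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.xml.ws.handler.MessageContext;

        // Placeholder: in practice the value is read from the login
        // call's Set-Cookie response header.
        String cookieValue = "...";
        Map<String, List<String>> headers = new HashMap<String, List<String>>();
        headers.put("Cookie",
                Collections.singletonList(result.getCookieName() + "=" + cookieValue));
        p.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);
        // Calls made through copyService.getCopySoap() now carry the cookie.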

    Read the article

  • Zend Framework - no public folder

    - by poru
    Hello, I'm going to host an app on a shared host, where I can't create virtual hosts or change anything in Apache. Apps built on ZF often look like this:

        root
          public
            index.php
            .htaccess
          application
          library

    I have something like this instead:

        root
          application
          index.php
          .htaccess

    All my code is in the application folder, but there are also some .ini and .xml files with sensitive information, e.g. login names and passwords. If I add a .htaccess with "deny from all" to the application folder, is the information inside the folder secure?
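
    For reference, a minimal application/.htaccess for an Apache 2.2-style shared host (a sketch; it assumes the host allows .htaccess overrides for access control, and on Apache 2.4 the equivalent is "Require all denied"):

        # application/.htaccess
        Order allow,deny
        Deny from all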

    Read the article

  • Printing pdf without opening

    - by Marius
    Hello there, I want to open a PDF print dialog (not the regular browser dialog) without visually opening the document, so that I can print a PDF without the header and footer information of a regular webpage print. I know this is possible, because I have seen it in the past on a postal service website for printing postage labels, but I cannot remember where. As far as I know, printing from an iframe doesn't work, as it only opens the regular browser print dialog and gives me the ugly page and URL information:

        frames['name'].focus();
        frames['name'].print();

    Thank you for your time. Kind regards, Marius

    Read the article

  • Upload documents using email and mapping properties of documents

    - by stranger001
    Hi, we have a requirement to upload documents to a specific SharePoint document library when a document is sent via email to a specific email address. I think SharePoint has this feature available. What I would like to know, more importantly, is about mapping information in the body of the email to the custom properties of the document. For example, say a custom field (property) called "Year" is added to the document in SharePoint. If I provide the value of this property within the body of the email and attach the document to the email, is it possible for the provided value to be applied to that property when the document is uploaded to SharePoint through email? We are using SharePoint 2007. I appreciate any information on this. Thanks

    Read the article

  • No bass from the speakers

    - by Bhavesh Jogadia
    In Ubuntu 11.10 I get no bass at all when playing MP3s or other 2-channel audio. I have 5.1 (6-channel) speakers. When I test the speakers from Sound Preferences they work perfectly fine, but when I play any MP3 there is no bass; only the satellite speakers work. When I play 5.1 movies, the bass sounds fine. I also tried making changes to daemon.conf as instructed elsewhere, but no luck. When I switch my speakers to stereo-only mode the bass plays, but the sound quality is not as good as normal playback. I have a Creative 5.1 VX (CA0160) sound card. In Windows I had the same problem unless I enabled bass redirection with a crossover frequency. Is there any kind of software package, or any change I can make to a system file, so that my speaker bass works, or anything that would let me set the bass redirection crossover frequency?
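
    One commonly suggested tweak, as a sketch (this assumes PulseAudio, which Ubuntu 11.10 uses; the file is /etc/pulse/daemon.conf, and PulseAudio must be restarted afterwards, e.g. with pulseaudio -k), is to enable LFE remixing so stereo sources get a subwoofer channel:

        ; /etc/pulse/daemon.conf
        enable-lfe-remixing = yes
        default-sample-channels = 6
        default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe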

    Read the article

  • Does Ubuntu Touch consume less than Android?

    - by Eduard Florinescu
    One of the problems with new OSs is power consumption. That is because power and performance require a lot of tweaks and experience with the kernel, drivers, and OS code base on one hand, and a lot of extensive long-term testing and quality assurance on the other. Android is a rather old and established OS, and I have seen that its power consumption is pretty good. Phoronix does this kind of comparison, but I was not able to find much about Ubuntu Touch. Does Ubuntu Touch consume less power than Android in general? Is there data comparing the two on any platform?

    Read the article

  • Oracle Database 11g Release 2 Now Available on 32-bit and x64 Windows

    - by jenny.gelhausen
    Oracle Database 11g Release 2 provides the foundation for IT to successfully deliver more information with higher quality of service, reduce the risk of change within IT, and make more efficient use of their IT budgets. By deploying Oracle Database 11g Release 2 as their data management foundation, organizations can utilize the full power of the world's leading database. Now Oracle Database 11g Release 2 is available for organizations using 32-bit and x64 Windows. Download either on OTN.

    Read the article

  • Download a file from one ASP.NET web application to other (given the credentials)

    - by Tom S.
    Hi everybody! I'm working on an ASP.NET 3.5 web application (C#) that holds a file with information that is updated frequently, and only a few accounts can access it (the application uses the ASP.NET authentication system, stored in a SQL database). My task is to parse that file, so I made a small parser (another web app) to show the information in a friendlier way. However, every time I want to parse it, I need to log into the application with one of those accounts, download the file, and put it in the parser's folder. Given the username and password, is there any way to download the file directly from the parser application and use that copy? Thanks in advance
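
    A sketch of the HTTP flow involved (shown in Python for brevity; the same idea maps to HttpWebRequest with a CookieContainer in .NET; the URLs and form field names below are invented):

        import requests

        session = requests.Session()  # keeps the forms-auth cookie between requests

        login_url = "https://example.com/app/Login.aspx"   # hypothetical
        file_url = "https://example.com/app/data/feed.txt" # hypothetical

        # NOTE: a real ASP.NET login form usually also requires the hidden
        # __VIEWSTATE/__EVENTVALIDATION fields scraped from a GET of the page.
        session.post(login_url, data={"UserName": "parser", "Password": "secret"})

        response = session.get(file_url)
        response.raise_for_status()
        open("feed.txt", "wb").write(response.content)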

    Read the article

  • Ideas for reducing storage needs and/or costs (lots of images)

    - by James P.
    Hi, I'm the webmaster for a small social network and have noticed that images uploaded by users take up a big portion of the available capacity. These are mostly JPEGs. What solutions could I apply to reduce storage needs?

      - Is there a way to reduce the size of the images without affecting quality too much?
      - Is there a service out there that could store static files more cheaply (< 1GB/0.04 eurocents)?

    Edit: Updated the question.
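
    On the first question, a minimal recompression sketch using Pillow (quality=85 is a common starting point, but test the size/quality trade-off on your own images):

        from PIL import Image
        import os

        def recompress_jpeg(path, quality=85):
            """Re-save a JPEG with stronger compression, keeping the
            original only if recompression does not actually save space."""
            tmp = path + ".tmp"
            Image.open(path).save(tmp, "JPEG", quality=quality, optimize=True)
            if os.path.getsize(tmp) < os.path.getsize(path):
                os.replace(tmp, path)
            else:
                os.remove(tmp)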

    Read the article

  • How to Access Data in ZODB

    - by Eric
    I have a Plone site that has a lot of data in it, and I would like to query the database for usage statistics: e.g. how many calendars have more than one entry, how many blogs per group have entries after a given date, etc. I want to run the script from the command line, something like so:

        bin/instance [script name]

    I've been googling for a while now but can't find out how to do this. Also, can anybody provide some help on how to get user-specific information, such as last login time and items created? Thanks! Eric
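
    A minimal sketch of that command-line pattern, assuming a buildout-based install whose Plone site id is "Plone" (bin/instance run binds the Zope application root to the name app; Plone of this era runs Python 2, hence the print statements):

        # stats.py: run with "bin/instance run stats.py"
        from DateTime import DateTime

        portal = app["Plone"]            # 'app' is injected by bin/instance run
        catalog = portal.portal_catalog

        # Example: count documents created after a given date.
        cutoff = DateTime("2010/01/01")
        recent = catalog(portal_type="Document",
                         created={"query": cutoff, "range": "min"})
        print "Documents created since %s: %d" % (cutoff, len(recent))

        # Example: each member's last login time, via the membership tool.
        membership = portal.portal_membership
        for member_id in membership.listMemberIds():
            member = membership.getMemberById(member_id)
            print member_id, member.getProperty("login_time")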

    Read the article

  • How can you print a text file via gedit from the command line?

    - by dan
    I'd like to use gedit or some similar program just as a page formatter, and pipe some text through it and onto the printer. Plain | lpr just doesn't cut it in the presentation department: the printed output is subpar, even if I tinker with the margin and font size options. But I like the way text looks when printed from gedit. Is there a way to have the best of both worlds and use a command-line pipeline to print a text file with gedit-quality formatting?

    Read the article

  • Appending to a Core Data dataset when updating an app through the iTunes Store

    - by Yoon Lee
    So I have completed my code work, and this is my first time releasing an app through the iTunes Store. The app reads a Core Data (.sqlite) file that ships pre-populated with information (like Apple's 'Recipes' sample program). Assume I have successfully released through the App Store and then decide to push an update to existing users, where the update's sqlite file has the same structure as before but contains a bit more data. Question 1: When an update reaches an existing user, does it remove the previous data and replace it with the new application's? Question 2: If it does not, HOW can I append the new values to the existing SQL data?

    Read the article

  • Drawing the UIPicker values from multiple components?

    - by Rob
    I have a UIPicker set up with multiple components and a button below it. What the user has chosen with the UIPicker determines which new view will be loaded, but I am having trouble working out how to extract that information from the picker itself. Right now I have this method being called when the button is pressed:

        - (IBAction) buttonPressed {
            if (component:1 == 1 && component:2 == 1) {
                // Load view number 1.
            } else if (component:1 == 2 && component:2 == 1) {
                // Load view number 2.
            } else {
                // Load view number 3.
            }
        }

    I obviously know that my code is wrong, but I hope it gets the point across: I have multiple components, and I need to figure out how to use what the user has scrolled to on the picker to determine which view to move to. (I know how to load the views; I just left those as comments in the code to highlight the problem areas better.)
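
    A minimal sketch of the standard approach, assuming a UIPickerView outlet named picker (rows and components are zero-indexed, so the particular conditions below are illustrative):

        - (IBAction)buttonPressed {
            // Read the currently selected row of each component straight
            // from the picker; there is no need to track scrolling yourself.
            NSInteger first = [self.picker selectedRowInComponent:0];
            NSInteger second = [self.picker selectedRowInComponent:1];

            if (first == 0 && second == 0) {
                // Load view number 1.
            } else if (first == 1 && second == 0) {
                // Load view number 2.
            } else {
                // Load view number 3.
            }
        }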

    Read the article

  • Is paying programmers to "test" for bugs normal? [on hold]

    - by user106277
    I recently hired a programming team to port my iPad app to the iPhone and Android platforms. I also wanted them to implement a set of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours at $35/hr for "testing all of the tips," telling me that normally it would take them more than 25 hours but that they will "bear the difference." I have never heard of this, but maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period. Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • C++ Win32 Unhandled Exception Handler

    - by uray
    Currently I use SetUnhandledExceptionFilter() to register a callback that provides information when an unhandled exception occurs. The callback receives an EXCEPTION_RECORD, which provides ExceptionAddress.

    [1] What exactly is ExceptionAddress: the address of the function/code that raised the exception, or the memory address that some function tried to access?
    [2] Is there any better mechanism that could give me more information when an unhandled exception occurs? (I can't use a debug build or add any code that affects runtime performance, since crashes are rare and happen only in release builds, where the code runs as fast as possible.)
    [3] Is there any way for me to get several call-stack addresses when an unhandled exception occurs?
    [4] Suppose ExceptionAddress is address A, and I have DLL X loaded and executing at base address A-x and some other DLL Y at A+y. Is it reasonable to assume that the crash was PROBABLY caused by code in DLL X?
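
    On [1] and [3], a sketch of the usual answers (stated as my understanding; worth verifying against the SDK docs): ExceptionAddress is the address of the faulting instruction, which is what makes the module-mapping logic in [4] reasonable, and for access violations the address the code tried to touch is in ExceptionInformation[1]. CaptureStackBackTrace can grab raw frame addresses cheaply inside the filter; for [2], writing a minidump via MiniDumpWriteDump from dbghelp is often the more robust option:

        #include <windows.h>
        #include <cstdio>

        static LONG WINAPI CrashFilter(EXCEPTION_POINTERS* info) {
            EXCEPTION_RECORD* rec = info->ExceptionRecord;
            // ExceptionAddress: where the faulting instruction lives (maps to a module).
            std::printf("code=0x%08lx at %p\n", rec->ExceptionCode, rec->ExceptionAddress);
            if (rec->ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
                // For access violations, [0] is read(0)/write(1) and [1] is the address touched.
                std::printf("accessed address: %p\n", (void*)rec->ExceptionInformation[1]);
            }
            // Raw backtrace; symbolize offline against the module list and PDBs.
            void* frames[62];
            USHORT count = CaptureStackBackTrace(0, 62, frames, NULL);
            for (USHORT i = 0; i < count; ++i)
                std::printf("frame %u: %p\n", (unsigned)i, frames[i]);
            return EXCEPTION_EXECUTE_HANDLER;
        }

        int main() {
            SetUnhandledExceptionFilter(CrashFilter);
            volatile int* p = NULL;
            *p = 1;  // deliberately crash to exercise the filter
        }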

    Read the article

  • Which development language is best suited to network inventory?

    - by dastardlyandmuttley
    Dear Stack Overflow, I hope this is the correct type of question for Stack Overflow to consider. I would like to develop a "hard core" application that performs network inventory. The high-level requirements are: it works on Windows and UNIX networks; it is extremely performant; it is 100% accurate; it is (massively) scalable; and it is fun to write. The sort of details I am after are the manufacturer and versions of all major workstation hardware components, such as motherboard, network card, sound card, hard drives, optical drives, memory, BIOS details, operating system information, etc. I don't want to have to distribute a client on each workstation to collect the information, although I will require automatic workstation discovery. I would value your thoughts on the best development language to employ. I know there are products such as NEWT and tools like nmap, but I would like to do this type of technical programming myself, "from scratch". Warm regards, DD
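
    For the agentless-on-Windows requirement, a small illustration of the kind of query involved (a sketch in Python using the third-party wmi package; the host name and credentials are placeholders, and whichever language you pick will need comparable WMI/SSH plumbing):

        import wmi  # pip install wmi (Windows only; wraps pywin32's COM bindings)

        # Agentless hardware inventory of a remote Windows box over WMI.
        conn = wmi.WMI("workstation01", user="DOMAIN\\inventory", password="secret")

        for board in conn.Win32_BaseBoard():
            print(board.Manufacturer, board.Product)
        for disk in conn.Win32_DiskDrive():
            print(disk.Model, disk.Size)
        for bios in conn.Win32_BIOS():
            print(bios.SMBIOSBIOSVersion)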

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first meeting of its kind, a HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa, was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, and EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. To be known as the Arab HEUG going forward, I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, and one on Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements.

    In a few weeks' time we will convene the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future (including cloud approaches to ERP: HCM, Finance, and Student Information Systems) and case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion.

    I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open- or community-source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves: the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written, without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community-source system. In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it is bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, with little to show for the effort and expended resources.

    The second article appeared recently in the Chronicle of Higher Education and was entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah... right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce many more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

    Read the article

  • Using a SMTP Service for email

    - by Josh S.
    This may be a horribly obvious question, but I'm learning and just need someone to confirm it for me. I'm putting together a private social network that needs to email its members (through the social network software, Elgg) regularly. I'm hosting it on a shared HostGator plan (because it won't receive much traffic), and it will send 10-1000 emails a few times a week. HostGator restricts you to 500 per hour. I'm also worried about deliverability. I've been searching up and down for how to throttle the emails so they all send reliably, but then I came across the idea of an outside SMTP relay service. Would using an SMTP service resolve this issue? If so, any opinions on quality SMTP services?
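
    For the throttling half of the question, the logic is simple enough to sketch (illustrative Python; the host, sender, and the 500/hour cap are placeholders or taken from the question; Elgg itself would need equivalent rate limiting in PHP):

        import smtplib
        import time
        from email.mime.text import MIMEText

        PER_HOUR_LIMIT = 500  # HostGator's stated cap

        def send_throttled(messages, host="localhost", sender="noreply@example.org"):
            """Send (recipient, subject, body) tuples, pacing sends so the
            hourly cap is never exceeded."""
            delay = 3600.0 / PER_HOUR_LIMIT  # about 7.2 seconds between messages
            server = smtplib.SMTP(host)
            for recipient, subject, body in messages:
                msg = MIMEText(body)
                msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
                server.sendmail(sender, [recipient], msg.as_string())
                time.sleep(delay)
            server.quit()

    A relay service sidesteps both the cap and, usually, the deliverability worry, since reputable relays manage IP reputation for you.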

    Read the article

  • Is php|architect any good?

    - by Andrew Heath
    Kind of a hard topic to search for, as "architect" turns up a lot about software architects instead. After 8 months of PHP self-study, I finally stumbled across the php|architect site. The length of time it took me to find it makes me suspicious of its quality. Three related questions:

      - Do professional PHP coders read/care about php|architect?
      - Is it a good source for PHP beginners?
      - Assuming yes to either of the above, how far back in the archives do articles remain relevant? (For example, does stuff written about PHP4 still matter?)

    Read the article

< Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >