Search Results

Search found 6981 results on 280 pages for 'force flow'.


  • Oracle presents its results for the year

    - by pfolgado
    Oracle has just announced its results for the fourth quarter and for fiscal year FY11. The most significant results are:
    - Sales revenues grew 33%, reaching a total of 35.6 billion dollars
    - New license sales grew 23%
    - Hardware revenues of 4.4 billion dollars
    - Operating income grew 39%
    - Earnings per share grew 38%, to 1.67 dollars
    “In Q4, we achieved a 19% new software license growth rate with almost no help from acquisitions,” said Oracle President and CFO, Safra Catz. “This strong organic growth combined with continuously improving operational efficiencies enabled us to deliver a 48% operating margin in the quarter. As our results reflect, we clearly exceeded even our own high expectations for Sun’s business.”
    “In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4,” said Oracle President, Mark Hurd. “Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12.”
    “In FY11 Oracle’s database business experienced its fastest growth in a decade,” said Oracle CEO, Larry Ellison. “Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we’ve been focused on the big business opportunity presented by Big Data.”
    Oracle Reports Q4 GAAP EPS Up 34% To 62 Cents; Q4 Non-GAAP EPS Up 25% To 75 Cents; Q4 Software New License Sales Up 19%; Q4 Total Revenue Up 13%
    Oracle today announced fiscal 2011 Q4 GAAP total revenues were up 13% to $10.8 billion, while non-GAAP total revenues were up 12% to $10.8 billion. Both GAAP and non-GAAP new software license revenues were up 19% to $3.7 billion. Both GAAP and non-GAAP software license updates and product support revenues were up 15% to $4.0 billion. Both GAAP and non-GAAP hardware systems products revenues were down 6% to $1.2 billion. GAAP operating income was up 32% to $4.4 billion, and GAAP operating margin was 40%. Non-GAAP operating income was up 19% to $5.2 billion, and non-GAAP operating margin was 48%. GAAP net income was up 36% to $3.2 billion, while non-GAAP net income was up 27% to $3.9 billion. GAAP earnings per share were $0.62, up 34% compared to last year, while non-GAAP earnings per share were up 25% to $0.75. GAAP operating cash flow on a trailing twelve-month basis was $11.2 billion.
    For fiscal year 2011, GAAP total revenues were up 33% to $35.6 billion, while non-GAAP total revenues were up 33% to $35.9 billion. Both GAAP and non-GAAP new software license revenues were up 23% to $9.2 billion. GAAP software license updates and product support revenues were up 13% to $14.8 billion, while non-GAAP software license updates and product support revenues were up 13% to $14.9 billion. Both GAAP and non-GAAP hardware systems products revenues were $4.4 billion. GAAP operating income was up 33% to $12.0 billion, and GAAP operating margin was 34%. Non-GAAP operating income was up 27% to $15.9 billion, and non-GAAP operating margin was 44%. GAAP net income was up 39% to $8.5 billion, while non-GAAP net income was up 34% to $11.4 billion. GAAP earnings per share were $1.67, up 38% compared to last year, while non-GAAP earnings per share were up 33% to $2.22.
    In addition, Oracle announced that its Board of Directors declared a quarterly cash dividend of $0.06 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 13, 2011, with a payment date of August 3, 2011.

    Read the article

  • UEFI Dual-Boot - Ubuntu 12.04.3 + Windows 8.1 (One GPT HDD)

    - by swafbrother
    Hello, I'm having trouble setting up a dual boot (Ubuntu 12.04 LTS and Windows 8.1) on my ASUS K55VM laptop's hard disk drive (500 GB). I was mostly following tutorials, but at some point something went wrong. Up to now, I have followed these steps:
    1. I formatted my HDD to GPT.
    2. I clean-installed Windows 8.1. I didn't prevent Windows from choosing the partitions to use, and it created these partitions: a Recovery partition (sda1), an EFI System Partition (sda2), a Microsoft Reserved Partition (sda3), and a Windows data partition or C drive (sda4).
    3. I reduced the Windows data partition via Windows' Disk Management.
    4. I made a bootable USB stick with Ubuntu 12.04 LTS from the ISO, using Universal USB Installer.
    5. I created these partitions for Ubuntu: a boot partition, mounted at /boot (sda5); a root partition, mounted at / (sda6); and a swap partition (sda7).
    6. For "Device for boot loader installation" I chose /dev/sda.
    7. When I rebooted, it went straight into Ubuntu. So I installed Boot-Repair and clicked Recommended Repair, which automatically did its job without asking for anything.
    8. I rebooted and Grub showed up, with a lot of options. At this point I had a decent dual-boot setup; Ubuntu and both Windows entries worked fine: Ubuntu; Windows Boot UEFI Loader; Windows UEFI bkpbootmgfw.efi.
    9. I executed this command: sudo grub-install --force /dev/sda5.
    10. Then I tried to make Windows 8.1's Boot Manager the main boot manager, so that I could choose which OS to boot into from a menu. I downloaded EasyBCD on Windows. It showed 2 Ubuntu entries and 1 Windows entry.
    11. I went into the BCD Deployment tab and clicked Write MBR.
    12. I went into the BIOS and made Windows Boot Manager the first boot option.
    13. When I rebooted, I got a black screen with the message "efidisk read error", and then (I guess) it switched to the next boot option, which is Ubuntu, resulting in Grub showing up.
    14. From Grub, the Ubuntu entry is working and so are both Windows entries. If I choose Ubuntu, it boots into Ubuntu normally. But if I choose Windows, it goes into Windows' boot manager, where a menu shows up: Ubuntu; Ubuntu; Windows 8.1. If I choose Windows there, it boots into Windows without any problem. If I choose Ubuntu, it boots into Grub (back to step 14).
    Here's my BootInfo summary: http://paste.ubuntu.com/6698171/ Windows Boot Manager is clearly not working as expected; I can't boot directly into it, and I can't boot into it from the BIOS either (efidisk read error again). If I want to boot into Windows I need to boot into Grub first, which is the opposite of what I wanted. I need help at this point. What is the best thing I can do? Is there a more reliable and/or simpler way of accomplishing a satisfying dual boot in this situation? Can someone provide a way to go back to step 8, where I had a more efficient dual-boot setup? If only I could undo what I did with EasyBCD and skip Windows' boot menu... Can someone provide a way to fix this mess? Thanks in advance, and sorry for the length of this; I wanted to be exhaustive.

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think that, just like records, these are broken way more often. I find this the funniest thing, as we all take resolutions every year, but not every year can we manage to keep them. Well, does it mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy until the end of January. When February begins, I begin to lose focus on my goal, and as March starts, the "as usual" eating habits begin. Looking at the positive side, think what would happen if I never ate healthy in January; that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What is my resolution for year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here.
    To understand SQL Server Performance Tuning at a deeper level
    I think I am already halfway through. In any given month I am very busy doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word I have used is "good," and not "best." There are often cases when I am stumped, and I have no clue what to do next. Then I usually go for my "trial and error" method: whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever have to go for the trial and error method again to reach the same solution; I should know the solution right away when I see the problem. I do understand that performance tuning can be a strange animal at times, and one cannot guess the right step every time. However, aiming at a high goal never hurts, and I am going to learn more and more in this focused area.
    Going further from a basic BI understanding
    I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things like MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in user group sessions and other places on this subject. When I have to prepare a new subject for presentations, I think I force myself to learn more. I am committed to learning a bit more in this direction.
    Learning the new features of SQL Server 2011 (Denali)
    This is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year, and I am planning to learn it well. If I follow my above two goals, I think this goal will be automatically covered. I am eager and excited about this new offering from Microsoft.
    I guess these are my resolutions; maybe next year around the same time, I should revisit this post and see how successful I was in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (courtesy: Calvin and Hobbes). I think when we cannot keep our resolutions, we tend to act like Calvin.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround)

    - by Laurent Bugnion
    When you work with the ApplicationBar in Windows Phone 7, you notice very fast that it is not quite a component like the others. For example, the ApplicationBarIconButton element is not a dependency object, which causes issues because it is not possible to add attached properties to it. Here are two other issues I stumbled upon, and the workarounds I used to make it work anyway.
    Finding a button by name returns null
    Since the ApplicationBar is not in the tree of the Silverlight page, finding an element by name fails. For example, consider the following code:
    <phoneNavigation:PhoneApplicationPage.ApplicationBar>
      <shell:ApplicationBar>
        <shell:ApplicationBar.Buttons>
          <shell:ApplicationBarIconButton IconUri="/Resources/edit.png"
                                          Click="EditButtonClick"
                                          x:Name="EditButton"/>
          <shell:ApplicationBarIconButton IconUri="/Resources/cancel.png"
                                          Click="CancelButtonClick"
                                          x:Name="CancelButton"/>
        </shell:ApplicationBar.Buttons>
      </shell:ApplicationBar>
    </phoneNavigation:PhoneApplicationPage.ApplicationBar>
    with
    private void EditButtonClick(object sender, EventArgs e)
    {
        CancelButton.IsEnabled = false; // Fails, CancelButton is always null
    }
    The CancelButton, even though it is named through an x:Name attribute, and even though it appears in IntelliSense in the code-behind, is null when it is needed. To solve the issue, I use the following code:
    public enum IconButton
    {
        Edit = 0,
        Cancel = 1
    }

    public ApplicationBarIconButton GetButton(IconButton which)
    {
        return ApplicationBar.Buttons[(int)which] as ApplicationBarIconButton;
    }

    private void EditButtonClick(object sender, EventArgs e)
    {
        GetButton(IconButton.Cancel).IsEnabled = false;
    }
    Updating a Binding when the icon button is clicked
    In Silverlight, a Binding on a TextBox's Text property can only be updated in two circumstances: when the TextBox loses the focus, or explicitly by placing a call in code. In WPF, there is a third option: updating the Binding every time that the Text property changes (i.e. every time that the user types a character). Unfortunately this option is not available in Silverlight. To select option 1, 2 (and in WPF, 3), you use the UpdateSourceTrigger property of the Binding class. The issue here is that pressing a button on the ApplicationBar does not remove the focus from the TextBox where the user is currently typing. If the button is a Save button, this is super annoying: the Binding does not get updated on the data object, the object is saved anyway with the old state, and no one understands what just happened. In order to solve this, you can make sure that the Binding is updated explicitly when the button is pressed, with the following code:
    private void SaveButtonClick(object sender, EventArgs e)
    {
        // Force update of the binding first
        var binding = MessageTextBox.GetBindingExpression(TextBox.TextProperty);
        binding.UpdateSource();

        // Property was updated for sure, now we can save
        var vm = DataContext as MainViewModel;
        vm.Save();
    }
    Obviously this is less maintainable than the usual way to do things in Silverlight. So be careful when using the ApplicationBar, and remember that it is not a Silverlight element like the others!! Happy coding! Laurent
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Delving into design patterns, and what that means for the Oracle user experience

    - by Kathy.Miedema
    By Kathy Miedema, Oracle Applications User Experience
    The Oracle Applications User Experience team has some exciting things happening around Fusion Applications design patterns. Because we’re hoping to have some new offerings soon (stay tuned with VoX to see what’s in the pipeline around Fusion Applications design patterns), now is a good time to talk more about what design patterns can do for the individual user as well as the entire company. George Hackman, Senior Director of Operations User Experience, says the first thing to note is that user experience is not just about the user interface. It’s about understanding how people do things, observing them, and then finding the patterns that emerge. The Applications UX team develops those patterns and then builds them into Oracle applications. What emerges, Hackman says, is a consistent, efficient user experience that promotes a productive workplace.
    Creating design patterns
    What is a design pattern in the context of enterprise software? “Every day, people use technology to get things done,” Hackman says. “They navigate a virtual world that reaches from enterprise to consumer apps, and from desktop to mobile. This virtual world is constantly under construction. New areas are being developed and old areas are being redone. As this world is being built and remodeled, efficient pathways and practices emerge.
    “Oracle's user experience team watches users navigate this world. We measure their productivity and ask them about their satisfaction. We take the most efficient, most productive pathways from the enterprise and consumer world and turn them into Oracle's user experience patterns.”
    Hackman describes the process as combining all of the best practices from every part of a user’s world. Members of the user experience team observe, analyze, design, prototype, and measure each work task to find the best possible pattern for a particular work flow. As the team builds the patterns, “we make sure they are fully buildable using Oracle technology,” Hackman said. “So customers know they can use these patterns. There’s no need to make something up from scratch, not knowing whether you can even build it.”
    Hackman says that creating something on a computer is a good example of a user experience pattern. “People are creating things all the time,” he says. “On the consumer side, they are creating documents. On the enterprise side, they are creating expense reports. On a mobile phone, they are creating contacts. They are using different apps like iPhone or Facebook or Gmail or Oracle software, all doing this creation process.” The Applications UX team starts its process by observing how people might create something. “We observe people creating things. We see the patterns, we analyze and document, then we apply them to our products. It might be different from phone to web browser, but we have these design patterns that create a consistent experience across platforms, and across products, too.”
    The result for customers
    Oracle constantly improves its part of the virtual world, Hackman said. New products are created and existing products are upgraded. Because Oracle builds user experience design patterns, Oracle's virtual world becomes both more powerful and more familiar at the same time. Because of design patterns, users can navigate with ease as they embrace the latest technology, because it behaves the way they expect it to. This means less training and faster adoption for individual users, and more productivity for the business as a whole. Hackman said Oracle gives customers and partners access to design patterns so that they can build in the virtual world using the same best practices. Customers and partners can extend applications with a user experience that is comfortable and familiar to their users.
    For businesses that are integrating different Oracle applications, design patterns are key. The user experience created in E-Business Suite should be similar to the user experience in Fusion Applications, Hackman said. If a user is transitioning from one application to the other, it shouldn’t be difficult for them to do their work. With design patterns, it isn’t. “Oracle user experience patterns are the building blocks for the virtual world that ensure productivity, consistency and user satisfaction,” Hackman said. “They are built for the enterprise, but incorporate the best practices from across the virtual world. They empower productivity and facilitate social interaction. When you build with patterns, you get all the end-user benefits of less training / retraining from the finished product. You also get faster / cheaper development.”
    What’s coming?
    You can already access design patterns to help you build Dashboards with OBIEE here. And we promised you at the beginning that we had something in the pipeline on Fusion Applications design patterns. Look for the announcement about when they are available here on VoX.

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.
    Note to Others
    About 10 years ago I stumbled across this bit of information just when I needed it, and it saved my project. Then a few years later, when it would have been nice but not critical, for some reason I could not find it again anywhere. Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.
    Here’s the story… When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), then you can get funny results where the query reports back that a cell value is null even when you know it contains data.
    What happens is that Excel doesn’t really have data types. While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.), that is not really defining the cell as being of a certain type like we think of when working with databases. But, presumably to make things more convenient for the user (programmer), when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner. This is all well and good IF your data is consistent in every row and matches what the processor guessed. And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting).
    Now here’s the problem. Suppose that your spreadsheet contains information about clothing, and one of the columns is Size. Suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL. What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank. Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE! WHY doesn’t the query see it?!?!” And the hair-pulling begins.
    So, what’s a developer to do? One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero). Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains. And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! Your query now returns all of the values that you know are in there.
    Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit. You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike while later rows are different, and that you might get blanks when there really is data there. That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
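    If you want to see the effect from code, here is a minimal C# sketch of the consuming side; the file path, sheet name, and Size column are hypothetical placeholders, and the registry value it depends on is the TypeGuessRows setting noted above.
    using System;
    using System.Data;
    using System.Data.OleDb;

    class ExcelTypeGuessDemo
    {
        static void Main()
        {
            // Hypothetical workbook; Jet 4.0 reads .xls files.
            // With TypeGuessRows = 0 in the registry, Jet scans every row
            // before guessing each column's type, so a Size column holding
            // 32, 34, ... S, M, L comes back as text instead of nulls.
            var connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                          @"Data Source=C:\data\clothing.xls;" +
                          @"Extended Properties=""Excel 8.0;HDR=Yes""";

            var table = new DataTable();
            using (var conn = new OleDbConnection(connStr))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
            {
                adapter.Fill(table);
            }

            foreach (DataRow row in table.Rows)
            {
                Console.WriteLine(row["Size"]);
            }
        }
    }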

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is THE word in the database world. In one of my conversations, I asked my friend Jasjeet Singh this very question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses, etc.
    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:
    - Scientific data related to weather and atmosphere, genetics, etc.
    - Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    - Data related to the Global Positioning System
    - Pictures and videos
    - Radio frequency data
    - Data that may vary very rapidly, like stock exchange information
    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:
    Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, it also varies from organization to organization what volume of data they consider to be Big Data.
    Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.
    Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.
    Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth from an enormous amount of data that also has high velocity. There are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.
    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments. These include retail, manufacturing, service, finance, healthcare, etc. This will help them to automate business decisions, increase productivity, and innovate and develop new products.
    Thanks, Jasjeet Singh, for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
    Hi all, today we would like to inform all of you that you now have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it.
    Hello passionate technical people,
    First of all let me introduce myself: my name is Timur, I’m a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; not being an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was “introduced” to PHP recently by a client of mine who requested that the work be done specifically in PHP.
    Let me tell you a little story about my experience with open source and IDEs: when I was studying at university in 2007, I think, I did a simple little application in PHP and thought, “Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names.” But when I searched on the internet, pretty much everyone was using Vim or Emacs on Linux, and they had no autocomplete anyway, just syntax highlighting. I remember using a tool like Notepad++, I think. Nowadays everything has changed: we have highlighting and autocomplete for just about all standard things in PHP in many IDEs. I use NetBeans for PHP, and I am really happy with the experience I have there with standard PHP code, but for frameworks I still think there is a lot of room for improvement. For example, we have some Symfony 2 and Twig support, but I’d love to see more of that coming. I’m a big fan of file templates, where the main goal is to not waste time writing over and over again something that can be generated, and it counts even more when you don’t have a lot of autocomplete. So I thought, “Hey, I know Java a little, and NetBeans has plugins, so maybe it’s worth trying to do a file templates plugin,” and so I did. You can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn’t hard, and it even was fun!
    Give back to open source
    Now think a little: NetBeans is an open source project and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as a community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don’t we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want, and it is free. Now how far is NetBeans from that point? I guess not so far – you might miss a little niche thing that you use on a daily basis, but then the question appears: why don’t you make it happen on your own?
    NetBeans PHP Community Council
    What I proposed is to create a NetBeans PHP Community Council that will be formed of people willing to change something, willing to create plugins for their own needs and for the needs of the community, to test the plugins created by them, and basically to evolve NetBeans in the direction they want it to reach. I have already talked with the NetBeans PHP team. They are only happy to help this Council, with technical advice, opening up some APIs we might need access to, and other things. One important thing to mention is that this Council is a community project; though we’ll have direct discussions with the NetBeans PHP dev team, NetBeans is not the leading force here, it is the community. You can see more details about the goals and structure I proposed at the NetBeans PHP Community Council wiki page.
    We use this mail list: [email protected] for discussions and topics related to the Council.
    How can I join
    To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to this mail list here: http://netbeans.org/projects/php/lists. In your mail, please indicate your location, age and experience, both in Java and PHP. I need these data to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you’ll make a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • Oracle Tutor: Installing Is Not Implementing, or Why CIOs should care about End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed the Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business which had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for goodwill to be reestablished, and this itself diminished a new product sales push. I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand, and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper application use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now." Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to developing effective end user training material for a smooth implementation.
    Learn More
    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.
    Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even my life overall. Today is Saturday, so I decided to write about SQL Server. Just like any other morning, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me. But today I opened my Twitter account and came across a very interesting quote from a friend: ‘Can I expect you to be different today?’ Well, I think it was a very powerful quote for me to read first thing on a new day. This quote froze me for a while and made me think, “Do I really want to write about an SQL Server tip, or something different?” After a little thinking, I realized that today I would go ahead and write something different. I am going to write about a few of the ideas and thoughts I had yesterday. After writing all these, I realized that if I am thinking so much in a day, and if I write a blog post of my random musings of the week or month, it could be quite long (and boring). Here are some of my random thoughts I’d like to share with you:
    - When the airplane lands, why does everybody get up and try to rush out when their luggage will probably come out 20-30 minutes later?
    - I really do not like this question when it is asked of me: “SQL Server is not using the optimal index which I just created – how can I force it?” I am not going to elaborate on this statement, but you are allowed to in the comments section.
    - Why do some people wish me Good Morning even when they meet me after 4 PM?
    - Can I optimize a query so much that it gives me a result before I execute it?
    - Is it corruption when someone does their personal household work at the office?
    - The lane where I drive is always the slowest lane.
    - Why waste time correcting others when there are a lot of pending improvements for ourselves?
    - If I had to get a tattoo, which SQL Server execution plan symbol should I get?
    - Why do I reach the office so early that the coffee machine is still running its daily cleaning job?
    - Why does every laptop have the ‘Page Up’ key at a different location on the keyboard?
    - While I like color movies, I really appreciate black and white photographs.
    - I do not appreciate statements like, “If I receive your books in PDF, I will spread them to many people to give you much greater exposure. So would you please send them to me ASAP?”
    - Do not ask me, “Why does the database grow back after shrinking it every day?” I suggest you use “Search this blog” for the explanation.
    - Petrol prices are currently at INR 74. I hope the rate remains there.
    Let me ask you the same question which started my day today: “Can I expect you to be different today?”
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Adding Output Caching and Expire Header in IIS7 to improve performance

    - by Renso
    The problem: images and other static files will not be cached unless you tell IIS to cache them. In IIS7 it is remarkably easy to do this. Web pages are becoming increasingly complex, with more scripts, style sheets, images, and Flash on them. A first-time visit to a page may require several HTTP requests to load all the components. By using Expires headers these components become cacheable, which avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often associated with images, but they can and should be used on all page components, including scripts, style sheets, and Flash. Without them, every image and other piece of static content, like JavaScript and CSS files, will be reloaded on every page request. If the content does not change frequently, why not cache it and avoid the network traffic?!
    The solution: in IIS7 there are two ways to cache content: using the web.config file to set caching for all static content, and in IIS7 itself setting caching by file extension, which gives you that extra level of granularity.
    Web.config: in IIS7, Expires headers can be enabled in the system.webServer section of the web.config file:
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
    </staticContent>
    In the above example a cache expiration of 1 day was added. It will be a full day before the content is downloaded from the web server again. To expire the content on a specific date:
    <staticContent>
      <clientCache cacheControlMode="UseExpires" httpExpires="Sun, 31 Dec 2011 23:59:59 UTC" />
    </staticContent>
    This will expire the content on December 31st, 2011, one second before midnight.
    Issues/Challenges: once a file has been set to be cached, it won't be updated in the user's browser until the cache expiration passes. So be careful here with content that may change frequently, such as during development. Typically in development you don't want to cache at all, for testing purposes. You could also suffix files with a timestamp or version number to force a reload into the user's browser cache.
    IIS7 Expire Web Content: open up your web app in IIS. Open up the sub-folders until you find the folder or file you want to add an expiration date to. In IIS6 you used to right-click and select Properties; no such luck in IIS7: double-click HTTP Response Headers. Once the window loads for the HTTP Response Headers, look at the Actions navigation bar on the right and, all the way at the top, select SET COMMON HEADERS. The Enable HTTP keep-alive option will already be pre-selected. Go ahead and add the appropriate expiration header to the file or folder. Note that if you selected a folder, the setting will apply to all images inside that folder and all nested content, even subfolders. So, two approaches, depending on what level of granularity you need.
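    As a footnote to the versioning tip above, here is a minimal ASP.NET helper that suffixes a static file's URL with its last-write timestamp, so a changed file gets a new URL and bypasses the cached copy; the helper name and usage are my own sketch, not an IIS feature.
    using System;
    using System.IO;
    using System.Web;

    public static class StaticUrl
    {
        // Usage from a page or view: StaticUrl.Versioned("~/scripts/site.js")
        public static string Versioned(string virtualPath)
        {
            // Append the file's last-write ticks as a query string, so the
            // URL changes whenever the file changes and the browser re-fetches it.
            string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
            long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
            return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
        }
    }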

    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule. One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in Python/Django and use a version of PEP 0008, with various modifications, e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line-wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc. One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do. Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings. Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline. I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how do I deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)
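    For what it's worth, the mechanical side of a modified PEP 8 standard can at least be encoded so the tools argue instead of the people; a minimal sketch, assuming the pep8 and autopep8 tools mentioned above (the ignore list is purely illustrative, not our team's actual settings):
    # setup.cfg (or tox.ini), read by the pep8 checker
    [pep8]
    max-line-length = 160
    # illustrative example of codifying an agreed deviation:
    ignore = E303
    autopep8 accepts the same limit on the command line (autopep8 --max-line-length 160 --in-place module.py), which keeps the purely mechanical fixes out of code review.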

    Read the article

  • Getting a handle on mobile data

    - by Eric Jensen
    Written by Ashok Joshi
    The proliferation of mobile devices in the corporate world is both a blessing and a challenge. Mobile devices improve productivity and the velocity of business for the end users; on the other hand, IT departments need to manage the corporate data and applications that run on these devices. Oracle Database Mobile Server (DMS for short) provides a simple and effective way to deal with the management challenge. DMS supports data synchronization between a central Oracle database server and data on mobile devices. It also provides authentication, encryption, and application and device management. Finally, DMS is a highly scalable solution that can be used to manage hundreds of thousands of devices. Here's a simplified outline of how such a solution might work. Each device runs local sync and management agents that handle bidirectional data flow with an Oracle enterprise backend, run remote commands, and provide status to the management console. For example, mobile admins could monitor multiple networks of mobile devices, upgrade their software remotely, and even destroy the local database on a compromised device. DMS supports either Oracle Berkeley DB or SQLite for device-local storage, and runs on a wide variety of mobile platforms. The schema for the device-local database is pretty simple – it contains the name of the application that's installed on the device as well as details such as product name, version number, time of last access, etc. Each mobile user has an account on the monitoring system. DMS supports authentication via the Oracle database authentication mechanisms or, alternately, via an external authentication server such as Oracle Identity Management. DMS also provides the option of encrypting the data on disk as well as while it is being synchronized. Whenever a device connects with DMS, it sends the list of all local application changes to the server; the server updates the central repository with this information. Synchronization can be triggered on demand, whenever there's a change on the device (e.g. a new application installed or an existing application removed), or on a rule-based schedule (e.g. every Saturday). Synchronization is very fast and efficient, since only the changes are propagated. This includes resume capability; should synchronization be interrupted for any reason, the next synchronization will resume where the previous one was interrupted. If the device should be lost or stolen, DMS has the capability to remove the applications and/or data from the device. This ability to control access to sensitive data and applications is critical in the corporate environment.
The central repository also allows the IT manager to track the kinds of applications that mobile users use and to recommend patches and upgrades, while still allowing the mobile user full control over what applications he or she downloads and uses on the device. This is useful since most devices are used for corporate as well as personal information. In certain restricted-use scenarios, the IT manager can also control whether a certain application can be installed on a mobile device. Should an unapproved application be installed, it can easily be removed the next time the device connects with the central server. Oracle Database Mobile Server provides a simple, effective, highly secure and scalable solution for managing the data and applications of the mobile workforce.

    Read the article

  • Blog Buzz - Devoxx 2011

    - by Janice J. Heiss
    Some day I will make it to Devoxx – for now, I'm content to vicariously follow the blogs of attendees and pick up on what's happening. I've been doing more blog "fishing," looking for the best commentary on Devoxx 2011. There's plenty of food for thought – and the ideas are not half-baked. The bloggers are out in full, offering useful summaries and commentary on Devoxx goings-on.
    Constantin Partac, a Java developer and a member of Transylvania JUG, a community from Cluj-Napoca/Romania, offers an excellent summary of the Devoxx keynotes. Here's a sample:
    "Oracle Opening Keynote and JDK 7, 8, and 9 Presentation
    • Oracle is committed to Java and wants to provide support for it on any device.
    • JSE 7 for Mac will be released next week.
    • Oracle would like Java developers to be involved in the JCP, to adopt a JSR and to attend local JUG meetings.
    • JEE 7 will be released next year.
    • JEE 7 is focused on cloud integration; some of the features are already implemented in the GlassFish 4 development branch.
    • JSE 8 will be released in the summer of 2013 due to "enterprise community request," as they cannot keep the pace with an 18-month release cycle.
    • The main features included in JSE 8 are lambda support, project Jigsaw, a new Date/Time API, project Coin++ and added support for sensors.
    JSE 9 will probably focus on some of these features:
    1. self-tuning JVM
    2. improved native language integration
    3. processing enhancements for big data
    4. reification (adding runtime class type info for generic types)
    5. unification of primitive and corresponding object classes
    6. meta-object protocol in order to use types and methods defined in other JVM languages
    7. multi-tenancy
    8. JVM resource management"
    Thanks Constantin!
    Ivan St. Ivanov, of SAP Labs Bulgaria, also commented on the keynotes, with a different focus. He summarizes Henrik Stahl's look ahead to Java SE 8 and JavaFX 3.0; Cameron Purdy on Java EE and the cloud; celebrated Java Champion Josh Bloch on what's good and bad about Java; Mark Reinhold's quick look ahead to Java SE 9; and Brian Goetz on lambdas and default methods in Java SE 8. Here's St. Ivanov's account of Josh Bloch's comments on the pluses of Java:
    "He started with the virtues of the platform. To name a few:
    • Tightly specified language primitives and evaluation order – int is always 32 bits and operations are executed always from left to right, without compilers messing around
    • Dynamic linking – when you change a class, you need to recompile and rebuild just the jar that has it and not the whole application
    • Syntax similarity with C/C++ – most existing developers at that time felt at home
    • Object orientation – it was cool at that time, as functional programming is today
    • It was a statically typed language – this helps with faster runtime, better IDE support, etc.
    • No operator overloading – well, I'm not sure why that is good. Scala has it, for example, and that's why it is far better for defining DSLs. But I will not argue with Josh."
    It's worth checking out St. Ivanov's summary of Bloch's views on what's not so great about Java as well.
    What's Coming in JAX-RS 2.0
    Marek Potociar, Principal Software Engineer at Oracle and currently specification lead of the Java EE RESTful web services API (JAX-RS), blogged about his talk on what's coming in JAX-RS 2.0, scheduled for final release in mid-2012. Here's a taste:
    "Perhaps the most wanted addition to JAX-RS is the Client API, which would complete the JAX-RS story that is currently server-side only. In JAX-RS 2.0 we are adding a completely interface-based and fluent client API that blends in nicely with the existing fluent response builder pattern on the server side. When we started with the client API, the first proposal contained around 30 classes. Thanks to the feedback from our Expert Group we managed to reduce the number of API classes to 14 (2 of them being exceptions)! The resulting API is compact, while at the same time we still managed to create an API that reflects the method invocation context flow (e.g. once you decide on the target URI and start setting headers on the request, your IDE will not try to offer you a URI setter in the code completion). This is a subtle but very important usability aspect of an API…"
    Obviously, Devoxx is a great Java conference, one that this year comes at a time when much is brewing in the platform and beginning to be anticipated.
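    To give a flavour of the fluent style Marek describes, here is a small sketch using the names of the client API as it ultimately shipped in JAX-RS 2.0; the target URL and paths are placeholders, and details differed slightly in the drafts being discussed at the time.
    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.MediaType;

    public class JaxRsClientSketch {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();
            // Build up the request fluently: target URI first, then path,
            // then the accepted media type, then the HTTP method.
            String message = client.target("http://example.com/api")
                                   .path("messages")
                                   .path("1")
                                   .request(MediaType.APPLICATION_JSON)
                                   .get(String.class);
            System.out.println(message);
            client.close();
        }
    }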

    Read the article

  • Goto for the Java Programming Language

    - by darcy
    Work on JDK 8 is well-underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published. Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation Summary Provide the benefits of the time-testing goto control structure to Java programs. The Java language has a history of adding new control structures over time, the assert statement in 1.4, the enhanced for-loop in 1.5,and try-with-resources in 7. Having support for goto is long-overdue and simple to implement since the JVM already has goto instructions. Success Metrics The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty five percent of the JDK's explicit loops are replaced with goto's. Coordination with IDE vendors is expected to help facilitate this goal. Motivation The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and code. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is a also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the removed feature from C and C++. However, the designers of the Java programming languages have revisited these removals before. The enum construct was also left out only to be added in JDK 5 and multiple inheritance was left out, only to be added back by the virtual extension method methods of Project Lambda. 
    As a living language, the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past.

    Description
    From its initial version, the JVM has had two instructions for unconditional transfer of control within a method: goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target, with the onerous restriction that the label must be lexically enclosing.

    The grammar addition for the goto statement is:

        GotoStatement:
            goto Identifier ;

    The new goto statement is similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label.

    Testing
    Since goto will be implemented using largely existing facilities, only light levels of testing are needed.

    Impact
    Compatibility: Since goto is already a keyword, there are no source compatibility implications.
    Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.
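    Since no shipping javac accepts the proposed statement, the closest runnable approximation today is the labeled break/continue the JEP contrasts it with. A sketch of the state-machine idiom the proposal targets, using only standard Java:

        public class StateMachineSketch {
            public static void main(String[] args) {
                int state = 0;
                machine:                      // the label a real goto could target from anywhere
                while (true) {
                    switch (state) {
                        case 0:
                            System.out.println("start");
                            state = 1;
                            continue machine; // jump back to the top of the loop
                        case 1:
                            System.out.println("work");
                            state = 2;
                            continue machine;
                        default:
                            break machine;    // unlike the proposed goto, this label must lexically enclose
                    }
                }
                System.out.println("done");
            }
        }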

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service.

    Part 2 is nice and easy. In Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication.

    Pattern applicability
    This is a good fit for scenarios where:
    - the consumer can run .NET in full trust
    - the environment does not restrict use of external DLLs
    - the runtime environment is secure enough to keep shared secrets
    - the service does not need to know who is consuming it
    - the service does not need to know who the end-user is

    So for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is - say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service and it doesn't care who you are. In this post, we'll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2.

    Authenticating and authorizing with ACS
    In this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down their permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough creating Service Identities):

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: fullTrustConsumer
        Output claim type: net.windows.servicebus.action
        Output claim value: Send

    This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

    Adding a Service Reference
    The Part 2 sample client code is ready to go, but if you want to replicate the steps, you're going to add a WSDL reference, add a reference to Microsoft.ServiceBus and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl

    If you add a Service Reference to that in a new project you'll get a confused config section with a customBinding, and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package (“windowsazure.servicebus”) first and add the service reference - you'll get the same messy config.
    Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:

        <client>
          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                    behaviorConfiguration="SharedSecret"
                    binding="netTcpRelayBinding"
                    contract="FormatService.IFormatService" />
        </client>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SharedSecret">
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="fullTrustConsumer"
                                issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly to the local network service using any WCF binding - the contract is exactly the same. The code is simple, standard WCF stuff:

        using (var client = new FormatService.FormatServiceClient())
        {
            outputString = client.ReverseString(inputString);
        }

    Running the sample
    First, update Solution Items\AzureConnectionDetails.xml with your service bus namespace, and your service identity credentials for the netTcpClient and the provider:

        <!-- ACS credentials for the full trust consumer (Part2): -->
        <netTcpClient identityName="fullTrustConsumer"
                      symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>

    Then rebuild the solution and verify the unit tests work. If they're green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure.

    Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • jqGrid - customizing the multi-select option (restrict single selection and adding custom events)

    - by Renso
    Goal: Using the jqGrid to enable selection of a checkbox for row selection - which is easy to set in the jqGrid - but also only allowing a single row to be selectable at a time, while adding events based on whether the row was selected or de-selected.

    Environment:
    jQuery 1.4.4
    jqGrid 3.4.4a

    Issue: The jqGrid does not support an option to restrict the multi-select to allow only a single selection. You may ask: why bother with the multi-select checkbox function if you only want to allow the selection of a single row? Good question. As an example, you may want to reserve the selection of a row to trigger one kind of event and use the checkbox multi-select to handle a different kind of event; in other words, when I select the row I want something entirely different to happen than when I check off the checkbox for that row. Also, the setSelection method of the jqGrid is a toggle and has no support for determining whether the checkbox has already been selected or not, so it simply acts as a switch - which it is designed to do - with no way out of the box to only check off the box (as in not de-select) rather than toggle it. Furthermore, getGridParam('selrow') does not indicate whether the row was selected or de-selected, which seems a bit strange and is the main reason for this blog post.

    Solution: How this will act: when you check off a multi-select checkbox in the grid and then commence to select another row by checking off that row's multi-select checkbox - I'm not talking about clicking on the row but using the grid's multi-select checkbox - it will de-select the previous selection so that you are always left with only a single selection. Furthermore, once you select or de-select a multi-select checkbox, it fires off an event determined by whether the row was selected or de-selected, not merely clicked on. So if I de-select the row, do one thing, but when selecting it, do another.

    Implementation (this of course is only a partial code snippet):

        multiselect: true,
        multiboxonly: true,
        onSelectRow: function (rowId) {
            var gridSelRow = $(item).getGridParam('selrow');
            var s;
            s = $(item).getGridParam('selarrrow');
            if (!s || !s[0]) {
                $(item).resetSelection();
                $('#productLineDetails').fadeOut();
                lastsel = null;
                return;
            }
            var selected = $.inArray(rowId, s) != -1;
            if (selected) {
                $('#productLineDetails').show();
            }
            else {
                $('#productLineDetails').fadeOut();
            }
            if (rowId && rowId !== lastsel && selected) {
                $(item).GridToForm(gridSelRow, '#productLineDetails');
                if (lastsel) $(item).setSelection(lastsel, false);
            }
            lastsel = rowId;
        },

    In the example code above, the "item" property is the id of the jqGrid.
    The following two settings ensure that the jqGrid adds the new column to select rows with a checkbox, and also disallow selection by clicking on the row, forcing the user to click on the multi-select checkbox to select the row:

        multiselect: true,
        multiboxonly: true,

    Unfortunately, the getGridParam('selrow') call only returns the row the user clicked on - or rather, the row whose checkbox was clicked - and NOT whether it was selected or de-selected; but it retrieves the row id, which is what we will need. The following piece gets all rows that have been selected so far, as in have a checked-off multi-select checkbox:

        var s;
        s = $(item).getGridParam('selarrrow');

    Now determine if the checkbox the user just clicked on was selected or de-selected:

        var selected = $.inArray(rowId, s) != -1;

    If it was selected, then show the container "#productLineDetails"; if not, hide it away. The following instruction populates a form with the grid data using the built-in GridToForm method (just mentioned here as an example) ONLY if the row has been selected and NOT de-selected, but more importantly it de-selects any other multi-select checkbox that may have been selected:

        if (rowId && rowId !== lastsel && selected) {
            $(item).GridToForm(gridSelRow, '#productLineDetails');
            if (lastsel) $(item).setSelection(lastsel, false);
        }

    Read the article

  • Terminal Colors Not Working

    - by Matt Fordham
    I am accessing an Ubuntu 10.04.2 LTS server via SSH from OSX. Recently the colors stopped working. I think it happened while I was installing/troubleshooting RVM, but I am not positive. In .bashrc I uncommented force_color_prompt=yes, and when I run env | grep TERM I get TERM=xterm-color. But still no colors. Any ideas? Thanks! Here is the output of cat .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines in the history. See bash(1) for more options
        # ... or force ignoredups and ignorespace
        HISTCONTROL=ignoredups:ignorespace

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # set a fancy prompt (non-color, unless we know we "want" color)
        case "$TERM" in
            xterm-color) color_prompt=yes;;
        esac

        # uncomment for a colored prompt, if the terminal has the capability; turned
        # off by default to not distract the user: the focus in a terminal window
        # should be on the output of commands, not on the prompt
        force_color_prompt=yes

        if [ -n "$force_color_prompt" ]; then
            if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
                # We have color support; assume it's compliant with Ecma-48
                # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
                # a case would tend to support setf rather than setaf.)
                color_prompt=yes
            else
                color_prompt=
            fi
        fi

        if [ "$color_prompt" = yes ]; then
            PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
        else
            PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
        fi
        unset color_prompt force_color_prompt

        # If this is an xterm set the title to user@host:dir
        case "$TERM" in
        xterm*|rxvt*)
            PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
            ;;
        *)
            ;;
        esac

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            alias dir='dir --color=auto'
            alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # some more ls aliases
        alias ll='ls -alF'
        alias la='ls -A'
        alias l='ls -CF'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

        [[ -s "/usr/local/rvm/scripts/rvm" ]] && . "/usr/local/rvm/scripts/rvm"

    Read the article

  • How to "apt-get -f install" without deleting software?

    - by Jeggy
    I know Guitar Pro doesn't support 64-bit, but I did get it to work with this command:

        jeggy@jeggy-XPS:~$ sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb
        [sudo] password for jeggy:
        Selecting previously unselected package guitarpro6:i386.
        (Reading database ... 285729 files and directories currently installed.)
        Unpacking guitarpro6:i386 (from GuitarPro6-rev9063.deb) ...
        dpkg: dependency problems prevent configuration of guitarpro6:i386:
         guitarpro6:i386 depends on gksu.
        dpkg: error processing guitarpro6:i386 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Errors were encountered while processing:
         guitarpro6:i386

    And even after I get that error the program works perfectly fine, and updating and adding PPAs to the system works great, but when I'm trying to install some other software I get this error:

        jeggy@jeggy-XPS:~$ sudo apt-get install elinks
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         elinks : Depends: libfsplib0 (>= 0.9) but it is not going to be installed
                  Depends: liblua50 (>= 5.0.3) but it is not going to be installed
                  Depends: liblualib50 (>= 5.0.3) but it is not going to be installed
                  Depends: libtre5 but it is not going to be installed
                  Depends: elinks-data (= 0.12~pre5-7ubuntu1) but it is not going to be installed
         guitarpro6:i386 : Depends: gksu:i386 but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And whenever I run "apt-get -f install" I get this:

        jeggy@jeggy-XPS:~$ sudo apt-get -f install
        [sudo] password for jeggy:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          dconf-gsettings-backend:i386 python-levenshtein python-indicate libav-tools
          libstartup-notification0:i386 libxmuu1:i386 libavfilter-extra-2 libbabl-0.0-0
          libgegl-0.0-0 libgconf2-4:i386 python-vobject libgtk-3-0:i386 libpam-cap:i386
          python-utidylib libdconf0:i386 python-iniparse python-xmpp
          libpam-gnome-keyring:i386 libxcb-util0:i386 python-farstream
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          guitarpro6:i386
        0 upgraded, 0 newly installed, 1 to remove and 7 not upgraded.
        1 not fully installed or removed.
        After this operation, 84,0 MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 286979 files and directories currently installed.)
        Removing guitarpro6:i386 ...
        dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/updater' not empty so not removed.
        dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/Data/Soundbanks' not empty so not removed.
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...

    And now Guitar Pro is deleted. How can I install Guitar Pro and still be able to install other software afterwards?
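    (Aside: the only unmet dependency dpkg reports for the package above is gksu, so a plausible - untested - approach is to satisfy that dependency first and then re-run the dpkg install, leaving apt in a consistent state; package and file names exactly as they appear in the question:

        sudo apt-get install gksu:i386          # satisfy the declared dependency first
        sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb
        sudo apt-get install elinks             # other installs should now resolve normally

    On some setups the plain gksu package may be needed instead of the :i386 flavor.)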

    Read the article

  • SQLAuthority News – Stay Connected and Social Media

    - by pinaldave
    I think I have finally gotten back my faith in social media. If you are following my blog I am sure you are aware of my views on social media – SQLAuthority News – Social Media Confusion – Twitter, FaceBook, LinkedIn and Me. I was not happy about how social media was evolving. Whenever I went to Twitter, LinkedIn or Facebook, I noticed the same updates everywhere. I just thought I was wasting my time doing the same thing everywhere. I strongly believe that there is no dictator on the internet. Nobody has authority over others; everybody can express their ideas as long as it is not violating others' privacy and it is not morally wrong. I have decided that instead of trying to improve the world, I should change myself and adjust my needs. Here are a few things I have done to relieve my social media confusion.

    Twitter
    - I un-followed people who were taking up my time with too many updates.
    - I un-followed people who hardly updated at all.
    - I did not follow anybody else's list, as I have no control over who other people follow.
    - I follow not only serious SQL people but some fun stuff as well.
    - I removed all my friends who were on Facebook and repeating the same updates on Twitter. I engage with them on Facebook.
    - I followed people who are very conversational on Twitter.
    - I let anybody follow me.
    - I update all my blog posts through at least five tweets online.
    - I decided to re-tweet at least five of my favorite tweets of the day; this way I force myself to remain active in the community.
    Follow me on Twitter!

    LinkedIn
    - I updated my career and professional info on LinkedIn.
    - I keep my LinkedIn profile updated with my latest jobs and career news.
    - I let anybody connect with me on LinkedIn.
    - I specify my email address in my profile, keeping it easy for those who want to add me.
    - I read all the profile-related updates of my connections - it is very valuable to know who is where and what changes are happening.
    - I do not add my personal tweets or comments to my LinkedIn profile. I just keep it professional.
    Link with me at LinkedIn

    Facebook
    - I use Facebook only for personal friends.
    - I visit all of my friends at regular intervals and make sure that they are really my friends.
    - I often remove friends from my Twitter list who are sending duplicate updates.
    - I upload my family photos as well as family updates on Facebook, making sure that only my approved friends are able to read my updates.
    - I keep my Facebook very personal and I often chat with my friends on Facebook chat.

    I am no longer confused about social media and I think I am using it appropriately. As I said, one cannot decide for others how to use social media; you can only decide for yourself. I have finally found my peace with social media.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is progressively becoming available as part of Mobile Work Force Management and other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it is on a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in your preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not autogenerate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework.

    The Login Id is case-sensitive now, which was not supported under userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.
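    As a purely illustrative sketch of the kind of rule such a class extension might encode - this is plain Java, not the Oracle Utilities SDK API, and the class and method names are hypothetical - a site could derive a unique 8-character userid from a long login id like this:

        import java.util.HashSet;
        import java.util.Set;

        public class UseridGenerator {
            // Stand-in for a lookup of userids already issued in the security repository.
            private final Set<String> issued = new HashSet<>();

            // Derive an 8-character userid from a long login id such as an email address.
            public String deriveUserid(String loginId) {
                String base = loginId.replaceAll("[^A-Za-z0-9]", "").toUpperCase();
                base = base.substring(0, Math.min(6, base.length()));
                for (int i = 0; i < 100; i++) {          // append a counter until unique
                    String candidate = base + String.format("%02d", i);
                    if (issued.add(candidate)) {
                        return candidate;
                    }
                }
                throw new IllegalStateException("no free userid for " + loginId);
            }
        }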

    Read the article

  • What Counts For A DBA: ESP

    - by Louis Davidson
    Now I don’t want to get religious here, and I’m not going to, but what I’m going to describe in this ‘What Counts for a DBA’ installment sometimes feels like magic. Often I will spend hours thinking about the solution to a design issue or coding problem, working diligently to try to come up with a solution, and then finally just give up with the feeling that I’m not even qualified to be a data entry clerk, much less a data architect. At this point I often take a walk (or sometimes a nap), and then it hits me. I realize that I have the answer just sitting in my brain, ready to implement. This phenomenon is not limited to walks either; it can happen almost any time after I stop obsessing about a problem. I call this phenomenon ESP (or Extra-Sensory Programming). Another term for this could be ‘sleeping on it’, and while the idiom tends to mean letting time pass to actively think about a problem, sleeping on a problem also lets you relax and let your brain do the work.

    I first noticed this back in my college days when I would play video games for hours on end. We would get stuck deep in some dungeon, unable to find a way out, playing for days on end until we were beaten down tired. Once we gave up and walked away, the solution would usually be there waiting for one of us before we came back to play the next day. Sometimes it would be in the form of a dream, and sometimes it would just be that the problem was now easy to solve when we started to play again. While it worked great for video games, it never occurred when I studied English Literature for hours on end, or even when I worked for the same sort of frustrating hours attempting to solve a homework problem in Calculus. I believe that the difference was that I was passionate about the video game, and certainly far less so about homework where people used the word “thou” instead of “you” or x to represent a number.

    This phenomenon occurs somewhat more often in my current work as a professional data programmer, because I am very passionate about SQL and love those aspects of my career choice. Every day that I get to draw a new data model to solve a customer issue, or write a complex SELECT statement to ferret out the answer to a complex data question, is a great day. I hope it is the same for any reader of this blog. But, unfortunately, while the day on the whole is great, a heck of a lot of noise is generated in work life. There are the typical project deadlines, along with the requisite project manager sitting on your shoulders shouting slogans to try to make you go faster. Add in office politics, and the occasional family issues that permeate the mind, and you lose the ability to think deeply about any problem, not to mention occasionally forgetting your own name. These office realities, coupled with a difficult SQL problem staring at you from your widescreen monitor, will slowly suck the life force out of your body, making it seem impossible to solve the problem.

    This is when the walk starts; or a nap. Maybe you hide from the madness under your desk like George Costanza hides from Steinbrenner on Seinfeld. Forget about the problem. Free your mind from the insanity of the problem and your surroundings. Then let your training and education deep in your brain take over and see if it will passively do the rest for you.
    If you don’t end up with a solution, the worst-case scenario is that you have had a bit of exercise or rest, and you won’t have heard the phrase “better is the enemy of good enough” even once… which certainly will do your brain some good. Once you stop whipping your brain for information, inspiration may just strike, and instead of a humdrum solution you find one you hadn’t even considered, almost magically. So, my beloved manager, next time you have an urgent deadline and you come across me taking a nap, creep away quietly, because I’m working, doing some extra-sensory programming.

    Read the article

  • How To Get Web Site Thumbnail Image In ASP.NET

    - by SAMIR BHOGAYTA
    Overview
    One very common requirement of many web applications is to display a thumbnail image of a web site. A typical example is to provide a link to a dynamic website displaying its current thumbnail image, or displaying images of websites with their links as a result of search (I love to see it on Google). Microsoft .NET Framework 2.0 makes it quite easy to do this in an ASP.NET application.

    Background
    In order to generate an image of a web page, first we need to load the web page to get its HTML code, and then this HTML needs to be rendered in a web browser. After that, a screen shot can be taken easily. I think there is no easier way to do this. Before .NET Framework 2.0 it was quite difficult to use a web browser in C# or VB.NET because we either had to use COM interoperability or third-party controls, which became a headache later.

    WebBrowser control in .NET Framework 2.0
    In .NET Framework 2.0 we have a new Windows Forms WebBrowser control, which is a wrapper around the old shdocvw.dll. All you really need to do is to drop a WebBrowser control from your Toolbox on your form in .NET Framework 2.0. If you have not used the WebBrowser control yet, it's quite easy to use and very consistent with other Windows Forms controls. Some important methods of the WebBrowser control are:

        public bool GoBack();
        public bool GoForward();
        public void GoHome();
        public void GoSearch();
        public void Navigate(Uri url);
        public void DrawToBitmap(Bitmap bitmap, Rectangle targetBounds);

    These methods are self-explanatory, like the Navigate function, which redirects the browser to the provided URL and has a number of useful overloads. The DrawToBitmap method (inherited from Control) draws the current image of the WebBrowser to the provided bitmap.

    Using WebBrowser control in ASP.NET 2.0
    The Solution
    Let's start to implement the solution which we discussed above. First we will define a static method to get the web site thumbnail image.

        public static Bitmap GetWebSiteThumbnail(string Url, int BrowserWidth,
            int BrowserHeight, int ThumbnailWidth, int ThumbnailHeight)
        {
            WebsiteThumbnailImage thumbnailGenerator = new WebsiteThumbnailImage(
                Url, BrowserWidth, BrowserHeight, ThumbnailWidth, ThumbnailHeight);
            return thumbnailGenerator.GenerateWebSiteThumbnailImage();
        }

    The WebsiteThumbnailImage class has a public method named GenerateWebSiteThumbnailImage which generates the website thumbnail image in a separate STA thread and waits for the thread to exit. In this case, I decided to use the Join method of the Thread class to block the initial calling thread until the bitmap is actually available, and then return the generated web site thumbnail.

        public Bitmap GenerateWebSiteThumbnailImage()
        {
            Thread m_thread = new Thread(new ThreadStart(_GenerateWebSiteThumbnailImage));
            m_thread.SetApartmentState(ApartmentState.STA);
            m_thread.Start();
            m_thread.Join();
            return m_Bitmap;
        }

    The _GenerateWebSiteThumbnailImage method creates a WebBrowser control object and navigates to the provided Url. We also register for the DocumentCompleted event of the web browser control to take the screen shot of the web page. To pass the flow to the other controls we need to call Application.DoEvents() and wait for the completion of the navigation until the browser state changes to Complete in a loop.
        private void _GenerateWebSiteThumbnailImage()
        {
            WebBrowser m_WebBrowser = new WebBrowser();
            m_WebBrowser.ScrollBarsEnabled = false;
            m_WebBrowser.Navigate(m_Url);
            m_WebBrowser.DocumentCompleted +=
                new WebBrowserDocumentCompletedEventHandler(WebBrowser_DocumentCompleted);
            while (m_WebBrowser.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();
            m_WebBrowser.Dispose();
        }

    The DocumentCompleted event will be fired when the navigation is completed and the browser is ready for the screen shot. We will get the screen shot using the DrawToBitmap method as described previously, which will return the bitmap of the web browser. Then the thumbnail image is generated using the GetThumbnailImage method of the Bitmap class, passing it the required thumbnail image width and height.

        private void WebBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            WebBrowser m_WebBrowser = (WebBrowser)sender;
            m_WebBrowser.ClientSize = new Size(this.m_BrowserWidth, this.m_BrowserHeight);
            m_WebBrowser.ScrollBarsEnabled = false;
            m_Bitmap = new Bitmap(m_WebBrowser.Bounds.Width, m_WebBrowser.Bounds.Height);
            m_WebBrowser.BringToFront();
            m_WebBrowser.DrawToBitmap(m_Bitmap, m_WebBrowser.Bounds);
            m_Bitmap = (Bitmap)m_Bitmap.GetThumbnailImage(m_ThumbnailWidth, m_ThumbnailHeight, null, IntPtr.Zero);
        }

    One more example here: http://www.codeproject.com/KB/aspnet/Website_URL_Screenshot.aspx
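    For completeness, a hedged usage sketch - assuming the static helper lives on the WebsiteThumbnailImage class shown above, and with an arbitrary example URL and output path:

        // Render the page at 1024x768 and shrink it to a 200x150 thumbnail.
        Bitmap thumbnail = WebsiteThumbnailImage.GetWebSiteThumbnail(
            "http://www.example.com", 1024, 768, 200, 150);
        thumbnail.Save(@"C:\temp\site-thumbnail.png", System.Drawing.Imaging.ImageFormat.Png);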

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:
    - More easily absorb shocks due to various forms of financial-economic stress
    - Improve risk management and governance
    - Enhance regulatory reporting and transparency

    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry. Some of the key features of Basel III include:
    - A stronger capital base
      - More stringent capital standards and higher capital requirements
      - Introduction of capital buffers
    - Additional risk coverage
      - Enhanced quantification of counterparty credit risk
      - Credit valuation adjustments
      - Wrong-way risk
      - Asset Value Correlation Multiplier for large financial institutions
    - Liquidity management and monitoring
    - Introduction of a leverage ratio
    - Even more rigorous data requirements

    To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: Data, Models and Compliance.

    Data Challenges
    - Data quality: all standard dimensions of Data Quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
    - Data lineage: data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation out-dated quickly.
    - Data dictionaries: a strong and clean business glossary is needed, with proper identification of business owners for the data.
    - Data integrity: a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    - Data relevance/coverage: data must be relevant to all portfolios and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.

    Model Challenges
    - Model development: requires highly trained resources with both quantitative and subject matter expertise.
    - Model validation: all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    - Model documentation: all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    - Risk and finance integration: this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management - and they need to somehow be equal. This is tricky at best from an implementation perspective.

    Compliance Challenges
    - Rules interpretation: some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    - Gap identification and remediation: internal identification and remediation of gaps ensures smoother Basel compliance and audit processes.
    However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
    - Qualification readiness: providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.

    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:
    - How will Basel III play into their Risk Appetite?
    - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
    - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?

    After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:
    - Increases consumer confidence, which in turn supports banking activity
    - Ensures that adequate funding is available for individuals and companies
    - Puts regulators at ease, allowing bankers to focus on banking

    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank.

    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • Create a Smoother Period Close

    - by Get Proactive Customer Adoption Team
    Do You Use Oracle E-Business Suite Products Involved in Accounting Period Closes?
    We understand that closing the periods in your system at the end of an accounting period enables your company to make the right business decisions. We also know this requires prior preparation, good procedures, and quality data. To help you meet that need, Oracle E-Business Suite's proactive support team developed the Period Close Advisor to help your organization conduct a smooth period close for its Oracle E-Business Suite 12 products.

    The Period Close Advisor is composed of logical steps you can follow, aligned with the business requirement flow. It will help with an orderly close of the product sub-ledgers before posting to the General Ledger. It combines recommendations and industry best practices with tips from subject matter experts for troubleshooting. You will find the patches needed and references to assist you during each phase.

    Get to know the E-Business Suite Period Close Advisor
    The Period Close Advisor does more than help the users of Oracle E-Business Suite products close their period. You can use it before and throughout the period to stay on track. Proactively, it assists you as you set up your company's period close process. During the period, it helps evaluate your system's readiness for initiating the period close procedures and prepare the system for a smooth period close experience. The Period Close Advisor gets you to answers when you have questions and gives you the latest news from us on Oracle E-Business Suite's period close. The Period Close Advisor is the right place to start.

    How to Use the E-Business Suite Period Close Advisor
    The Period Close Advisor graphically guides you through your period close. The tabs show you the products (also called applications or sub-ledgers) covered, and the product order required for the processing to handle any dependencies between the products. Users of all the products it covers can benefit from the information it contains.

    Structure of the Period Close Advisor
    Clicking on a tab gives you the details for that particular step in the process. This includes an overview, showing how the products fit into the overall period close process, and step-by-step information on each phase needed to complete the period close for the tab. You will also find multimedia training and related resources you can access if you need more information. Once you click on any of the phases, you see guidance for that phase. This can include:
    - Tips from the subject-matter experts - here are examples from a Cash Management specialist:
      "For organizations with high transaction volumes bank statements should be loaded and reconciled on a daily basis."
      "The automatic reconciliation process can be set up to create miscellaneous transactions automatically."
    - References to useful Knowledge Base documents:
      - Information Centers for the products and features
      - FAQs on functionality
      - Known Issues and patches with both the errors and their solutions
      - How-to documents that explain in detail how to use a feature or complete a process
      - White papers that give an overview of a feature, list the setup required to use the feature, etc.
    - Links to diagnostics that help debug issues you may find in a process
    - Additional information and alerts about a process or reports that can help you prevent issues from surfacing

    This excerpt from the "Process Transaction" phase for the Receivables product lists documents you'll find helpful.
    How to Get Started with the Period Close Advisor
    The Period Close Advisor is a great resource that can be used both as a proactive tool (while setting up your period-end procedures) and as the first document to refer to when you encounter an issue during the period close procedures!

    As mentioned earlier, the order of the product tabs in the Period Close Advisor gives you the recommended order of closing. The first thing to do is to ensure that you are following the prescribed order for closing the period, if you are using more than one sub-ledger. Next, review the information shared in the Evaluate and Prepare and Process Transactions phases. Make sure that you are following the recommended best practices and that you have applied the recommended patches. The Reconcile phase gives you the recommended steps to follow for reconciling a sub-ledger with the General Ledger. Ensure that your reconciliation procedure aligns with those steps.

    At any stage during the period close processing, if you encounter an issue, you can revisit the Period Close Advisor. Choose the product you have an issue with and then select the phase you are in. You will be able to review information that can help you find a solution to the issue you are facing.

    Stay Informed
    Oracle updates the Period Close Advisor as we learn of new issues and information. Bookmark the Oracle E-Business Suite Period Close Advisor [ID 335.1] and keep coming back to it for the latest information on period close.

    Read the article

< Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >