Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • The Top Ten Security Top Ten Lists

    - by Troy Kitch
    As a marketer, we're always putting together the top 3, or 5 best, or an assortment of top ten lists. So instead of going that route, I've put together my top ten security top ten lists. These are not only for security practitioners, but also for the average Joe/Jane; because who isn't concerned about security these days? Now, there might not be ten for each one of these lists, but the title works best that way. Starting with my number ten (in no particular order): 10. Top 10 Most Influential Security-Related Movies Amrit Williams pulls together a great collection of security-related movies. He asks for comments on which one made you want to get into the business. I would have to say that my most influential movie(s), the ones that made me want to get into the business of "stopping the bad guys", would have to be the James Bond series. I grew up on James Bond movies: thwarting the bad guy and saving the world. I recall being both ecstatic and worried when the Silicon Valley-themed "A View to a Kill" hit theaters: "An investigation of a horse-racing scam leads 007 to a mad industrialist who plans to create a worldwide microchip monopoly by destroying California's Silicon Valley." Yikes! 9. Top Ten Security Careers From movies that got you into the career, here's a top 10 list of security-related careers. It starts with number ten, Information Security Analyst, and ends with number one, Malware Analyst. They point out the significant growth in security careers and indicate that "according to the Bureau of Labor Statistics, the field is expected to experience growth rates of 22% between 2010-2020." If you are interested in getting into the field, Oracle has many great opportunities all around the world. 8. Top 125 Network Security Tools A bit outside of the range of 10, the top 125 Network Security Tools is an important list because it includes a prioritized list of key security tools practitioners are using in the hacking community, regardless of whether they are vendor supplied or open source. The exhaustive list provides ratings, reviews, searching, and sorting. 7. Top 10 Security Practices I have to give a shout out to my alma mater, Cal Poly, SLO: Go Mustangs! They have compiled their list of top 10 practices for students and faculty to follow. Educational institutions are a common target of web-based attacks and miscellaneous errors according to the 2014 Verizon Data Breach Investigations Report. 6. (ISC)2 Top 10 Safe and Secure Online Tips for Parents This list is arguably the most important list on my list. The tips were "gathered from (ISC)2 member volunteers who participate in the organization's Safe and Secure Online program, a worldwide initiative that brings top cyber security experts into schools to teach children ages 11-14 how to protect themselves in a cyber-connected world…If you are a parent, educator or organization that would like the Safe and Secure Online presentation delivered at your local school, or would like more information about the program, please visit here." 5. Top Ten Data Breaches of the Past 12 Months This type of list is always changing, so it's nice to have a current one here from Techradar.com. They've compiled and commented on the top breaches. It is likely that most readers here were affected in some way or another. 4. Top Ten Security Comic Books Although mostly about physical security controls, I threw this one in for fun. My vote for #1 (not on the list) would be Professor X. The guy can breach confidentiality, integrity, and availability just by messing with your thoughts. 3. The IOUG Data Security Survey's Top 10+ Threats to Organizations The Independent Oracle Users Group annual survey on enterprise data security, Leaders Vs. Laggards, highlights what Oracle Database users deem as the top 12 threats to their organization. You can find a nice graph on page 9; Figure 7: Greatest Threats to Data Security. 2. The Ten Most Common Database Security Vulnerabilities Though I don't necessarily agree with all of the vulnerabilities in this order... I like a list that focuses on where two-thirds of your sensitive and regulated data resides (Source: IDC). 1. OWASP Top Ten Project The Open Web Application Security Project puts together their annual list of the 10 most critical web application security risks that organizations should be including in their overall security, business risk and compliance plans. In particular, SQL injection risks continue to rear their ugly head each year. Oracle Audit Vault and Database Firewall can help prevent SQL injection attacks and monitor database and system activity as a detective security control. Did I miss any?

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle's data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator's integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI's integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James' next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle's data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly ever failed; the integration processes fix the errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan's photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK's 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform. The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and the Oracle Retail Suite. They use GoldenGate's log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday's blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle's data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • apt-get install was interrupted

    - by user3475299
    I am new to Ubuntu. I got the following lines after an interrupted apt-get install. Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic update-initramfs: Generating /boot/initrd.img-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic /usr/sbin/grub-mkconfig: 14: /etc/default/grub: nouveau.modeset=0: not found run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.13.0-29-generic.postinst line 1025. No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: error processing package linux-image-3.13.0-29-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-29-generic: linux-image-extra-3.13.0-29-generic depends on linux-image-3.13.0-29-generic; however: Package linux-image-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-image-extra-3.13.0-29-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.13.0-29-generic; however: Package linux-image-3.13.0-29-generic is not configured yet. linux-image-generic depends on linux-image-extra-3.13.0-29-generic; however: Package linux-image-extra-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.13.0.29.35); however: Package linux-image-generic is not configured yet. dpkg: error processing package linux-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-27-generic: linux-image-extra-3.13.0-27-generic depends on linux-image-3.13.0-27-generic; however: Package linux-image-3.13.0-27-generic is not configured yet. dpkg: error processing package linux-image-extra-3.13.0-27-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-29-generic: linux-signed-image-3.13.0-29-generic depends on linux-image-3.13.0-29-generic (= 3.13.0-29.53); however: Package linux-image-3.13.0-29-generic is not configured yet. 
linux-signed-image-3.13.0-29-generic depends on linux-image-extra-3.13.0-29-generic (= 3.13.0-29.53); however: Package linux-image-extra-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-signed-image-3.13.0-29-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-generic: linux-signed-image-generic depends on linux-signed-image-3.13.0-29-generic; however: Package linux-signed-image-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-signed-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-generic: linux-signed-generic depends on linux-signed-image-generic (= 3.13.0.29.35); however: Package linux-signed-image-generic is not configured yet. dpkg: error processing package linux-signed-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-27-generic: linux-signed-image-3.13.0-27-generic depends on linux-image-3.13.0-27-generic (= 3.13.0-27.50); however: Package linux-image-3.13.0-27-generic is not configured yet. linux-signed-image-3.13.0-27-generic depends on linux-image-extra-3.13.0-27-generic (= 3.13.0-27.50); however: Package linux-image-extra-3.13.0-27-generic is not configured yet. dpkg: error processing package linux-signed-image-3.13.0-27-generic (--configure): dependency problems - leaving unconfigured Setting up libxkbcommon-x11-0:amd64 (0.4.1-0ubuntu1) ... Setting up libqt5gui5:amd64 (5.2.1+dfsg-1ubuntu14.2) ... Processing triggers for libc-bin (2.19-0ubuntu6) ... Errors were encountered while processing: linux-image-3.13.0-27-generic linux-image-3.13.0-29-generic linux-image-extra-3.13.0-29-generic linux-image-generic linux-generic linux-image-extra-3.13.0-27-generic linux-signed-image-3.13.0-29-generic linux-signed-image-generic linux-signed-generic linux-signed-image-3.13.0-27-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
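    The "/usr/sbin/grub-mkconfig: 14: /etc/default/grub: nouveau.modeset=0: not found" line (and the resulting exit code 127) suggests that /etc/default/grub is malformed: a kernel option appears to sit on a line of its own, where the shell tries to run it as a command. A hedged recovery sketch follows; the assumption that the option belongs inside GRUB_CMDLINE_LINUX_DEFAULT is an inference, not something the output proves.

        # Sketch only: repair the malformed /etc/default/grub, then let dpkg finish.
        sudo nano /etc/default/grub
        # If a bare line such as
        #   nouveau.modeset=0
        # exists, fold it into the quoted default command line instead, e.g.:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"

        sudo update-grub            # should now succeed instead of exiting with 127
        sudo dpkg --configure -a    # configure the half-installed kernel packages
        sudo apt-get -f install     # clear the remaining dependency problems

    If update-grub still fails, the message it prints should point at whatever else is wrong in that file.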

    Read the article

  • WordPress SEO Plugins to make your Blog Search Engine Friendly

    - by Vaibhav
    WordPress is the most common blogging system in use today and its use as a CMS is also widespread. With hundreds of millions of sites using WordPress, getting correct SEO for your WordPress-based blog or site is very important. We get regular queries from people who want Search Engine Optimisation for their site or blog which is made using WordPress. Here is a list of 16 of the best WordPress plug-ins that can help you achieve better rankings: All in one SEO Pack This is the most popular of all the SEO plugins for WordPress. It is easy to use and is compatible with most of the WordPress plugins. It works as a complete SEO package – automatically generating META tags and optimizing your titles for search engines while avoiding duplicate content. You can also include META tags manually (Meta title, Meta description and Meta keywords) for all pages and posts on your website. HeadSpace2 HeadSpace2 is available in different languages; you can manage a wide range of SEO tasks related to metadata, tag your posts, and set custom descriptions and titles. So your page can rank with the intended relevancy on search engines, and you can load different settings for different pages. Platinum SEO plugin Automatic 301 redirects for permalink changes, META tag generation, duplicate-content avoidance, SEO optimization of post and page titles, and lots of other features. TGFI.net SEO WordPress Plugin It's a modified version of the All in one SEO Pack. It has some unique features over the All-in-one SEO plugin: it generates titles, meta descriptions and meta keywords automatically when overrides are not present. Google XML Sitemaps Sitemaps generated by this tool are supported by Google, Yahoo, Bing, and Ask. We all know sitemaps make indexing of web pages easier for web crawlers. Crawlers can retrieve the complete structure of a site and more information from sitemaps. The plugin notifies all major search engines every time you create a new post. Sitemap Generator You can generate a highly customizable sitemap for your WordPress site. You can choose what to show and what not to show, and you can list the items in your choice of order. It supports pages, permalinks and multi-level categories. SEO Slugs This plugin generates more search-engine-friendly URLs for your site. Slugs are the filenames assigned to your posts; this plugin removes common words like 'a', 'the', 'in', 'what', 'you' from the slug that is assigned automatically to your post. SEO Post Links This is a plugin similar to SEO Slugs; it removes unnecessary keywords from the slug to make it short and SEO friendly, and you can fix the number of characters used in the slug. Automatic SEO links With this tool you can create automatic links in your posts. You can use it for internal linking or external linking too. Just select your words, anchor text, target URL and nature of links (Do follow / No follow), and the plugin will replace the matches found in your posts. WP Backlinks A helpful plugin for link exchange: whenever a webmaster submits a link for exchange, the plugin will spider the webmaster's site for a reciprocal link, and if everything checks out, your link will be exchanged. SEO Title Tag You can optimize the title tags of your WordPress blog through this plugin. You can also override the title tag with custom titles; mass editing and title tags for 404 pages are the main features of this plugin. 404 SEO plugin With this plugin you can customize the 404 page of your site; you can give a customized error message and links to relevant pages of your site. Redirection A powerful plugin to manage 301 redirection and redirection-related logs. With this plugin you can track 404 errors and keep a log of all redirected URLs, and it can redirect a post automatically when that post's URL changes. AddToAny This plugin helps your readers share, save, email and bookmark your posts and pages. It supports more than a hundred social bookmarking, networking and sharing sites. SEO Friendly Images You can make SEO-friendly images available on your site with the help of this tool. It updates images with proper titles and ALT tags. Robots Meta A plugin which prevents search engines from indexing comments on your posts, login and admin pages. It also allows you to add tags for individual pages.

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Coordinator for a leading South African retailer at store level. Don't get me wrong, there is absolutely nothing wrong with an honest day's labor and in fact I highly recommend it, however I'm obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn't all I did. My duties extended to another interesting annual occurrence – stock take. Despite being a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete. Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn't get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry over tens of thousands of different items… well, let's just say I believe that's when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort and then count. Thereafter they had to record their findings on an accompanying piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn't have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, and then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now, years later, this rule or guideline still holds true as we develop software (as far removed from counting stock as software development may be). For some reason it is a fundamental guideline we're simply ignorant of. We write our code, we write our tests, and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound. The practice however has its obvious drawbacks. Firstly and mainly, it is resource intensive, and from what I've seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps audit your work. It lends a fresh set of eyes to any project you're working on and, best of all, it is cost effective and fast. Check it out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we'll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn't contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder. As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to "Content" and the Copy to Output Directory to "Copy if newer." This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you'll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio's Build menu will cause errors. Once you have created the class library project there are a few easy steps to change it into a static-file project: The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file because the project doesn't contain any C# code: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files: <Target Name="Build"> <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" /> </Target> <Target Name="Clean"> <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" /> </Target> <Target Name="Rebuild" DependsOnTargets="Clean;Build"> </Target> The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you're working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, personally I get lots of ideas about it. I have seen logging as a very generic term. Let me ask you this question first before I continue writing about logging. What is the first thing that comes to your mind when you hear the word "Logging"? Now ask the same question to the guy standing next to you. I am pretty confident that you will get a different answer from different people. I decided to do this activity and asked 5 SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word "Logging"? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one. Output Clause The very first person replied output clause. Pretty interesting answer to start with. I see exactly what he was thinking. SQL Server 2005 introduced a new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted tables (virtual tables) just like triggers. The OUTPUT clause can be used to return values to the client. The OUTPUT clause can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements (a small runnable sketch appears at the end of this post). Here are some references for Output Clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE Reasons for Using Output Clause – Quiz Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples Error Logs I was expecting someone to mention Error logs when it is about logging. The error log is the first place to look when there is any error either with the application or with the operating system. I have kept the policy to check my server's error log every day. The reason is simple – often enough in my career I have figured out that when I am looking at error logs I find something which I was not expecting. There are cases when I noticed errors in the error log and fixed them before the end user noticed them. Another common practice I always tell my DBA friends to follow is that when any error happens they should find the relevant entries in the error logs and document the same. It is quite possible that they will see the same error in the error log again and be able to fix the error based on the knowledge base which they have created. There can be many different kinds of error log files in SQL Server as well – 1) SQL Server Error Logs 2) Windows Event Log 3) SQL Server Agent Log 4) SQL Server Profiler Log 5) SQL Server Setup Log etc. Here are some references for Error Logs: Recycle Error Log – Create New Log file without Server Restart SQL Error Messages Change Data Capture I was surprised by this answer. I think more than the answer I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never assume about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational 'change tables' rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made. Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008 Tuning the Performance of Change Data Capture in SQL Server 2008 Download Script of Change Data Capture (CDC) CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture Dynamic Management View (DMV) I like this answer. If asked, I would not have come up with DMV right away, but in the spirit of the original question, I think DMVs do log data. A DMV logs or stores or records the various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – High Availability Status, Query Execution Details, SQL Server Resource Status etc. Here are some references for Dynamic Management View (DMV): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns DMV – sys.dm_os_windows_info – Information about Operating System DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28 DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module Transaction Log Impact Detection Using DMV – dm_tran_database_transactions Log Files I almost flipped at this final answer from my friend. This should probably be the first answer. Yes, indeed, the log file logs SQL Server activities. One can write infinite things about the log file. SQL Server uses a log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of the full backup file the database can be brought to the desired state (point in time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of these models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups) as one never knows when a log file backup will come into action and save the day! How to Stop Growing Log File Too Big Reduce the Virtual Log Files (VLFs) from LDF file Log File Growing for Model Database – model Database Log File Grew Too Big master Database Log File Grew Too Big SHRINKFILE and TRUNCATE Log File in SQL Server 2008 Can I just say I loved this month's T-SQL Tuesday question. It really provoked a very interesting conversation around me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
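    As a small aside to the first answer above (the OUTPUT clause), here is a minimal sketch that can be run from a shell with sqlcmd. The server name and the trusted-connection flag are placeholders and assumptions, not part of the original post.

        # Hypothetical example: OUTPUT returns the rows actually inserted and deleted
        # straight to the client, without a separate SELECT.
        sqlcmd -S YourServer -d tempdb -E -Q "
          CREATE TABLE #demo (id INT, val VARCHAR(20));
          INSERT INTO #demo (id, val) OUTPUT inserted.id, inserted.val VALUES (1, 'logged');
          DELETE FROM #demo OUTPUT deleted.id, deleted.val;
        "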

    Read the article

  • Improve the Quality of ePub eBooks with Sigil

    - by Matthew Guay
    Would you like to correct errors in your ePub formatted eBooks, or even split them into chapters and create a Table of Contents?  Here's how you can with the free program Sigil. eBooks are increasingly popular with the rise of eBook readers and reading apps on mobile devices.  We recently showed you how to convert a PDF eBook to ePub format, but as you may have noticed, sometimes the converted file had some glitches or odd formatting.  Additionally, many of the free ePub books available online from sources like Project Gutenberg do not include a table of contents.  Sigil is a free application for Windows, OS X, and Linux that lets you edit ePub files, so let's look at how you can use it to improve your eBooks. Note: Sigil took several moments to open files in our tests, and froze momentarily when we maximized the window.  Sigil is currently pre-release software in active development, so we would expect the bugs to be worked out in future versions.  As usual, only install if you're comfortable testing pre-release software. Getting Started Download Sigil (link below), making sure to select the correct version for your computer.  Run the installer, and select your preferred setup language when prompted. After a moment the installer will appear; set it up as normal. Launch Sigil when it's finished installing.  It opens with a default blank ePub file, so you could actually start writing an eBook from scratch right here. Edit Your ePub eBooks Now you're ready to edit your ePub books.  Click Open and browse to the file you want to edit. Now you can double-click any of the HTML or XHTML files on the left sidebar and edit them just like you would in Word. Or you can choose to view it in Code View and edit the actual HTML directly. The sidebar also gives you access to the other parts of the ePub file, such as Images and CSS styles. If your ePub file has a Table of Contents, you can edit it with Sigil as well.  Click Tools in the menu bar, and then select TOC Editor.  Strangely there is no way to create a new table of contents, but you can remove entries from an existing one.   Convert TXT Files to ePub Many free eBooks online, especially older, out-of-copyright titles, are available in plain text format.  One problem with these files is that they usually use hard returns at the end of lines, so they don't reflow to fill your screen efficiently. Sigil can easily convert these files to the more useful ePub format.  Open the text file in Sigil, and it will automatically reflow the text and convert it to ePub.  As you can see in the screenshot below, the text in the eBook does not have hard line-breaks at the end of each line, and will be much more readable on mobile devices. Note that Sigil may take several moments opening the book, and may even become unresponsive while analyzing it.   Now you can edit your eBook, split it into chapters, or just save it as is.  Either way, make sure to select Save As to save your book in ePub format. Conclusion As mentioned before, Sigil seems to run slowly at times, especially when editing large eBooks.  But it's still a nice solution to edit and extend your ePub eBooks, and even convert plain text eBooks to the nicer ePub format.  Now you can make your eBooks work just like you want, and read them on your favorite device! If you feel comfortable editing HTML files, check out our article on how to edit ePub eBooks with your favorite HTML editor. 
    Link Download Sigil from Google Code Download free ePub eBooks from Project Gutenberg

    Read the article

  • some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    It gives the following details of the package operation failing after using the sofware updater on 12.10: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 217566 files and directories currently installed.) Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-36-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ... 
locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-42-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ... Unpacking replacement memtest86+ ... dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack): unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb Error in function: Output of df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda6 38G 6.5G 30G 19% / udev 3.5G 4.0K 3.5G 1% /dev tmpfs 1.5G 924K 1.5G 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 3.6G 296K 3.6G 1% /run/shm none 100M 64K 100M 1% /run/user /dev/sda1 197M 87M 111M 44% /boot /dev/sda5 243G 971M 230G 1% /home
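    The repeated "unable to make backup link of `./boot/System.map-...' before installing new version: Operation not permitted" failures are what actually abort the upgrade; the locale warnings are noise by comparison. One common cause (an assumption here, not something the log proves) is an immutable attribute on files under /boot, or /boot being mounted read-only. A hedged checklist:

        # Sketch only: confirm /boot is writable and nothing under it is immutable.
        mount | grep boot                                   # expect rw in the mount options
        lsattr /boot                                        # an 'i' flag marks immutable files
        sudo chattr -i /boot/System.map-3.5.0-36-generic    # only if it is flagged
        sudo chattr -i /boot/memtest86+.bin                 # only if it is flagged
        sudo apt-get -f install                             # then retry the kernel upgrade

        # The locale warnings are separate and can be silenced with, for example:
        sudo locale-gen en_IN.UTF-8
        sudo update-locale LANG=en_IN.UTF-8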

    Read the article

  • How to build the Darling project on Ubuntu 13.10?

    - by mirror27
    The Darling project is an open source Darwin/OS X emulation layer for Linux. I downloaded the source code with git and tried to build it with cmake but it failed. The document says I need these packages: clang 3.1+ GCC 4.6+ (yes, you still need GCC for header files) libkqueue libbsd gnustep-base ("Foundation") gnustep-gui ("Cocoa") gnustep-corebase ("CoreFoundation") libobjc2 libudev openssl libasound libav libgc but I could not find them on apt or in software center. Also cmake showed this result: No build type selected, default to Debug This is a 64-bit build Building ObjC ABI 2 You have called ADD_LIBRARY for library Carbon without any source files. This typically indicates a problem with your CMakeLists.txt file You have called ADD_LIBRARY for library AppKit without any source files. This typically indicates a problem with your CMakeLists.txt file You have called ADD_LIBRARY for library auto without any source files. This typically indicates a problem with your CMakeLists.txt file CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files: LIBGNUSTEPCOREBASE_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory /home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin LIBKQUEUE_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory 
/home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin LIBOBJC2_INCLUDE_DIR used as include directory in directory /home/mirror/work/darling/darling/src/motool used as include directory in directory /home/mirror/work/darling/darling/src/util used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o used as include directory in directory /home/mirror/work/darling/darling/src/libdyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/dyld used as include directory in directory /home/mirror/work/darling/darling/src/libSystem used as include directory in directory /home/mirror/work/darling/darling/src/libltdl used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation used as include directory in directory /home/mirror/work/darling/darling/src/libncurses used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling used as include directory in directory /home/mirror/work/darling/darling/src/IOKit used as include directory in directory /home/mirror/work/darling/darling/src/Foundation used as include directory in directory /home/mirror/work/darling/darling/src/Carbon used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL used as include directory in directory /home/mirror/work/darling/darling/src/thin used as include directory in 
directory /home/mirror/work/darling/darling/src/thin used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin Configuring incomplete, errors occurred! How can I build the Darling project?
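    The variables reported as NOTFOUND in that output are CMake saying it cannot locate the headers for gnustep-corebase, libkqueue and libobjc2. A hedged sketch of one way forward, assuming those dependencies have already been built and installed from source; the -D variable names come straight from the error output, but the include paths below are guesses to be replaced with wherever the headers actually landed:

        # Sketch only: tell CMake where the missing headers live.
        cd ~/work/darling/darling && mkdir -p build && cd build
        cmake .. \
          -DLIBGNUSTEPCOREBASE_INCLUDE_DIR=/usr/local/include/GNUstepBase \
          -DLIBKQUEUE_INCLUDE_DIR=/usr/include/kqueue \
          -DLIBOBJC2_INCLUDE_DIR=/usr/local/include
        make

    Where the distribution ships -dev packages for these libraries, installing them instead should let CMake find the headers without the explicit flags.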

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every software has issues, or as we like to call them "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that like with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time and typically they cost more plus you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop and much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that instead of statically analyzing your code, they analyze what your code does dynamically at runtime. Whether you have to setup some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11 and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want you to find the issue with code execution? Use a debugger - let’s break this down further next. 
This is how I think of debugging: There is post mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements to trace events (e.g. ETW) to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is for example what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should or continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of this areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • "corrupted filesystem tarfile - corrupted package archive" error

    - by Justin
    I was trying to install a program and it said that my dependencies were unmet, and that I should run sudo apt-get -f install. I have moved everything I didn't need in /etc/apt/sources.list.d/ into the trash. My source.list is all Natty while I am running Oneiric. So maybe I need a new source.list? But here are the things I have: justin@justin-000:~$ sudo apt-get -f install [sudo] password for justin: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-image-3.0.0-13-generic Suggested packages: fdutils linux-doc-3.0.0 linux-source-3.0.0 linux-tools The following NEW packages will be installed: linux-image-3.0.0-13-generic 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. 2 not fully installed or removed. Need to get 0 B/36.5 MB of archives. After this operation, 117 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 270736 files and directories currently installed.) Unpacking linux-image-3.0.0-13-generic (from .../linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb (--unpack): corrupted filesystem tarfile - corrupted package archive No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic run-parts: executing /etc/kernel/postrm.d/zz-extlinux 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic P: Checking for EXTLINUX directory... found. P: Writing config for /boot/vmlinuz-3.0.0-12-generic... P: Writing config for /boot/vmlinuz-2.6.38-11-generic... P: Installing debian theme... done.
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-13-generic /boot/vmlinuz-3.0.0-13-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-13-generic_3.0.0-13.22_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) justin@justin-000:~$ sudo apt-get update justin@justin-000:~$ sudo apt-get update Ign http://dl.google.com stable InRelease Ign http://dl.google.com stable InRelease Get:1 http://dl.google.com stable Release.gpg [198 B] Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-security InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Get:2 http://dl.google.com stable Release.gpg [198 B] Get:3 http://dl.google.com stable Release [1,347 B] Get:4 http://dl.google.com stable Release [1,338 B] Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release.gpg Get:5 http://dl.google.com stable/main i386 Packages [1,220 B] Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Ign http://dl.google.com stable/main TranslationIndex Get:6 http://dl.google.com stable/main i386 Packages [464 B] Ign http://ppa.launchpad.net oneiric InRelease Hit http://us.archive.ubuntu.com oneiric Release Ign http://ppa.launchpad.net oneiric InRelease Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://us.archive.ubuntu.com oneiric/main i386 Packages Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main Sources Hit http://us.archive.ubuntu.com oneiric-security/restricted Sources Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-security/universe Sources Hit http://us.archive.ubuntu.com oneiric-security/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-security/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-security/universe TranslationIndex Ign http://dl.google.com stable/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://us.archive.ubuntu.com
oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Ign http://dl.google.com stable/main Translation-en Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Ign http://dl.google.com stable/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Ign http://dl.google.com stable/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-security/main Translation-en Hit http://us.archive.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Fetched 4,765 B in 2s (2,158 B/s) Reading package lists... Done justin@justin-000:~$

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code:

1. Get the file name from the elements file of the Feature
2. Check out the file from the master page gallery
3. Upload the file to the master page gallery
4. Check in the file to the master page gallery

Here's the code in my Feature Receiver:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    try
    {
        SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

        using (SPWeb curweb = GetCurWeb(properties))
        {
            foreach (SPElementDefinition ele in col)
            {
                if (string.Compare(ele.ElementType, "Module", true) == 0)
                {
                    // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                    //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                    //         Path="MasterPages/myMaster.master" />
                    // </Module>
                    string Url = ele.XmlDefinition.Attributes["Url"].Value;
                    foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                    {
                        string Url2 = file.Attributes["Url"].Value;
                        string Path = file.Attributes["Path"].Value;
                        string fileType = file.Attributes["Type"].Value;

                        if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                        {
                            //Check out file in document library
                            SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                            if (existingFile != null)
                            {
                                if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                {
                                    throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                }
                                else
                                {
                                    existingFile.CheckOut();
                                    existingFile.Update();
                                }
                            }

                            //Upload file to document library
                            string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                            string fileName = System.IO.Path.GetFileName(filePath);
                            char slash = Convert.ToChar("/");
                            string[] folders = existingFile.ParentFolder.Url.Split(slash);

                            if (folders.Length > 2)
                            {
                                Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                    Logger.LogEntryType.Information); //custom logging component
                            }

                            SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                            FileStream fs = File.OpenRead(filePath);

                            SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                            myLibrary.Update();
                            newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                            newFile.Update();
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        string msg = "Error occurred during feature activation";
        Logger.logException(ex, msg, "");
    }
}

/// <summary>
/// Using a Feature's properties, get a reference to the Current Web
/// </summary>
/// <param name="properties"></param>
public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
{
    SPWeb curweb;

    //Check if the parent of the web is a site or a web
    if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
    {
        //Get web from parent
        curweb = (SPWeb)properties.Feature.Parent;
    }
    else
    {
        //Get web from Site
        using (SPSite cursite = (SPSite)properties.Feature.Parent)
        {
            curweb = (SPWeb)cursite.OpenWeb();
        }
    }

    return curweb;
}

This did the trick.
It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!). I did run into what I would classify as a strange issue with one of my subsites, but that's the topic for another blog post.
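As a footnote to the token discussion above: if you upload the new master page under a new name instead of overwriting the existing file, the site can be pointed at it from a feature receiver. A minimal sketch, assuming the new file is already in the master page gallery (the file name here is purely illustrative):

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPWeb web = properties.Feature.Parent as SPWeb;
    if (web != null)
    {
        // MasterUrl is used by pages referencing ~masterurl/default.master;
        // CustomMasterUrl is used by pages referencing ~masterurl/custom.master.
        string newMasterUrl = web.ServerRelativeUrl.TrimEnd('/') + "/_catalogs/masterpage/myNewMaster.master";
        web.MasterUrl = newMasterUrl;
        web.CustomMasterUrl = newMasterUrl;
        web.Update();
    }
}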

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX probably has managed better to capture the eye of developers and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running  on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers. To sum it up: APEX from Oracle Cloud Database Service get the development environment up and running in minutes no involvement from the internal IT department required (not for infrastructure, platform, or development) superior availability and scalability is offered by Oracle users from anywhere in the world can be invited to access the application developers from anywhere in the world can participate in creating and maintaining the application In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is  a credit card (a budget of $175 per month), an internet-connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and if Oracle Database XE does not suffice, software licenses are required as well. And this set up always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same. 
Getting tailored applications or applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment. My misunderstanding For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys- I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the  business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn into action right away. The business does not have to build the application - it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades and life cycle management and integration. To get applications up and running quickly and start turning ideas into action and results rightaway. That is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP  development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (that is not including several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous life times. They fit into a trend of agile development and rapid life cycle management, with fairly light weight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced. APEX and ADF – a match made in heaven?! (or at least in the sky) Note that using APEX only for cloud based database with REST-ful Services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application for example that runs aginst REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the Cloud based APEX powered REST-ful services running against the Oracle Cloud Database Service! 
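To make the consumption story concrete, here is a minimal C# sketch of a client calling such an APEX RESTful service over HTTP, assuming a hypothetical instance URL and resource path (both invented for illustration rather than defined by any real service):

using System;
using System.Net.Http;

class ApexRestClient
{
    static void Main()
    {
        // Hypothetical endpoint: an APEX RESTful service exposing an "employees" resource as JSON.
        var serviceUri = new Uri("https://myapexinstance.db.cloud.oracle.com/apex/hr/employees/");

        using (var client = new HttpClient())
        {
            // Any REST-capable consumer (ADF Mobile, another APEX app, plain C#, ...) can do the same.
            string json = client.GetStringAsync(serviceUri).Result;
            Console.WriteLine(json);
        }
    }
}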
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in, allowing for a cloud-based mobile app. To learn more about Oracle Database Cloud Service or Oracle Cloud, visit cloud.oracle.com or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    Currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday if anyone missed that :-). One of the new features of SharePoint 2010 is to set up a Content Type Hub (which is a part of the Metadata Service Application), which is a hub for all Content Types that other Site Collections can subscribe to. That is you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult but you must make it very careful to avoid a lot of work and troubleshooting. Here is a short tutorial with a few tips and tricks to make it easy for you to get started. Determine location of Content Type Hub First of all you need to decide in which Site Collection to place your Content Type Hub; in the root site collection or a specific one. I think using a specific Site Collection that only acts as a Content Type Hub is the best way, there are no best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be for instance a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled upon it (check the TaxonomyFeatureStapler feature for which site templates that can be used). Configure Managed Metadata Service Application Next you need to create your Managed Metadata Service Application or configure the existing one, Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you already have created it. In the bottom of the dialog window when you are creating the service application or when you are editing the properties is a section to fill in the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it. The only way to change it is to rebuild the whole managed metadata service application! Also make sure that you enter the URL correctly. I did copy and paste the URL once and got the /default.aspx in the URL which funked the whole service up. Make sure that you only use the URL to the Site Collection of the hub. Now you have to set up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the managed metadata service application and clicking properties. A new dialog window opens and there you need to click the Consumes content types from the Content Type Gallery at nnnn. Now you are free to syndicate your Content Types from the Hub. Publish Content Types To publish a Content Type from the hub you need to go to Site Settings > Content Types and select the content type that you would like to publish. Then select Manage publishing for this content type. This takes you to a page from where you can Publish, Unpublish or Republish the content type. Once the content type is published it can take up to an hour for the subscribing Site Collections to get it. This is controlled by the Content Type Subscriber job that is scheduled to run once an hour. To speed up your publishing just go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run now and you content type is very soon available for use. 
Published Content Type status You can check the status of the content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones that are subscribed and finally check the publishing error log. This error log is very useful for detecting errors during the publishing. For instance if you use any features such as ratings, metadata, document ids in your content type hub and your destination site collection does not have those features available this will be reported here.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care on how much VMs my web app runs? Isn't that a problem to solve for the Windows Azure engineers / software? And what if I need the file system, why can't I simply get a virtual filesystem ? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with accompanying database. Both are written in .NET, using ASP.NET and use a SQL Server database each. The product website offers files to download by customers, very simple. You have a couple of options to host these websites: Buy a server, place it in a rack at an ISP and run the sites on that server Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as well as the files stored, and the databases are hosted in the same server as the other shared databases. Hire a VM, install your OS of choice at an ISP, and host the sites on that VM, basically the same as the first option, except you don't have a physical server At some cloud-vendor, either host the sites 'shared' or in a VM. See above. With all of those options, scalability is a problem, even the cloud-based ones, though not due to the same reasons: The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers which requires you to add replication and other overhead Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs, however that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites. If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB sub system, because what's in memory in instance 1 might not be in memory in instance 2. This might sounds trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did that on a single machine: state is something of the past, you have to store every byte of state in either a DB or in a viewstate or in a cookie somewhere so with the next request, all state information is available through the request, as nothing is kept in-memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM. 
This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way how e.g. Windows Azure offers its cloud services: in blocks of VMs. Offer a scalable, flexible VM which extends with my needs Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually ran on physical hardware (e.g. partitioned), I don't care, as that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to windows!" Yes, but Microsoft is Windows. Windows Azure is their service, they can make whatever change to what they offer to make it look like it's windows. Yet, they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they would need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about, is that a low-level VM which runs a guest OS, or is that VM a different kind of VM? The flexible VM: .NET's CLR ? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, however this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see, the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains? Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain is simply increasing, but not fragmented: all resources are available to the application: this means that the problem of how to scale is back to where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through a .NET ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. 
I.o.w.: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it was a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move into that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice at the cost of me needing to refactor my website code so it can run in the straight jacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truck load of VMs to run it: almost all websites created by developers can run on just a few VMs at most. Yet, the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers to solve that. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Recovering from apt-get upgrade gone wrong due to a full disk

    - by Peter
    I was performing an apt-get upgrade on an Ubuntu 12.04.5 LTS box that hadn't been updated in a little while and the upgrade failed due to 'No space left on device'. After a little while I worked out space meant inodes and I have freed some up but unfortunately things have been left something askew. I have tried manually installing the old versions of packages mentioned using dpkg -i but that doesn't help. I have tried apt-get upgrade and apt-get -f install to no avail. Results are below. Any ideas how to fix things up? FIXED: Installing the earlier versions again manually via dpkg -i and then apt-get -f install has done the trick. Not sure why this didn't work the first time. The packages in question are listed below but they will presumably vary. libssl1.0.0_1.0.1-4ubuntu5.14_i386.deb linux-headers-3.2.0-64-generic-pae_3.2.0-64.97_i386.deb linux-image-generic-pae_3.2.0.64.76_i386.deb linux-headers-3.2.0-64_3.2.0-64.97_all.deb linux-headers-generic-pae_3.2.0.64.76_i386.deb root@unlinked:/tmp# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. libssl-dev : Depends: libssl1.0.0 (= 1.0.1-4ubuntu5.14) but 1.0.1-4ubuntu5.17 is installed linux-generic-pae : Depends: linux-image-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed Depends: linux-headers-generic-pae (= 3.2.0.64.76) but 3.2.0.67.79 is installed E: Unmet dependencies. Try using -f. root@unlinked:/tmp# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-43-generic-pae linux-headers-3.2.0-38-generic-pae linux-headers-3.2.0-41-generic-pae linux-headers-3.2.0-36-generic-pae linux-headers-3.2.0-63-generic-pae linux-headers-3.2.0-58-generic-pae linux-headers-3.2.0-60-generic-pae linux-headers-3.2.0-55-generic-pae linux-headers-3.2.0-40 linux-headers-3.2.0-41 linux-headers-3.2.0-36 linux-headers-3.2.0-37 linux-headers-3.2.0-43 linux-headers-3.2.0-38 linux-headers-3.2.0-44 linux-headers-3.2.0-39 linux-headers-3.2.0-45 linux-headers-3.2.0-51 linux-headers-3.2.0-52 linux-headers-3.2.0-53 linux-headers-3.2.0-48 linux-headers-3.2.0-54 linux-headers-3.2.0-60 linux-headers-3.2.0-55 linux-headers-3.2.0-61 linux-headers-3.2.0-56 linux-headers-3.2.0-57 linux-headers-3.2.0-63 linux-headers-3.2.0-58 linux-headers-3.2.0-59 linux-headers-3.2.0-52-generic-pae linux-headers-3.2.0-44-generic-pae linux-headers-3.2.0-39-generic-pae linux-headers-3.2.0-37-generic-pae linux-headers-3.2.0-59-generic-pae linux-headers-3.2.0-61-generic-pae linux-headers-3.2.0-56-generic-pae linux-headers-3.2.0-53-generic-pae linux-headers-3.2.0-48-generic-pae linux-headers-3.2.0-45-generic-pae linux-headers-3.2.0-40-generic-pae linux-headers-3.2.0-57-generic-pae linux-headers-3.2.0-54-generic-pae linux-headers-3.2.0-51-generic-pae Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libssl-dev linux-generic-pae The following packages will be upgraded: libssl-dev linux-generic-pae 2 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade. 2 not fully installed or removed. Need to get 0 B/1,427 kB of archives. After this operation, 1,024 B of additional disk space will be used. Do you want to continue [Y/n]? 
y dpkg: dependency problems prevent configuration of libssl-dev: libssl-dev depends on libssl1.0.0 (= 1.0.1-4ubuntu5.14); however: Version of libssl1.0.0 on system is 1.0.1-4ubuntu5.17. dpkg: error processing libssl-dev (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.64.76); however: Version of linux-image-generic-pae on system is 3.2.0.67.79. linux-generic-pae depends on linux-headers-generic-pae (= 3.2.0.64.76); however: Version of linux-headers-generic-pae on system is 3.2.0.67.79. dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: libssl-dev linux-generic-pae E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • C# 4.0: Covariance And Contravariance In Generics

    - by Paulo Morgado
    C# 4.0 (and .NET 4.0) introduced covariance and contravariance to generic interfaces and delegates. But what is this variance thing? According to Wikipedia, in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometrical or physical entities changes when passing from one coordinate system to another.(*) But what does this have to do with C# or .NET? In type theory, the type T is greater (>) than the type S if S is a subtype of (derives from) T, which means that there is a quantitative description for types in a type hierarchy. So, how do covariance and contravariance apply to C# (and .NET) generic types? In C# (and .NET), variance applies to generic type parameters and not to the resulting generic type. A generic type parameter is:

- covariant if the ordering of the generic types follows the ordering of the generic type parameters: Generic<T> ≥ Generic<S> for T ≥ S.
- contravariant if the ordering of the generic types is reversed from the ordering of the generic type parameters: Generic<T> ≤ Generic<S> for T ≥ S.
- invariant if neither of the above apply.

If this definition is applied to arrays, we can see that arrays have always been covariant because this is valid code:

object[] objectArray = new string[] { "string 1", "string 2" };
objectArray[0] = "string 3";
objectArray[1] = new object();

However, when we try to run this code, the second assignment will throw an ArrayTypeMismatchException. Although the compiler was fooled into thinking this was valid code because an object is being assigned to an element of an array of object, at run time there is always a type check to guarantee that the runtime element type of the array is greater than or equal to the type of the instance being assigned to the element. In the above example, because the runtime type of the array is array of string, the first assignment of array elements is valid because string ≥ string, and the second is invalid because string ≥ object does not hold. This leads to the conclusion that, although arrays have always been covariant, they are not safely covariant – code that compiles is not guaranteed to run without errors. In C#, the way to define a generic type parameter as covariant is using the out generic modifier:

public interface IEnumerable<out T>
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T>
{
    T Current { get; }
    bool MoveNext();
}

Notice the convenient use of the pre-existing out keyword. Besides the benefit of not having to remember a new hypothetical covariant keyword, out is easier to remember because it defines that the generic type parameter can only appear in output positions — read-only properties and method return values. In a similar way, the way to define a type parameter as contravariant is using the in generic modifier:

public interface IComparer<in T>
{
    int Compare(T x, T y);
}

Once again, the use of the pre-existing in keyword makes it easier to remember that the generic type parameter can only be used in input positions — write-only properties and method non-ref and non-out parameters. Because covariance and contravariance apply only to the generic type parameters, a generic type definition can have both covariant and contravariant generic type parameters in its definition:

public delegate TResult Func<in T, out TResult>(T arg);

A generic type parameter that is not marked covariant (out) or contravariant (in) is invariant.
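A short usage sketch, not taken from the original article, may help make the direction of each conversion concrete:

using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // Covariance (out T): IEnumerable<string> can be used where IEnumerable<object>
        // is expected, because T only appears in output positions.
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        // Contravariance (in T): IComparer<object> can be used where IComparer<string>
        // is expected, because T only appears in input positions.
        IComparer<object> objectComparer = Comparer<object>.Default;
        IComparer<string> stringComparer = objectComparer;

        // Func<in T, out TResult> combines both: contravariant parameter, covariant result.
        Func<object, string> describeObject = o => o.ToString();
        Func<string, object> describeString = describeObject;

        foreach (object o in objects)
        {
            Console.WriteLine(o);
        }
        Console.WriteLine(stringComparer.Compare("a", "b")); // negative: "a" sorts before "b"
        Console.WriteLine(describeString("variance"));       // "variance"
    }
}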
All the types in the .NET Framework where variance could be applied to its generic type parameters have been modified to take advantage of this new feature. In summary, the rules for variance in C# (and .NET) are: Variance in type parameters are restricted to generic interface and generic delegate types. A generic interface or generic delegate type can have both covariant and contravariant type parameters. Variance applies only to reference types; if you specify a value type for a variant type parameter, that type parameter is invariant for the resulting constructed type. Variance does not apply to delegate combination. That is, given two delegates of types Action<Derived> and Action<Base>, you cannot combine the second delegate with the first although the result would be type safe. Variance allows the second delegate to be assigned to a variable of type Action<Derived>, but delegates can combine only if their types match exactly. If you want to learn more about variance in C# (and .NET), you can always read: Covariance and Contravariance in Generics — MSDN Library Exact rules for variance validity — Eric Lippert Events get a little overhaul in C# 4, Afterward: Effective Events — Chris Burrows Note: Because variance is a feature of .NET 4.0 and not only of C# 4.0, all this also applies to Visual Basic 10.
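The delegate combination rule above is the one that most often surprises people, so here is a small illustrative sketch (Base and Derived are made-up types): the variance-based assignment compiles, but combining the two delegates fails at run time because their runtime types differ.

using System;

class Base { }
class Derived : Base { }

class CombineDemo
{
    static void Main()
    {
        Action<Base> actsOnBase = b => Console.WriteLine("handler for Base");
        Action<Derived> actsOnDerived = d => Console.WriteLine("handler for Derived");

        // Contravariance allows an Action<Base> to stand in for an Action<Derived>.
        Action<Derived> viaVariance = actsOnBase;

        try
        {
            // Compiles, but Delegate.Combine requires the runtime delegate types to match
            // exactly, so this throws ArgumentException even though it would be type safe.
            Action<Derived> combined = actsOnDerived + viaVariance;
            combined(new Derived());
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine("Combine failed: " + ex.Message);
        }
    }
}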

    Read the article

  • OSB/OSR/OER in One Domain - QName violates loader constraints

    - by John Graves
    For demos, testing and prototyping, I wanted a single domain which contained three servers:OSB - Oracle Service BusOSR - Oracle Service RegistryOER - Oracle Enterprise Repository These three can work together to help with service governance in an enterprise.  When building out the domain, I found errors in the OSR server due to some conflicting classes from the OSB.  This wouldn't be an issue if each server was given a unique classpath setting with the node manager, but I was having the node manager use the standard startup scripts. The domain's bin/setDomainEnv.sh script has a large set of extra libraries added for OSB which look like this: if [ "${POST_CLASSPATH}" != "" ] ; then POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}" export POST_CLASSPATH else POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar" export POST_CLASSPATH fi if [ "${PRE_CLASSPATH}" != "" ] ; then PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}" export PRE_CLASSPATH else PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar" export PRE_CLASSPATH fi POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\ ${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\ ${DOMAIN_HOME}/config/osb" I didn't take the time to sort out exactly which jar was causing the problem, but I simply surrounded this block with a conditional statement: if [ "${SERVER_NAME}" == "osr_server1" ] ; then POST_CLASSPATH=""else if [ "${POST_CLASSPATH}" != "" ] ; then POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}" export POST_CLASSPATH else POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar" export POST_CLASSPATH fi if [ "${PRE_CLASSPATH}" != "" ] ; then PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}" export PRE_CLASSPATH else PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar" export PRE_CLASSPATH fi POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\ ${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\ ${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\ ${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\ ${DOMAIN_HOME}/config/osb" fi I could have also just done an if [ ${SERVER_NAME} = "osb_server1" ], but I would have also had to include the AdminServer because they are needed there too.  Since the oer_server1 didn't mind, I did the negative case as shown above. 
To help others find this post, I'm including the error that was reported in the OSR server before I made this change. ####<Mar 30, 2012 4:20:28 PM EST> <Error> <HTTP> <localhost.localdomain> <osr_server1> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:30e96542:13662023753:-8000-000000000000001c> <1333084828916> <BEA-101017> <[ServletContext@470316600[app:registry module:registry.war path:/registry spec-version:null]] Root cause of ServletException. java.lang.LinkageError: Class javax/xml/namespace/QName violates loader constraints at com.idoox.wsdl.extensions.PopulatedExtensionRegistry.<init>(PopulatedExtensionRegistry.java:84) at com.idoox.wsdl.factory.WSDLFactoryImpl.newDefinition(WSDLFactoryImpl.java:61) at com.idoox.wsdl.xml.WSDLReaderImpl.parseDefinitions(WSDLReaderImpl.java:419) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:309) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:272) at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:198) at com.idoox.wsdl.util.WSDLUtil.readWSDL(WSDLUtil.java:126) at com.systinet.wasp.admin.PackageRepositoryImpl.validateServicesNamespaceAndName(PackageRepositoryImpl.java:885) at com.systinet.wasp.admin.PackageRepositoryImpl.registerPackage(PackageRepositoryImpl.java:807) at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:611) at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:643) at com.systinet.wasp.admin.PackageRepositoryImpl.update(PackageRepositoryImpl.java:553) at com.systinet.wasp.admin.PackageRepositoryImpl.init(PackageRepositoryImpl.java:242) at com.idoox.wasp.ModuleRepository.loadModules(ModuleRepository.java:198) at com.systinet.wasp.WaspImpl.boot(WaspImpl.java:383) at org.systinet.wasp.Wasp.init(Wasp.java:151) at com.systinet.transport.servlet.server.Servlet.init(Unknown Source) at weblogic.servlet.internal.StubSecurityHelper$ServletInitAction.run(StubSecurityHelper.java:283) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.StubSecurityHelper.createServlet(StubSecurityHelper.java:64) at weblogic.servlet.internal.StubLifecycleHelper.createOneInstance(StubLifecycleHelper.java:58) at weblogic.servlet.internal.StubLifecycleHelper.<init>(StubLifecycleHelper.java:48) at weblogic.servlet.internal.ServletStubImpl.prepareServlet(ServletStubImpl.java:539) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:244) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:184) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3732) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256) at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)

    Read the article

  • International Radio Operators Alphabet in F# &amp; Silverlight &ndash; Part 1

    - by MarkPearl
    So I have been delving into F# more and more and thought the best way to learn the language is to write something useful. I have been meaning to get some more Silverlight knowledge (up to now I have mainly been doing WPF) so I came up with a really simple project that I can actually use at work. Simply put – I often get support calls from clients wanting new activation codes. One of our main apps was written in VB6 and had its own “security” where it would require about a 45 character sequence for it to be activated. The catch being that each time you reopen the program it would require a different character sequence, which meant that when we activate clients’ systems we have to do it live! This involves us either referring them to a website, or reading the characters to them over the phone, and since nobody in the office knows the IROA off by heart we would come up with some interesting words to represent characters… 9 times out of 10 the client would type in the wrong character and we would have to start all over again… with this app I am hoping to reduce the errors of reading characters over the phone by treating it like a ham radio. My “Silverlight” application will allow the user to input a series of characters and the system will then generate the equivalent IROA words… very basic stuff, e.g.

Character Input – abc
Words Generated – Alpha Bravo Charlie

After listening to Anders Hejlsberg on Dot Net Rocks Show 541, he mentioned that he felt many applications could make use of F# but on an almost silo basis – meaning that you would write modules that leant themselves to Functional Programming in F# and then incorporate them into a solution where the front end may be in C# or where you would have some other sort of glue. I buy into this kind of approach, so in this project I will use F# to do my very intensive “Business Logic” and will use Silverlight/C# to do the front end.

F# Business Layer

I am no expert at this, so I am sure to get some feedback on ways I could improve my algorithm. My approach was really simple. I would need a function that would convert a single character to a string – i.e. ‘A’ –> “Alpha” – and then I would need a function that would take a string of characters, convert them into a sequence of characters, and then apply my converter to return a sequence of words… make sense? Let’s start with the CharToString function:

let CharToString (element:char) =
    match element.ToString().ToLower() with
    | "1" -> "1"
    | "5" -> "5"
    | "9" -> "9"
    | "2" -> "2"
    | "6" -> "6"
    | "0" -> "0"
    | "3" -> "3"
    | "7" -> "7"
    | "4" -> "4"
    | "8" -> "8"
    | "a" -> "Alpha"
    | "b" -> "Bravo"
    | "c" -> "Charlie"
    | "d" -> "Delta"
    | "e" -> "Echo"
    | "f" -> "Foxtrot"
    | "g" -> "Golf"
    | "h" -> "Hotel"
    | "i" -> "India"
    | "j" -> "Juliet"
    | "k" -> "Kilo"
    | "l" -> "Lima"
    | "m" -> "Mike"
    | "n" -> "November"
    | "o" -> "Oscar"
    | "p" -> "Papa"
    | "q" -> "Quebec"
    | "r" -> "Romeo"
    | "s" -> "Sierra"
    | "t" -> "Tango"
    | "u" -> "Uniform"
    | "v" -> "Victor"
    | "w" -> "Whiskey"
    | "x" -> "XRay"
    | "y" -> "Yankee"
    | "z" -> "Zulu"
    | element -> "Unknown"

Quite simple: an element is passed in, this element is then converted to a lowercase single character string and then matched up with the equivalent word.
If by some chance a character is not recognized, "Unknown" will be returned… I now need a function that can take a string, parse each character of the string, and generate a new sequence with the converted words…

let ConvertCharsToStrings (s:string) =
    s
    |> Seq.toArray
    |> Seq.map(fun elem -> CharToString(elem))

Here… the Seq.toArray converts the string to a sequence of characters. I then searched for some way to parse through every element in the sequence. Originally I tried Seq.iter, but I think my understanding of what iter does was incorrect. Eventually I found Seq.map, which applies a function to every element in a sequence and then creates a new collection with the processed elements. It turned out to be exactly what I needed… To test that everything worked I created one more function that parses through every element in a sequence and prints it. At this point I realized that Seq.iter would be ideal for this… So my testing code is below…

let PrintStrings items =
    items |> Seq.iter(fun x -> Console.Write(x.ToString() + " "))

let newSeq = ConvertCharsToStrings("acdefg123")
PrintStrings newSeq
Console.ReadLine()

Pretty basic stuff I guess… I hope my approach was right? In Part 2 I will look into doing a simple Silverlight Frontend, referencing the projects together and deploying….
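Looking ahead to Part 2, the F# functions above can be called from the C# side of the Silverlight solution once the F# project is referenced. A rough sketch, assuming the functions end up in a module compiled as a class named IroaConverter (the name is an assumption, not something defined in this post):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // F#'s seq<string> surfaces to C# as IEnumerable<string>.
        // IroaConverter is the hypothetical compiled name of the F# module.
        IEnumerable<string> words = IroaConverter.ConvertCharsToStrings("abc");
        Console.WriteLine(string.Join(" ", words)); // Alpha Bravo Charlie
    }
}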

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though, I am NOT talking about creating and replacing PLSQL objects directly into a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third party source control system, right? Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice.  I was in the middle of talking about how it’s better to do your PLSQL work in the Procedure Editor when Chet pipes up - Don’t do it that way, that’s wrong Just press play to edit the PLSQL directly in the database Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PLSQL. After a few back-and-forths I got to what Chet’s main objection was, and again I’m going to paraphrase: You should develop offline in your SQL worksheet. Don’t do anything in the database until it’s done. I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc in these offline scripts? No, please give Chet more credit than that. What is the ideal Oracle Development Environment? If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this: Do I have a program that already does something like this – copy and paste Has some smart person already written something like this – copy and paste Start typing in the white-screen-of-panic and bungle along until I get something that half-works Tweek, debug, test until I have fooled my subconscious into thinking that it’s ‘good’ As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development.) So here’s what I like to do: Run a Local Instance of Oracle on my Machine and Develop My Code Privately I take a copy of development – that’s what source control is for afterall – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program, to a mostly complete program. Is this right? If not, it doesn’t seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion. So what’s so wrong with coding directly into a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I’m sure Chet could add many more as my memory fails me at the moment. 
But here goes:

- The development instance isn't properly backed up – would hate to lose that work
- Development is wiped once a week and copied over from Prod – don't laugh
- Someone clobbers your code
- You accidentally on purpose clobber someone else's code
- The more developers you have in a single fish pond, the greater the chance something 'bad' will happen

This Isn't One of Those Posts Where I Tell You What You Should Be Doing

I realize many shops won't be open to allowing developers to stage their own local copies of Oracle. But you should at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval.

SQL Developer can do local file tracking, but you should be using Source Control too!

I will say that I think it's imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants.

I'd love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • HTML5 and Visual Studio 2010

    - by Harish Ranganathan
All of us work with Visual Studio (or the free Visual Web Developer Express Edition) for developing web applications targeting ASP.NET / ASP.NET MVC or Silverlight etc. Over the years, Visual Studio has grown to a great extent. From being a simple, limited-functionality tool in VS.NET 2002 to the multi-faceted, MEF-driven Visual Studio 2010, it has come a long way. And as much as Visual Studio supports rapid web development by generating HTML markup, it also added intellisense for some of the HTML specifications that one would otherwise have to type monotonously every time. For example, in Visual Studio 2010, one can just type the angle bracket "<" and then the first keyword "h" or "x" for html or xhtml respectively, press tab twice, and it will render the entire markup required for XHTML or HTML 1.0/1.1 strict/transitional along with the fully qualified W3C URL. The same holds true for specifying the HTML type declaration.

Now, the difference between HTML and XHTML has been discussed in detail already; if you are interested, you can read about it at http://www.w3schools.com/xhtml/xhtml_html.asp

But the industry trend, and the buzz around, is HTML5. With browsers like IE9 Beta, Google Chrome, Firefox 4 etc. supporting HTML5 standards big time, everyone wants to start developing HTML5 based websites. VS developers (like me) often get asked when VS will start supporting HTML5. VS 2010 was released last year and HTML5 is still a specification under development. Clearly, with the timelines on which we started developing Visual Studio (way back in 2008), the HTML5 specs were almost non-existent. Even today, the HTML5 body recommends not depending fully on the entire markup set, as the specs are still under development and might change in the future.

However, with Visual Studio 2010 SP1 Beta, there is quite a bit of support for HTML5 based web development. In fact, one of my colleagues pointed out that SP1 Beta's major enhancement is its ability to support HTML5 tags and even add server mode to them.

Let's look at the existing validation schema available in Visual Studio (Tools – Options – Validation). This is before installing Visual Studio 2010 SP1 Beta. Clearly, the validation options are restricted to HTML 4.01 and XHTML 1.1 transitional and below.

Also, let's consider using some of the new HTML5 input type elements. (I found this out just today from my friend, also an ASP.NET team member.) <input type="email"> is one of the new input type elements according to the HTML5 specification. Now, this works well if you type it as is in Visual Studio, and the page renders without any issue (since the default behaviour is that if an "undefined" type is specified for an input tag, it falls back to the default mode, which is text). The moment you add <input type="email" runat="server">, you get an error. Naturally you don't get intellisense support for these new tags either.

Once you install Visual Studio 2010 Service Pack 1 Beta from here (it takes a while, so you need to be patient for the installation to complete), you will start getting additional validation templates for HTML5, as below.

Once you set this, you can start using HTML5 elements in your web page without getting errors/warnings.  
Look at the screenshot below for the new "video" tag showing up in intellisense (video is part of the new HTML5 specifications).

Note that you still need to hook up the <!DOCTYPE html> at the top manually, as it doesn't change automatically (from the default XHTML 1.0 strict) when you create a new page. The new input type tags in HTML5 are also supported. One can also use <asp:TextBox type="email">, which will in turn generate the <input type="email"> markup when the page is rendered. In fact, as of SP1 Beta, this is the only way to put the new input type tags with the runat="server" attribute (otherwise you will get the parser error mentioned above; this issue should be fixed by the final release of SP1). Going further, there may be more support for server tags for some of the common HTML5 elements, but this is currently work in progress.

So, other than not having runat="server" support for the new HTML5 input tags, you can pretty much build and target HTML5 websites with Visual Studio 2010 SP1 Beta today. For those who are running Visual Studio 2008, there is also the "HTML5 intellisense for Visual Studio 2010 and 2008" package available for download from http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d/ Note that if you are running Visual Studio 2010, the recommended approach is to install the SP1 Beta, which will be the way forward for HTML5 support in Visual Studio.

Of course, you need to test these on a browser supporting HTML5 such as IE9 Beta, Chrome or Firefox 4. You can download IE9 Beta from here. You can also follow the Visual Web Developer Team Blog for more updates on the stuff they are building. Cheers!!!

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor, they discovered that the format of the data extracted from SAP was not identical to that of the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs, and both structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, given the current workload, was not even an option. I opted for a rather crude alternative: pull in the iDoc in the new typed format and then add a static map at the start of the orchestration to convert the data to the old schema.

Note the WCF-SAP data formats (on the binding tab of the configuration dialog box this is the 'ReceiveIdocFormat' field):

- Typed: Returns an XML document with the hierarchy represented in XML and all fields represented by XML tags.
- RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
- String: Returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line.

I started with the invoice document, and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an XPath query with a namespace manager, which had to be done in custom code (a rough sketch of that kind of lookup is included at the end of this post).

I now tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP, an exception was thrown:

The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy.

While researching this problem I found an article showing how to get the data from SAP using the WCF-SAP adaptor without any XML tags: http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

Reproduction of Mustansir's blog:

Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. 
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

<ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

(I have trimmed part of the control record so that it fits cleanly here on one line.)

Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration, to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:

(a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
(b) Choose "String" for the Node Encoding.

What we've done is use an XPath to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, use the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.

This was potentially a much easier solution than adding the static maps to the orchestrations, and it overcame the issue with 'Typed' delivery documents. Not quite so fast…

Note: when I followed Mustansir's blog, the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines, I was receiving exceptions:

There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

Although the new flat file looked the same as the old one, there was a difference: in the original file all lines in the document were exactly 1064 characters long, while in the new file all lines were truncated at the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode stage of the custom delivery and invoice flat file disassembler pipelines. 
Execute method of the custom pipeline component (this relies on the Microsoft.BizTalk.Message.Interop and Microsoft.BizTalk.Component.Interop namespaces, plus System.IO and System.Text):

public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    // Convert the body stream to a string
    Stream s = null;
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and a
    // getter and setter for the file adapter. Use GetOriginalDataStream to get data instead.
    if (bodyPart != null)
        s = bodyPart.GetOriginalDataStream();

    string newMsg = string.Empty;
    string strLine;
    try
    {
        StreamReader sr = new StreamReader(s);
        strLine = sr.ReadLine();
        while (strLine != null)
        {
            // Pad each line back out to the fixed 1064-character record length
            strLine = strLine.PadRight(1064, ' ') + "\r\n";
            newMsg += strLine;
            strLine = sr.ReadLine();
        }
        sr.Close();
    }
    catch (IOException ex)
    {
        throw new Exception("Error occurred trying to pad the message to 1064 characters", ex);
    }

    // Convert back to a stream and set the Data property
    inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
    // Reset the position of the stream to zero
    inmsg.BodyPart.Data.Position = 0;
    return inmsg;
}
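As a footnote to the namespace problem mentioned earlier (reading the partner ID from the RECPRN field of the typed EDI_DC40 segment), the custom code was essentially an XPath lookup with an XmlNamespaceManager. The original post does not include that code, so the following is only a rough sketch of the approach; the class name, method name and namespace URI are assumptions for illustration, and the real URI would come from the schema generated by the WCF-SAP adaptor:

using System;
using System.Xml;

static class IdocHelper
{
    // Sketch only: resolve the partner ID (RECPRN) from a typed iDoc whose EDI_DC40
    // fields are namespace-qualified. The namespace URI below is a placeholder.
    public static string GetPartnerId(string typedIdocXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(typedIdocXml);

        // Register the iDoc namespace so the XPath can see the qualified fields
        XmlNamespaceManager nsMgr = new XmlNamespaceManager(doc.NameTable);
        nsMgr.AddNamespace("ns", "http://tempuri.org/sap/idoc"); // placeholder URI

        // Find the RECPRN field under the EDI_DC40 control segment
        XmlNode recprn = doc.SelectSingleNode("//ns:EDI_DC40/ns:RECPRN", nsMgr);
        return recprn != null ? recprn.InnerText : string.Empty;
    }
}

A helper along these lines could be called from an Expression shape in the orchestration, with the typed iDoc message passed in as a string.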

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
I recently sent round a list of broken builds at SSW and asked for them to be fixed or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.

Why do we want the build to pass?

- Any developer can pick up a project and build it
- Standards can be enforced
- Constant quality is maintained
- Problems in code are identified early

What could a failed build signify?

- Developers have not built and tested their code properly before checking in.
- Something added depends on a local resource that is not under version control or does not exist on the target computer.
- Developers are not writing tests to cover common problems.
- There are not enough tests to cover problems.

Now we know why, so let's answer Peter's questions.

Where is this list? (can we see it somehow)

You can normally only see the builds listed for each project. But you have a little application called "Build Notifications" on your computer. It is installed when you install Visual Studio 2010.

Figure: Starting the build notification application on Windows 7.

Once you have it open (it may disappear into your system tray) you should click "Options" and select all the projects you are involved in. This application only lists projects that have builds, so don't worry if yours is not listed. That just means you are about to set up a build, right? I just selected ALL projects that have builds.

Figure: All builds are listed here.

In addition to seeing the list you will also get toast notifications of build failures.

How can we get more info on what broke the build? (who is interesting too, to point the finger, but more important is what)

The only thing worse than breaking the build is continuing to develop on a broken build!

Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it.

To find out what is wrong with a build you need to open the build definition. You can open a web version by double clicking the build in the image above, or you can open it from "Team Explorer". Just connect to your project and expand the "Builds" tree, then open the build by double clicking on it.

Figure: Opening a build is easy; just double click it and then open a build run from the list.

Figure: Good example, the build and tests have passed.

Figure: Bad example, there are 133 errors preventing POK from being built on the build server.

For identifying failures see:

- Solution: Getting Silverlight to build on Team Build 2010 RC
- Solution: Testing Web Services with MSTest on Team Build
- Finding the problem on a partially succeeded build

So, Peter asked about blame; let's have a look and see:

Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too).

The rest of the history is lost in the sands of time; there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have the team own them. It is fine for me to go in and set up a build, but the ownership of a build should always reside with the person who broke it last.

Conclusion

This is an example of a pointless build. 
Let's be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don't yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent them from checking in without passing that basic quality gate of "your code builds on another computer", it makes them look more closely at why they can't check in. I have had builds fail because one developer had a "d" drive but the build server did not. That is what they are there to catch.

If you want to know what builds to create and why, I wrote a post on "Do you know the minimum builds to create on any branch?"

Technorati Tags: TFS2010, Gated Check-in, Builds, Build Failure, Broken Build
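As an aside (not from the original post), that "basic quality gate" usually amounts to the gated build compiling the solution and running its test assemblies on the build server. A minimal MSTest sketch of the kind of smoke test such a build could run might look like this; the project, class and assertion below are purely illustrative:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyProject.Tests // hypothetical test project
{
    [TestClass]
    public class BuildSmokeTests
    {
        // The value of a test like this is not the assertion itself; it is that the
        // gated build had to compile the solution and load the test assembly on a
        // machine other than the developer's own before the check-in was accepted.
        [TestMethod]
        public void TestAssemblyLoadsOnTheBuildServer()
        {
            Assert.IsTrue(true, "If this runs, the build produced a loadable test assembly.");
        }
    }
}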

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
My Ubuntu is booting really slowly (Windows boots faster...). I am using a Dell Inspiron 1545 with a Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB RAM and a 500GB HDD, running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg, the culprit seems to be this section, in particular the last three lines:

[26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready
[26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready
[27.355355] Console: switching to colour frame buffer device 170x48
[27.362346] fb0: radeondrmfb frame buffer device
[27.362347] drm: registered panic notifier
[27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0
[27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1
[30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1
[51.708241] CE: hpet increased min_delta_ns to 20113 nsec
[59.448029] eth2: no IPv6 routers present

But I have no idea how to start debugging this.

$ sudo lshw -C video
*-display
  description: VGA compatible controller
  product: RV710 [Mobility Radeon HD 4300 Series]
  vendor: Hynix Semiconductor (Hyundai Electronics)
  physical id: 0
  bus info: pci@0000:01:00.0
  version: 00
  width: 32 bits
  clock: 33MHz
  capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
  configuration: driver=fglrx_pci latency=0
  resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff

After loading the proprietary driver my new dmesg log is below (starting from the first major time gap):

[2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7. 
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present

    Read the article
