Search Results

Search found 1450 results on 58 pages for 'fly trap'.


  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rain, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to improve efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systemic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times - especially the end customer - from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you’ve noticed my blogging activity has reduced in frequency and technical content lately, it’s primarily due to all of the conferences I’ve been attending, speaking at, or planning in the past few months.  This past Friday six other dedicated individuals and I put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work.  For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft’s MIX conference and end the day with a private showing of the then just-released Star Trek movie.  This year’s conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600.  Even more amazing was the fact that we had 592 people show up on the day of the event for the lowest drop-off percentage of any conference I’ve been to before.   Nerd Dinner and Swag Bags     The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance.  Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits.  After the nerd dinner 20+ volunteers headed to the conference location and helped us stuff swag bags.  This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts.  A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night.  My sleep schedule also thanks you. Morning of Stir Trek     After getting a decent amount of sleep I arrived at the Marcus Crosswoods theater at 6am to begin setting up for the day.  Jody Morgan and I were in charge of registration, so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking in attendees.  Despite having 600+ people, registration went fairly smoothly and got the day off to a great start.  I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop.  For any of you that know me, you’ll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.   Conference Starts     Once registration was completed the day kicked off with Molly Holzschlag keynoting.  Unfortunately Molly suffered from an ear infection and wasn’t able to fly, so she gave a virtual keynote and a session later in the day.  I was working behind the scenes on various tasks so I was only able to drop in very briefly on the keynote and the rest of the morning sessions.  Throughout the day I tried to grab at least 1 or 2 pics of each presenter.  See my album below for the full set of pics.      For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses).  Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job on the day of the event.  I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch.  We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year and I’m glad everything worked out so we could use them again.     During the afternoon I was able to attend a few sessions and hear some great content from various speakers.  
It was also nice to just sit down and get off my feet for a bit.  After the last sessions the day concluded with a raffle.  There were a few logistical and technical issues that hampered our ability to smoothly conduct the raffle.  To those of you that agree the raffle wasn’t the smoothest experience, I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year.  We are also accepting feedback (both positive and negative) at the following link: click here.  If you don’t wish to use the Joind In site you can also email me directly and I’ll be sure to pass along the feedback.   Iron Man 2 Movie     Last but not least, what Stir Trek event would be complete without the feature movie?  This year’s movie was Iron Man 2.  The theater had some really cool props and promotions (see pic below) for the movie.  I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel’s plans for future movies to understand some of the plot elements that come up.  Also make sure you stay through to the end of the movie credits to see a sneak peek of something special - that’s all I’ll say. Conclusion     Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great.  Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz.  I am grateful for everything I learned while helping plan this event and look forward to being involved again next year.  For those interested, we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012.  These are tentative based on release dates that could shift as we get closer, but for now look solid.   Photos Pics on Facebook (includes tagging)     Stir Trek: Iron Man Edition photos on Facebook Pics on Live site (higher res)      View Full Album         -Frog Out

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built into Orchard 1.0, but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled when the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine, with this command:
    codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine
    Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme. 
    The code that we want to add is the following LayoutFilter.cs:
    using System.Linq;
    using System.Web.Mvc;
    using System.Web.Routing;
    using Orchard;
    using Orchard.Mvc.Filters;

    namespace CustomLayoutMachine.Filters {
        public class LayoutFilter : FilterProvider, IResultFilter {
            private readonly IWorkContextAccessor _wca;

            public LayoutFilter(IWorkContextAccessor wca) {
                _wca = wca;
            }

            public void OnResultExecuting(ResultExecutingContext filterContext) {
                var workContext = _wca.GetContext();
                var routeValues = filterContext.RouteData.Values;
                workContext.Layout.Metadata.Alternates.Add(
                    BuildShapeName(routeValues, "area"));
                workContext.Layout.Metadata.Alternates.Add(
                    BuildShapeName(routeValues, "area", "controller"));
                workContext.Layout.Metadata.Alternates.Add(
                    BuildShapeName(routeValues, "area", "controller", "action"));
            }

            public void OnResultExecuted(ResultExecutedContext filterContext) {
            }

            private static string BuildShapeName(
                RouteValueDictionary values, params string[] names) {

                return "Layout__" + string.Join("__",
                    names.Select(s => ((string)values[s] ?? "").Replace(".", "_")));
            }
        }
    }
    This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filter will then add the following shape names to the default “Layout”:
    Layout__Orchard_Blogs
    Layout__Orchard_Blogs__BlogPost
    Layout__Orchard_Blogs__BlogPost__Item
    Those template names get mapped into the following file names by the system (assuming the Razor view engine):
    Layout-Orchard_Blogs.cshtml
    Layout-Orchard_Blogs-BlogPost.cshtml
    Layout-Orchard_Blogs-BlogPost-Item.cshtml
    This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter; we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Seattle Update – Memorylane 2009, 2010, 2011

    - by pinaldave
    Today is the first day of SQLPASS 2012 and I will soon be posting my SQL Server 2012 experience over here. Today when I landed in Seattle, I got a feeling of nostalgia. I used to stay in the USA. I stayed here for more than 7 years – I studied here and I worked in the USA. I had lots of friends in Seattle when I used to stay in the USA. I always wanted to visit Seattle because it is THE place. I remember once I purchased a ticket to travel to Seattle through Priceline (well, it was the cheapest option and I was a student) but could not fly because of an interesting issue. I used to be a Teaching Assistant for an advanced course and the professor asked me to build a pop-quiz for the course. I unfortunately had to cancel the trip. Before I returned to India, I pretty much covered every city on my must-visit list, except one – Seattle. It was so interesting that I never made it to Seattle even though I wanted to visit while I was in the USA. After that one time I never got a chance to travel to Seattle. After a few years I also returned to India for good. Once on television I saw the movie “Sleepless in Seattle” playing and I immediately changed the channel, as it reminded me that I had never made it to Seattle. However, destiny has its own way of handling decisions. Since I returned to India, I have visited Seattle a total of 5 times, and this is my 6th visit to Seattle in less than 3 years. I was here for 3 previous SQLPASS events – 2009, 2010, and 2011 – as well as two Microsoft Most Valuable Professional Summits in 2009 and 2010. During these five trips I tried to catch up with all of my old friends, but I realize that time has its own way of doing things. Many moved out of Seattle and many were too busy to revive the old friendship, but there were a few who always make a point to meet me when I travel to the city. During the course of my visits I have made a few fantastic new friends – Rick Morelan (Joes 2 Pros) and Greg Lynch. Every time I meet them I feel that I have known them for years. I think the city of Seattle has played a very important part in these relationships – it is where I got these fantastic friends. SQLPASS is the event where I find all of my SQL friends and I look forward to this event all year. This year my goal is to meet as many new friends as I can. If you are going to be at SQLPASS – FIND ME. I want to have a photo with you. I want to remember each name, as I believe this is a very important part of our life – making new friends and sustaining new friendships. Here are a few of the places where you can find me:
    All Keynotes – Blogger’s Table
    Exhibition Booth – Joes 2 Pros Booth #117 – Do not forget to stop by at the booth – I might have goodies for you – limited editions.
    Book Signing Events – Check details in tomorrow’s blog or stop by Booth #117
    Evening Parties 6th Nov – Welcome Reception
    Evening Parties 7th Nov – Exhibitor Reception – Do not miss Booth #117
    Evening Parties 8th Nov – Community Appreciation Party
    Additionally at a few other locations – the Embarcadero Booth and coffee shops in the Convention Center
    If you are at SQLPASS – make sure that I find an opportunity to meet you at the event. Reserve a little time and let us have a coffee together. I will be continuously tweeting about my whereabouts on twitter, so let us stay connected on twitter. Here are my posts about my earlier experiences of attending SQLPASS. 
    SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log
    SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log
    SQLAuthority News – Story of Seattle – SQLPASS 2011 Event Log
    SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience
    SQLAuthority News – Notes of Excellent Experience at SQL PASS 2009 Summit, Seattle
    Let us meet! Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines. Why use DCOM? In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows, so we have to depend on third-party tools.  And then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built into Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes. Implementation Highlights Two open source libraries have been added to GlassFish:
    Jcifs – a SAMBA implementation in Java
    J-Interop – a Java implementation for making DCOM calls to remote Windows computers.
    Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh, which isn’t needed for DCOM.  validate-dcom is an all-new command. New DCOM Commands:
    create-node-dcom
    delete-node-dcom
    install-node-dcom
    list-nodes-dcom
    ping-node-dcom
    uninstall-node-dcom
    update-node-dcom
    validate-dcom
    setup-local-dcom (only available via Update Center for GlassFish 3.1.2)
    These commands are in place in the trunk (4.0) and in the branch (3.1.2). Windows Configuration Challenges There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration, etc.  Later versions of Windows err on the side of tightening security by default.  This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making required changes to the Windows Registry.  See the reference blogs for details. The validate-dcom Command validate-dcom is a crucial command.  It should be run before any other commands.  If it does not run successfully then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work. What validate-dcom does:
    Verifies that the remote host is not the local machine.
    Resolves the remote host name.
    Checks that the remote DCOM port is being listened on (135, 139).
    Checks that the remote host’s File Sharing is enabled (port 445).
    Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct.
    Runs the script that it copied on-the-fly to the remote host.
    Tips and Tricks The bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance etc.   All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools.  E.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary. 
How to debug the remote commands: Edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+). Add these options to the java call: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234  Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234.  It will be running start-local-instance and patiently waiting for you to attach.

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic video taping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test.  Do lab environments change user behavior patterns? Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools were struggling until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be. This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate. I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers). 
After we recruit participants in a traditional manner, we send them an invitation to participate through the use of a telephone conference call and Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse, while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • Windows Phone 7 Review &ndash; Part 1: LG Quantum

    - by Nikita Polyakov
    As many of my fellow geeks did, I ran out and got a retail Windows Phone 7 on the first day. Just had to have it :) I’d had the developer prototypes in my hands for the previous 3 months on and off, so I finally wanted to have one I call my own. I’ve rushed the Launch   I checked out both AT&T and T-Mobile offerings on day 1 and decided on a Samsung Focus. Great screen, super light and thin. If you don’t believe me that this phone can compete with the best of the non-Phone 7 offerings - get it in your hand to compare for yourself. I have to say that even though the on-screen keyboard on Windows Phone 7 is one of the best, the amount of text I write on my phone, and my expectations for how quickly a short reply should take, are very high. Also, the phone being so slick and sexy did not feel solid or confident in my hand or pocket. As the dust settled   Arrives the LG Quantum – now on AT&T and worldwide. First impressions: softer plastic, and the back battery cover is solid metal - the entire phone feels solid and indestructible! The phone fits just right in my hand; it’s almost too good. It does not feel like it will crack in your jeans. I feel safe holding it and don’t feel like it’d fly out of my hand if someone were to bump into me while walking. I dropped (and had thrown) the Focus a few times by accident, as its weight is negligible. I won’t even dream of lying: the first day, adjusting to a 3.5" LCD screen from the Samsung’s blisteringly bright and poppy 4" AMOLED was hard. But the colors and sharpness are still very good. I find it almost easier on the eyes, actually, for day-to-day use.  I had a chance to lay the phone down in a line with the prototypes and final versions of other phones that had LCD screens – LG makes HTC look like a budget LCD compared to a high-end LCD in the home theatre department. I am consistently complimented by friends that have the HD7 or Surround on how much better my screen looks. The screen just looks like the most color-correct phone out of the line-up. Even next to the Samsung it makes it look oversaturated, though it can’t match the true blacks, compensating instead with true whites.   Day to Day Usability   What I also noticed is a huge difference in how much less I am accidentally hitting the soft keys at the bottom. A real pain on the Focus, since holding it in an average-size hand already meant accidentally touching the controls at the bottom. The QWERTY keyboard on this phone is great. It’s like the mission for LG is “make it solid!”. The keyboard has a very durable feel.   LG has a secret wild card though: the DLNA support. If you haven’t seen an ad for it, you should. Imagine this – playing a song from your phone straight to your network-connected A/V receiver. Done. Pictures to TV. Done. Video. Done. DLNA works with components that advertise it, as well as Windows 7, XBOX 360 and other consoles.  I will write an extensive review of that experience in the near future. LG exclusive apps – from a panorama photo taker to a voice-to-text translator and even a look-n-type app that works like a backup inverse camera, there is quite a bit here that won’t be found on the other phones. I’ll review those in more detail in another segment. Conclusion So for a quick comparison: if you want a phone that is super thin, light and is the core reference for Windows Phone 7 – the Samsung Focus it is. If you want a great phone with a solid, secure feel, a real keyboard and media features - the hands-down winner is the LG Quantum.   You can pick up the LG Quantum at AT&T in the US and worldwide as the LG Optimus 7Q.   
Final thought: I have not had a smartphone that I felt was a reliable, trusty primary communication device since the Samsung BlackJack II; this time the LG got the crown.   [ Disclosure: The phone was provided to me free of charge. That has been the case for all of my phones for years, nothing new - I get them all. ]

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are a fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well, i.e.: we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always have only the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e.: changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6. Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib then you should set this parameter to false. 
This is a dynamic parameter. Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e.: you can change it on the fly. Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures then we should try to pack the 16K uncompressed version of the page less densely, i.e.: we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well; that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:
    innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. The value 0 has the special meaning of disabling the padding.
    innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
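    Since all of the parameters above are dynamic, they can be changed on a running server with SET GLOBAL rather than in the configuration file. As a rough illustration only (not from the original article), the C# sketch below applies such settings from a client using MySQL Connector/NET; the connection string and the chosen values are placeholders, and SET GLOBAL requires a suitably privileged account.
    using System;
    using MySql.Data.MySqlClient;

    class CompressionTuning
    {
        static void Main()
        {
            // Placeholder connection string - replace with your own server details.
            const string connectionString = "server=localhost;user=root;password=secret";

            using (var connection = new MySqlConnection(connectionString))
            {
                connection.Open();

                // Each of these server variables is dynamic, so no restart is needed.
                string[] statements =
                {
                    "SET GLOBAL innodb_log_compressed_pages = OFF",            // skip logging full compressed page images
                    "SET GLOBAL innodb_compression_level = 4",                 // trade compression ratio for less CPU
                    "SET GLOBAL innodb_compression_failure_threshold_pct = 10" // tolerate more failures before padding kicks in
                };

                foreach (string sql in statements)
                {
                    using (var command = new MySqlCommand(sql, connection))
                    {
                        command.ExecuteNonQuery();
                        Console.WriteLine("Executed: " + sql);
                    }
                }
            }
        }
    }
    The same statements can of course be run from any MySQL client; the point is simply that these knobs can be adjusted on a live server while you watch the compression statistics.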

    Read the article

  • Drawing on a webpage – HTML5 - IE9

    - by nmarun
    So I upgraded to IE9 and continued exploring HTML5. Now there’s this ‘thing’ called Canvas in HTML5 with which you can do some cool stuff. Alright, what IS this Canvas thing anyway? The Web Hypertext Application Technology Working Group says this: “The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly.” The Canvas element has only two attributes – width and height – and when not specified they take up the default values of 300 and 150 respectively. Below is what my HTML file looks like:
     1: <!DOCTYPE html>
     2: <html lang="en-US">
     3: <head>
     4:     <script type="text/javascript" src="CustomScript.js"></script>
     5:     <script src="jquery-1.4.4.js" type="text/javascript"></script>
     6: 
     7:     <title>Draw on a webpage</title>
     8: </head>
     9: <body>
    10:     <canvas id="canvas" width="500" height="500"></canvas>
    11:     <br />
    12:     <input type="submit" id="submit" value="Clear" />
    13:     <h4 id="currentPosition">
    14:         0, 0
    15:     </h4>
    16:     <div id="mousedownCoords"></div>
    17: </body>
    18: </html>
    In case you’re wondering, this is not an MVC or any other kind of web application. This is plain ol’ HTML even though I’m writing all this in VS 2010. You see, this is a very simple, ‘gimmicks-free’ html page. I have declared a Canvas element on line 10 and a button on line 12 to clear the drawing board. I’m using jQuery / JavaScript to show the current position of the mouse on the screen. This will get updated in the ‘currentPosition’ <h4> tag and I’m using the ‘mousedownCoords’ div to write all the places where the mouse was clicked. This is what my page renders as: The rectangle with a background is our canvas. The coloring is due to some javascript (which we’ll see in a moment). Now let’s get to our CustomScript.js file.
     1: jQuery(document).ready(function () {
     2:     var isFirstClick = true;
     3:     var canvas = document.getElementById("canvas");
     4:     // getContext: Returns an object that exposes an API for drawing on the canvas
     5:     var canvasContext = canvas.getContext("2d");
     6:     fillBackground();
     7: 
     8:     $("#submit").click(function () {
     9:         clearCanvas();
    10:         fillBackground();
    11:     });
    12: 
    13:     $(document).mousemove(function (e) {
    14:         $('#currentPosition').html(e.pageX + ', ' + e.pageY);
    15:     });
    16:     $(document).mouseup(function (e) {
    17:         // on the first click
    18:         // set the moveTo
    19:         if (isFirstClick == true) {
    20:             canvasContext.beginPath();
    21:             canvasContext.moveTo(e.pageX - 7, e.pageY - 7);
    22:             isFirstClick = false;
    23:         }
    24:         else {
    25:             // on subsequent clicks, draw a line
    26:             canvasContext.lineTo(e.pageX - 7, e.pageY - 7);
    27:             canvasContext.stroke();
    28:         }
    29: 
    30:         $('#mousedownCoords').text($('#mousedownCoords').text() + '(' + e.pageX + ',' + e.pageY + ')');
    31:     });
    32: 
    33:     function fillBackground() {
    34:         canvasContext.fillStyle = '#a1b1c3';
    35:         canvasContext.fillRect(0, 0, 500, 500);
    36:         canvasContext.fill();
    37:     }
    38: 
    39:     function clearCanvas() {
    40:         // wipe-out the canvas
    41:         canvas.width = canvas.width;
    42:         // set the isFirstClick to true
    43:         // so the next shape can begin
    44:         isFirstClick = true;
    45:         // clear the text
    46:         $('#mousedownCoords').text('');
    47:     }
    48: })
    The script only looks long and complicated, but it is not. I’ll go over the main steps. Get a ‘hold’ of your canvas object and retrieve the ‘2d’ context out of it. On the mousemove event, write the current x and y coordinates to the ‘currentPosition’ element. On the mouseup event, check if this is the first time the user has clicked on the canvas. 
The coloring of the canvas is done in the fillBackground() function. We first need to start a new path. This is done by calling the beginPath() function on our context. The moveTo() function sets the starting point of our path. The lineTo() function sets the end point of the line to be drawn. The stroke() function is the one that actually draws the line on our canvas. So if you want to play with the demo, here’s how you do it. First click on the canvas (nothing visible happens on the canvas). The second click draws a line from the first click to the current coordinates and so on and so forth. Click on the ‘Clear’ button, to reset the canvas and to give your creativity a clean slate. Here’s a sample output: Happy drawing! Verdict: HTML5 and IE9 – I think we’re on to something big and great here!

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the safest and most efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes a long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don’t do this very often – For small scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed CD in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time. 
They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report and that we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being a deterministic algorithm - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
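    To make the record-and-replay idea concrete, here is a minimal sketch in C# (this is not the actual Schema Compare code; the SnapshotWriter and SnapshotReader names are hypothetical) of writing the rows an IDataReader returns to a binary file and reading them back later without any connection to the original server. The real tool presents the replayed rows through an IDataReader wrapper and honours the dependency WHERE clauses described above; this sketch just yields plain object arrays to stay short.
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.IO;

    static class SnapshotWriter
    {
        // Record every row of a query result as tagged strings and numbers.
        public static void Record(IDataReader reader, BinaryWriter output)
        {
            output.Write(reader.FieldCount);
            while (reader.Read())
            {
                output.Write(true); // marker: another row follows
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    if (reader.IsDBNull(i))
                    {
                        output.Write((byte)0);                  // null column
                    }
                    else if (reader.GetFieldType(i) == typeof(decimal))
                    {
                        output.Write((byte)1);                  // numeric column
                        output.Write(reader.GetDecimal(i));
                    }
                    else
                    {
                        output.Write((byte)2);                  // everything else as text
                        output.Write(Convert.ToString(reader.GetValue(i)) ?? "");
                    }
                }
            }
            output.Write(false); // marker: end of result set
        }
    }

    static class SnapshotReader
    {
        // Replay the recorded rows with no database connection at all.
        public static IEnumerable<object[]> Replay(BinaryReader input)
        {
            int fieldCount = input.ReadInt32();
            while (input.ReadBoolean())
            {
                var row = new object[fieldCount];
                for (int i = 0; i < fieldCount; i++)
                {
                    byte tag = input.ReadByte();
                    if (tag == 0) row[i] = DBNull.Value;
                    else if (tag == 1) row[i] = input.ReadDecimal();
                    else row[i] = input.ReadString();
                }
                yield return row;
            }
        }
    }
    Because the recorded file holds only the strings and numbers returned by the data dictionary queries, it stays small and carries no table data, which is why snapshots are both quick to replay in tests and relatively safe for customers to send.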

    Read the article

  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient through these online channels is becoming critical. These users could be customers, partners or employees, and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, or mobile devices, or now through various social mediums.  Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.  The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline.  The Direct channel (including sales reps, email and mail) is an important point of contact, as is the Contact Center.  Contact Centers rely on the phone as a means of interacting with customers, and now, more than ever, the Web as well.  However, the online organization is often managed separately from the Contact Center organization within a business. In-store is an important channel for retailers, offering Point-of-Service for human interactions, and kiosks, which enable self-service. Kiosks are a Web-enabled touch point, but in-store kiosks are often managed by the head of retail operations rather than the online organization.  And of course, the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites -- has risen to paramount importance in recent years in the customer experience. Historically all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed.  Their interactions online are not known and respected in their dealings in-store.  Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you’ve fallen victim to this. Your experience with the company call center is different than the experience in-store. Your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point. But the customer isn’t the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try and engage users, whether it’s customers, employees or partners. Some have adopted different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), along with things like tagging, file sharing and content management, or composite applications for self-service applications and activity streams. Basically, there are so many different tools & technologies that each address different aspects of user engagement. Now, one of the challenges with this is that if we look at each individual tool, just implementing, for example, a file sharing and basic collaboration solution may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution to engage with customers and partners, or it may not fit with IT standards such as integrating with their single sign-on tools or their corporate website. 
Often, the scenario is that businesses have to acquire multiple pieces and parts, as well as build custom applications, to meet their needs, leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups. As we’ve been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size-fits-all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they’ve provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today’s business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long-term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on the fly, at the speed of the marketplace, while making iterative improvements.  These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more. So then, how do you solve these challenges and meet the growing demands of your users? You need a solution that unifies every customer interaction across all channels; personalizes the products and content that interest the customer, tailored to the device; delivers targeted promotions to the right customer; engages employees and improves their productivity; provides self-service access to applications; and includes embedded, in-context social capabilities. So how then do you achieve this level of online engagement, deliver a complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this Thursday's webcast, Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you can transform your portal experience and optimize online engagement, making your portals more interactive and more engaging across multiple channels. Register today!

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). Overview of Dynamic Reconfiguration: Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs from domains based on load. How it works (in broad, general terms): Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris can then start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time. primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m primary # ldm set-core 5 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.2% 6d 22h 29m ldom1 active -n---- 5000 40 8G 0.1% 6h 59m primary # ldm set-core 2 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM. primary # ldm set-mem 24g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.1% 6d 22h 36m ldom1 active -n---- 5000 16 24G 0.2% 7h 6m primary # ldm set-mem 8g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.7% 6d 22h 37m ldom1 active -n---- 5000 16 8G 0.3% 7h 7m What if the device is in use? (This is the anecdote that inspired this blog post.) If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. 
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed. primary # ldm rm-vnet vnet19 ldom1 Guest LDom returned the following reason for failing the operation: Resource Information ---------------------------------------------------------- ----------------------- /devices/virtual-devices@100/channel-devices@200/network@1 Network interface net1 VIO operation failed because device is being used in LDom ldom1 Failed to remove VNET instance That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1, so: ldom1 # ifconfig net1 down unplumb Now I can dispose of it, and even the virtual switch I had created for it: primary # ldm rm-vnet vnet19 ldom1 primary # ldm rm-vsw primary-vsw9 If I had to take away the device disruptively, I could have used ldm rm-vnet -f, but that could disrupt whoever was using it. It's better if that can be avoided. Summary: Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris work cooperatively to coordinate resource allocation and de-allocation in a safe and effective way. As a best practice, use dynamic reconfiguration to make the best use of your system's resources.
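    As a rough illustration of the workflow above, the resizing steps can be scripted from the control domain. The Python sketch below only wraps the ldm subcommands shown in this post (list, set-core, set-mem); the domain name, core counts and error handling are assumptions added for the example, not part of the original article.

        #!/usr/bin/env python3
        # Minimal sketch: drive the dynamic reconfiguration commands shown above.
        # Assumptions: runs as root in the control domain; the guest is named "ldom1".
        import subprocess
        import sys

        def ldm(*args):
            # Run one ldm subcommand; raise with its stderr (for example the
            # "device is being used in LDom" message) if it fails.
            result = subprocess.run(["ldm", *args], capture_output=True, text=True)
            if result.returncode != 0:
                raise RuntimeError(result.stderr.strip())
            return result.stdout

        def resize(domain, cores=None, mem=None):
            # Apply CPU and/or memory changes, then show the new layout.
            if cores is not None:
                ldm("set-core", str(cores), domain)
            if mem is not None:
                ldm("set-mem", mem, domain)      # e.g. "24g" or "8g"
            print(ldm("list", domain))

        if __name__ == "__main__":
            try:
                resize("ldom1", cores=5)             # grow, as in the example output
                resize("ldom1", cores=2, mem="8g")   # shrink back down
            except RuntimeError as err:
                sys.exit("ldm reported: " + str(err))

    The same wrapper pattern extends naturally to the rm-vnet and rm-vsw steps, where catching the error output matters most.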

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting “7 Years of YouTube Scalability Lessons in 30 min” based on a presentation from Mike Solomon, one of the original engineers at YouTube: …. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they’ve stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn’t mean YouTube doesn’t do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world’s largest websites? Read on and see... Stats: 4 billion views a day. 60 hours of video uploaded every minute. 350+ million YouTube-enabled devices. Revenue doubled in 2010. The number of videos has gone up 9 orders of magnitude while the number of developers has only gone up two orders of magnitude. 1 million lines of Python code. Stack: Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code. Apache - when you think you need to get rid of it, you don’t. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache. Linux - the benefit of Linux is there’s always a way to get in and see how your system is behaving. No matter how bad your app is behaving, you can take a look at it with Linux tools like strace and tcpdump. MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it’s used as a relational database, sometimes as a blob store. It’s about tuning and making choices about how you organize your data. Vitess - a new project released by YouTube, written in Go; it’s a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It’s RPC based. Zookeeper - a distributed lock server. It’s used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual. Wiseguy - a CGI servlet container. Spitfire - a templating system. 
It has an abstract syntax tree that lets them do transformations to make things go faster. Serialization formats - no matter which one you use, they are all expensive. Measure. Don’t use pickle. Not a good choice. They found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download. ...Continues. Read the blog. Watch the video.
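    The "measure" advice about serialization is easy to act on. The snippet below is a generic Python timing harness, not YouTube's code; the record layout and the formats compared (pickle versus json) are assumptions chosen purely for illustration.

        # Rough harness for measuring serialization cost; adjust the record and run count to taste.
        import json
        import pickle
        import timeit

        record = {"video_id": "abc123", "views": 4000000000,
                  "tags": ["music", "cats"] * 10}

        def bench(name, dumps, loads, runs=100000):
            encoded = dumps(record)
            dump_t = timeit.timeit(lambda: dumps(record), number=runs)
            load_t = timeit.timeit(lambda: loads(encoded), number=runs)
            print("%-8s size=%4dB dump=%.3fs load=%.3fs" % (name, len(encoded), dump_t, load_t))

        bench("pickle", pickle.dumps, pickle.loads)
        bench("json", lambda o: json.dumps(o).encode(), lambda b: json.loads(b.decode()))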

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I’m talking about “feelings” and not about advanced technical points proven in a scientific and objective way. I still haven’t had a chance to play with an MS Surface tablet (I would love to, of course) and so my ideas just came from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with “Independent” (“Independent Experts. Real World Answers.”) and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an “MS fanboy”; I don’t care. I hope that people can appreciate my opinion, even if it doesn’t match theirs. The “Surface” brand can be confusing for techies who knew the “original” Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people… so I can understand this recycling of an existing name. So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new models) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and the rise of tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish a laptop among the huge mass of anonymous PCs on display… only Macs shine out there…). Having to compete with MS “official” hardware will force OEMs to develop better products and bring back some real competition in a market that was ruled only by price (the lower the better, even when that means low quality), with no innovative features at all (when was the last time a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial move for its future, trying to get back into the innovation race against Apple and Google. MS can’t afford to fail this time. I saw the new devices (the Windows RT and Pro models) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using “HD” and “full HD” to define display resolution instead of using the real figures, and reviving the “ClearType” brand (now dead on Win8 as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn’t you count those damned pixels?), seems to imply that MS was caught by surprise by Apple’s recent “Retina” displays, which brought very high-definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and an i5 for the x86 one) or the expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets). 
Also, nothing about the price, and this will be another critical point because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this and don’t like the Apple model, where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you’ll be able to use external media, and an SD card could be used to store files that don’t require super-fast SSD-like access times, I hope. To be honest I really don’t like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and the lack of desktop support on ARM (even if the support is there and has been used to port Office). It’s a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8’s new UI or the limitations of the new app model (if you aren’t connected, you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent converting .NET IL into real machine code in the past look like a huge waste of time… as soon as a new processor architecture is supported by Windows you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven’t changed this situation. The new Metro UI got mixed reviews. On my side I should say that it is very pleasant to use on a touch screen. I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with a mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines etc. this kind of UI can be a great selling point. I don’t need a new tablet (to be honest I’m pretty happy with my wife’s iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what’s hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following: password length repeated characters patterns (logical) different character spaces (LC, UC, Numeric, Special, Extended) dictionary attacks It does NOT cover the following, and SHOULD cover it WELL (though not perfectly): ordering (passwords can be strictly ordered by output of this algorithm) patterns (spatial) Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm: // the password to test password = ? length = length(password) // unique character counts from password (duplicates discarded) uqlca = number of unique lowercase alphabetic characters in password uquca = number of uppercase alphabetic characters uqd = number of unique digits uqsp = number of unique special characters (anything with a key on the keyboard) uqxc = number of unique special special characters (alt codes, extended-ascii stuff) // algorithm parameters, total sizes of alphabet spaces Nlca = total possible number of lowercase letters (26) Nuca = total uppercase letters (26) Nd = total digits (10) Nsp = total special characters (32 or something) Nxc = total extended ascii characters that dont fit into other categorys (idk, 50?) // algorithm parameters, pw strength growth rates as percentages (per character) flca = entropy growth factor for lowercase letters (.25 is probably a good value) fuca = EGF for uppercase letters (.4 is probably good) fd = EGF for digits (.4 is probably good) fsp = EGF for special chars (.5 is probably good) fxc = EGF for extended ascii chars (.75 is probably good) // repetition factors. few unique letters == low factor, many unique == high rflca = (1 - (1 - flca) ^ uqlca) rfuca = (1 - (1 - fuca) ^ uquca) rfd = (1 - (1 - fd ) ^ uqd ) rfsp = (1 - (1 - fsp ) ^ uqsp ) rfxc = (1 - (1 - fxc ) ^ uqxc ) // digit strengths strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length entropybits = log_base_2(strength) A few inputs and their desired and actual entropy_bits outputs: INPUT DESIRED ACTUAL aaa very pathetic 8.1 aaaaaaaaa pathetic 24.7 abcdefghi weak 31.2 H0ley$Mol3y_ strong 72.2 s^fU¬5ü;y34G< wtf 88.9 [a^36]* pathetic 97.2 [a^20]A[a^15]* strong 146.8 xkcd1** medium 79.3 xkcd2** wtf 160.5 * these 2 passwords use shortened notation, where [a^N] expands to N a's. ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple" The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, they do not account for the fact that having a password of 36 a's is not a good idea, it's easily broken with a weak password cracker (and anyone who watches you type it will see it) and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm? 
Addendum 1 Dictionary attacks and pattern based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be, for each non-perfect character match, give the word a bonus bit. I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters they symbolically represented. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code. Modified Example: Password: 1234kitty$$$$$herpderp Tokenized: 1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p Words Filtered: 1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002 Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002 Breakdown: 3 small, unique words and 2 patterns Entropy: about 45 bits, as per modified algorithm Password: correcthorsebatterystaple Tokenized: c o r r e c t h o r s e b a t t e r y s t a p l e Words Filtered: @W6783 @W7923 @W1535 @W2285 Breakdown: 4 small, unique words and no patterns Entropy: 43 bits, as per modified algorithm The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like: entropy(b) * l * (o + 1) // o will be either zero or one The modified algorithm would find flaws with and reduce the strength of each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
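    For readers who want to experiment with the base algorithm (without the word and pattern tokenization sketched in the addendum), a rough Python translation follows. The growth factors and alphabet sizes are the "probably good" guesses from the pseudocode, so the figures it prints will not exactly reproduce the table above, and the character-class tests are only approximations.

        # Approximate Python version of the pseudocode above (base algorithm only).
        import math
        import string

        # (alphabet size N, entropy growth factor f, membership test per character)
        CLASSES = [
            (26, 0.25, str.islower),                        # lowercase letters
            (26, 0.40, str.isupper),                        # uppercase letters
            (10, 0.40, str.isdigit),                        # digits
            (32, 0.50, lambda c: c in string.punctuation),  # "special" keyboard characters
            (50, 0.75, lambda c: ord(c) > 127),             # rough stand-in for extended characters
        ]

        def entropy_bits(password):
            strength_base = 0.0
            for n, f, belongs in CLASSES:
                unique = len({c for c in password if belongs(c)})
                rf = 1 - (1 - f) ** unique          # repetition factor for this class
                strength_base += rf * n
            # strength = strength_base ** length; take log2 directly to avoid huge numbers
            return len(password) * math.log2(strength_base) if strength_base > 0 else 0.0

        for pw in ["aaa", "abcdefghi", "H0ley$Mol3y_", "Tr0ub4dor&3",
                   "correct horse battery staple"]:
            print("%-30s %6.1f bits" % (pw, entropy_bits(pw)))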

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with latest 6.3.0r05 firmware routing to the internet from a subinterface I created on bgroup0 setup as vlan2 (bgroup0.1 on "wifi" zone). When connected on the default vlan it gets on the internet just fine. When I switch to vlan2 I'm unable to get to the internet. I am able to get the correct ip address (10.150.0.0/24) from dhcp, able to get to the juniper management page, etc but nothing past the firewall, can't ping 4.2.2.2 or the internet gateway. Even setting up logging on the wifi-to-untrust policy and it does shows the attempts (it's it's timeouts). 172.31.16.0/24 is the untrusted lan, it's already nat'ed but works fine for testing. Can ping this ip from the default vlan but not from vlan2 192.168.1.0/24 is the trusted main lan 10.150.0.0/24 is the wifi isolated lan on vlan2 The idea is to setup an AP with lan and guest access (AP supports multiple ssid's on different vlans). I know I can setup the juniper to use different ports for the wifi lan and use their procurve switch to do the vlan separation, but I never used vlan'ing on a Juniper firewall and I would like to try it out this way. Here is the complete config file: unset key protection enable set clock timezone -5 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "xxxxxxxxxxxxxxxx" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone id 100 "Wifi" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst unset zone "Wifi" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Untrust" set interface "bgroup0" zone "Trust" set interface "bgroup0.1" tag 2 zone "Wifi" set interface "bgroup1" zone "DMZ" set interface bgroup0 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup0 port ethernet0/5 set interface bgroup0 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 172.31.16.243/24 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup0.1 ip 10.150.0.1/24 set interface bgroup0.1 nat set interface bgroup0.1 mtu 1500 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup0.1 ip manageable set interface ethernet0/0 manage ping set interface ethernet0/1 manage ping 
set interface bgroup0.1 manage ping set interface bgroup0.1 manage telnet set interface bgroup0.1 manage web unset interface bgroup1 manage ping set interface bgroup0 dhcp server service set interface bgroup0.1 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup0.1 dhcp server enable set interface bgroup0 dhcp server option gateway 192.168.1.1 set interface bgroup0 dhcp server option netmask 255.255.255.0 set interface bgroup0 dhcp server option dns1 8.8.8.8 set interface bgroup0.1 dhcp server option lease 1440 set interface bgroup0.1 dhcp server option gateway 10.150.0.1 set interface bgroup0.1 dhcp server option netmask 255.255.255.0 set interface bgroup0.1 dhcp server option dns1 8.8.8.8 set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126 set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup0.1 dhcp server config next-server-ip set interface "serial0/0" modem settings "USR" init "AT&F" set interface "serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow no-tcp-seq-check set flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log set policy id 2 exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set snmpv3 local-engine id "0162122009006149" set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1 exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit

    Read the article

  • OS X 10.9 Mavericks Kernel Panics out of the box

    - by Kevin
    OS X Kernel panics after a fresh install of OS X 10.9 on a 17" Macbook Pro. Anonymous UUID: D002464D-24B7-C2B5-3D83-1C0B02873B29 Wed Oct 30 11:08:17 2013 panic(cpu 1 caller 0xffffff8006edc19e): Kernel trap at 0xffffff7f88e0a96c, type 14=page fault, registers: CR0: 0x000000008001003b, CR2: 0xffffef7f88e309b8, CR3: 0x0000000009c2d000, CR4: 0x0000000000000660 RAX: 0x0fffffd0c7b30000, RBX: 0xffffef7f88e309b0, RCX: 0x0000000000000001, RDX: 0x000002f384d06471 RSP: 0xffffff80eff03d80, RBP: 0xffffff80eff03e70, RSI: 0x0000031384cfb168, RDI: 0xffffff80e8f05148 R8: 0xffffff801b0f8670, R9: 0x0000000000000005, R10: 0x0000000000004a24, R11: 0x0000000000000202 R12: 0xffffff801938b800, R13: 0x0000000000000005, R14: 0xffffff80e8f05148, R15: 0xffffff7f88e2ee20 RFL: 0x0000000000010006, RIP: 0xffffff7f88e0a96c, CS: 0x0000000000000008, SS: 0x0000000000000010 Fault CR2: 0xffffef7f88e309b8, Error code: 0x0000000000000002, Fault CPU: 0x1 Backtrace (CPU 1), Frame : Return Address 0xffffff80eff03a10 : 0xffffff8006e22f69 0xffffff80eff03a90 : 0xffffff8006edc19e 0xffffff80eff03c60 : 0xffffff8006ef3606 0xffffff80eff03c80 : 0xffffff7f88e0a96c 0xffffff80eff03e70 : 0xffffff7f88e09b89 0xffffff80eff03f30 : 0xffffff8006edda5c 0xffffff80eff03f50 : 0xffffff8006e3757a 0xffffff80eff03f90 : 0xffffff8006e378c8 0xffffff80eff03fb0 : 0xffffff8006ed6aa7 Kernel Extensions in backtrace: com.apple.driver.AppleIntelCPUPowerManagement(216.0)[A6EE4D7B-228E-3A3C-95BA-10ED6F331236]@0xffffff7f88e07000->0xffffff7f88e31fff BSD process name corresponding to current thread: kernel_task Mac OS version: 13A603 Kernel version: Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE_X86_64 Kernel UUID: 1D9369E3-D0A5-31B6-8D16-BFFBBB390393 Kernel slide: 0x0000000006c00000 Kernel text base: 0xffffff8006e00000 System model name: MacBookPro5,2 (Mac-F2268EC8) System uptime in nanoseconds: 4634353513870 last loaded kext at 39203945245: com.viscosityvpn.Viscosity.tun 1.0 (addr 0xffffff7f89200000, size 32768) last unloaded kext at 147930318702: com.apple.driver.AppleFileSystemDriver 3.0.1 (addr 0xffffff7f89110000, size 8192) loaded kexts: com.viscosityvpn.Viscosity.tun 1.0 com.viscosityvpn.Viscosity.tap 1.0 com.apple.driver.AudioAUUC 1.60 com.apple.driver.AppleHWSensor 1.9.5d0 com.apple.filesystems.autofs 3.0 com.apple.iokit.IOBluetoothSerialManager 4.2.0f6 com.apple.driver.AGPM 100.14.11 com.apple.driver.AppleMikeyHIDDriver 124 com.apple.driver.AppleHDA 2.5.2fc2 com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.GeForceTesla 8.1.8 com.apple.driver.AppleMikeyDriver 2.5.2fc2 com.apple.iokit.IOUserEthernet 1.0.0d1 com.apple.driver.AppleUpstreamUserClient 3.5.13 com.apple.driver.AppleMuxControl 3.4.12 com.apple.driver.ACPI_SMC_PlatformPlugin 1.0.0 com.apple.driver.AppleSMCLMU 2.0.4d1 com.apple.Dont_Steal_Mac_OS_X 7.0.0 com.apple.driver.AppleHWAccess 1 com.apple.driver.AppleMCCSControl 1.1.12 com.apple.driver.AppleLPC 1.7.0 com.apple.driver.SMCMotionSensor 3.0.4d1 com.apple.driver.AppleUSBTCButtons 240.2 com.apple.driver.AppleUSBTCKeyboard 240.2 com.apple.driver.AppleIRController 325.7 com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1 com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1 com.apple.BootCache 35 com.apple.iokit.SCSITaskUserClient 3.6.0 com.apple.driver.XsanFilter 404 com.apple.iokit.IOAHCIBlockStorage 2.4.0 com.apple.driver.AppleUSBHub 650.4.4 com.apple.driver.AppleUSBEHCI 650.4.1 com.apple.driver.AppleFWOHCI 4.9.9 com.apple.driver.AirPort.Brcm4331 700.20.22 
com.apple.driver.AppleAHCIPort 2.9.5 com.apple.nvenet 2.0.21 com.apple.driver.AppleUSBOHCI 650.4.1 com.apple.driver.AppleSmartBatteryManager 161.0.0 com.apple.driver.AppleRTC 2.0 com.apple.driver.AppleHPET 1.8 com.apple.driver.AppleACPIButtons 2.0 com.apple.driver.AppleSMBIOS 2.0 com.apple.driver.AppleACPIEC 2.0 com.apple.driver.AppleAPIC 1.7 com.apple.driver.AppleIntelCPUPowerManagementClient 216.0.0 com.apple.nke.applicationfirewall 153 com.apple.security.quarantine 3 com.apple.driver.AppleIntelCPUPowerManagement 216.0.0 com.apple.kext.triggers 1.0 com.apple.iokit.IOSerialFamily 10.0.7 com.apple.AppleGraphicsDeviceControl 3.4.12 com.apple.driver.DspFuncLib 2.5.2fc2 com.apple.vecLib.kext 1.0.0 com.apple.iokit.IOAudioFamily 1.9.4fc11 com.apple.kext.OSvKernDSPLib 1.14 com.apple.iokit.IOBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.iokit.IOSurface 91 com.apple.iokit.IOBluetoothFamily 4.2.0f6 com.apple.nvidia.classic.NVDANV50HalTesla 8.1.8 com.apple.driver.AppleSMBusPCI 1.0.12d1 com.apple.driver.AppleGraphicsControl 3.4.12 com.apple.driver.IOPlatformPluginLegacy 1.0.0 com.apple.driver.AppleBacklightExpert 1.0.4 com.apple.iokit.IOFireWireIP 2.2.5 com.apple.driver.AppleHDAController 2.5.2fc2 com.apple.iokit.IOHDAFamily 2.5.2fc2 com.apple.driver.AppleSMBusController 1.0.11d1 com.apple.nvidia.classic.NVDAResmanTesla 8.1.8 com.apple.driver.IOPlatformPluginFamily 5.5.1d27 com.apple.iokit.IONDRVSupport 2.3.6 com.apple.iokit.IOGraphicsFamily 2.3.6 com.apple.driver.AppleSMC 3.1.6d1 com.apple.driver.AppleUSBMultitouch 240.6 com.apple.iokit.IOUSBHIDDriver 650.4.4 com.apple.driver.AppleUSBMergeNub 650.4.0 com.apple.driver.AppleUSBComposite 650.4.0 com.apple.driver.CoreStorage 380 com.apple.iokit.IOSCSIMultimediaCommandsDevice 3.6.0 com.apple.iokit.IOBDStorageFamily 1.7 com.apple.iokit.IODVDStorageFamily 1.7.1 com.apple.iokit.IOCDStorageFamily 1.7.1 com.apple.iokit.IOAHCISerialATAPI 2.6.0 com.apple.iokit.IOSCSIArchitectureModelFamily 3.6.0 com.apple.iokit.IOUSBUserClient 650.4.4 com.apple.iokit.IOFireWireFamily 4.5.5 com.apple.iokit.IO80211Family 600.34 com.apple.iokit.IOAHCIFamily 2.6.0 com.apple.iokit.IONetworkingFamily 3.2 com.apple.iokit.IOUSBFamily 650.4.4 com.apple.driver.NVSMU 2.2.9 com.apple.driver.AppleEFINVRAM 2.0 com.apple.driver.AppleEFIRuntime 2.0 com.apple.iokit.IOHIDFamily 2.0.0 com.apple.iokit.IOSMBusFamily 1.1 com.apple.security.sandbox 278.10 com.apple.kext.AppleMatch 1.0.0d1 com.apple.security.TMSafetyNet 7 com.apple.driver.AppleKeyStore 2 com.apple.driver.DiskImages 371.1 com.apple.iokit.IOStorageFamily 1.9 com.apple.iokit.IOReportFamily 21 com.apple.driver.AppleFDEKeyStore 28.30 com.apple.driver.AppleACPIPlatform 2.0 com.apple.iokit.IOPCIFamily 2.8 com.apple.iokit.IOACPIFamily 1.4 com.apple.kec.pthread 1 com.apple.kec.corecrypto 1.0 System Profile: Model: MacBookPro5,2, BootROM MBP52.008E.B05, 2 processors, Intel Core 2 Duo, 2.8 GHz, 8 GB, SMC 1.42f4 Graphics: NVIDIA GeForce 9400M, NVIDIA GeForce 9400M, PCI, 256 MB Graphics: NVIDIA GeForce 9600M GT, NVIDIA GeForce 9600M GT, PCIe, 512 MB Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.106.98.100.22) Bluetooth: Version 4.2.0f6 12982, 3 services, 15 devices, 1 incoming serial ports Network Service: Wi-Fi, AirPort, en1 Serial ATA Device: Samsung SSD 840 Series, 120.03 GB Serial ATA Device: 
MATSHITADVD-R UJ-868 USB Device: Built-in iSight USB Device: BRCM2046 Hub USB Device: Bluetooth USB Host Controller USB Device: Apple Internal Keyboard / Trackpad USB Device: IR Receiver Thunderbolt Bus:

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the url in the error log. (also tried 1 thru 6 as I have backed up all directories after the corruption) I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM, MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log: InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100308 14:50:01 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 2 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 11 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 5 log sequence number 0 972973045 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 6 log sequence number 0 972984051 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 1577 log sequence number 0 972737368 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. InnoDB: Error: trying to access page number 4294965119 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. 
InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 100308 14:50:02InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: about forcing recovery. 100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting! 100308 14:50:02 [ERROR] Aborting 100308 14:50:02 [Note] mysqld-nt: Shutdown complete
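    For what it's worth, if one of the innodb_force_recovery levels does let mysqld stay up long enough, the path the linked manual page describes is to dump whatever is still readable and rebuild from that dump. Below is a rough Python sketch of that dump step; the output file name and the root credentials are assumptions, and this salvages data rather than repairing the tablespace.

        # Sketch: dump all databases from a server started with innodb_force_recovery.
        import subprocess
        import sys

        DUMP_FILE = "innodb_salvage.sql"   # assumption: written to the current directory

        def dump_all(user="root", password=None):
            cmd = ["mysqldump", "--all-databases", "--user", user]
            if password:
                cmd.append("--password=" + password)
            with open(DUMP_FILE, "w") as out:
                proc = subprocess.run(cmd, stdout=out, stderr=subprocess.PIPE, text=True)
            if proc.returncode != 0:
                # Usually means InnoDB aborted again mid-dump; a higher
                # innodb_force_recovery level (the manual allows up to 6) may get further.
                sys.exit("mysqldump failed: " + proc.stderr.strip())
            print("Dump written to " + DUMP_FILE + "; reload it into a freshly initialised instance.")

        if __name__ == "__main__":
            dump_all()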

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop ! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser ? Can I modify the following script* in order to keep the icons ? *copied from https://help.ubuntu.com/...: ## Originally written by aysiu from the Ubuntu Forums ## This is GPL'ed code ## So improve it and re-release it ## Define portion to make Thunar the default if that appears to be the appropriate action makethunardefault() { ## I went with --no-install-recommends because ## I didn't want to bring in a whole lot of junk, ## and Jaunty installs recommended packages by default. echo -e "\nMaking sure Thunar is installed\n" sudo apt-get update && sudo apt-get install thunar --no-install-recommends ## Does it make sense to change to the directory? ## Or should all the individual commands just reference the full path? echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nMaking backup directory\n" ## Does it make sense to create an entire backup directory? ## Should each file just be backed up in place? sudo mkdir nonautilusplease echo -e "\nModifying folder handler launcher\n" sudo cp nautilus-folder-handler.desktop nonautilusplease/ ## Here I'm using two separate sed commands ## Is there a way to string them together to have one ## sed command make two replacements in a single file? sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop echo -e "\nModifying browser launcher\n" sudo cp nautilus-browser.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop echo -e "\nModifying computer icon launcher\n" sudo cp nautilus-computer.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop echo -e "\nModifying home icon launcher\n" sudo cp nautilus-home.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop echo -e "\nModifying general Nautilus launcher\n" sudo cp nautilus.desktop nonautilusplease/ sudo sed -i -n 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop ## This last bit I'm not sure should be included ## See, the only thing that doesn't change to the ## new Thunar default is clicking the files on the desktop, ## because Nautilus is managing the desktop (so technically ## it's not launching a new process when you double-click ## an icon there). ## So this kills the desktop management of icons completely ## Making the desktop pretty useless... would it be better ## to keep Nautilus there instead of nothing? Or go so far ## as to have Xfce manage the desktop in Gnome? echo -e "\nChanging base Nautilus launcher\n" sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus echo -e "\nRemoving Nautilus as desktop manager\n" killall nautilus echo -e "\nThunar is now the default file manager. 
To return Nautilus to the default, run this script again.\n" } restorenautilusdefault() { echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nRestoring backup files\n" sudo cp nonautilusplease/nautilus-folder-handler.desktop . sudo cp nonautilusplease/nautilus-browser.desktop . sudo cp nonautilusplease/nautilus-computer.desktop . sudo cp nonautilusplease/nautilus-home.desktop . sudo cp nonautilusplease/nautilus.desktop . echo -e "\nRemoving backup folder\n" sudo rm -r nonautilusplease echo -e "\nRestoring Nautilus launcher\n" sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus echo -e "\nMaking Nautilus manage the desktop again\n" nautilus --no-default-window & ## The only change that isn't undone is the installation of Thunar ## Should Thunar be removed? Or just kept in? ## Don't want to load the script with too many questions? } ## Make sure that we exit if any commands do not complete successfully. ## Thanks to nanotube for this little snippet of code from the early ## versions of UbuntuZilla set -o errexit trap 'echo "Previous command did not complete successfully. Exiting."' ERR ## This is the main code ## Is it necessary to put an elseif in here? Or is ## redundant, since the directory pretty much ## either exists or it doesn't? ## Is there a better way to keep track of whether ## the script has been run before? if [[ -e /usr/share/applications/nonautilusplease ]]; then restorenautilusdefault else makethunardefault fi;

    Read the article

  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi, Sorry never been on this site before so I apologise if not the right section or even forum. I have users complaining of very slow internetn connectivity on site and have checked with our ISP who have said that the line is testing at 8mb. We have 2 x BT lines that have our ISP broadand on them. Both lines go into a Cisco 2600 series router that then has a PIX firewall off that. Connectivity is successful just gone really slow and unable to download anything. Config is below: version 12.3 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname ROUTER-ADSL-INTERNET ! logging buffered 16384 informational enable secret xxx enable password xxx ! username xxx username xxx clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00 aaa new-model ! ! aaa authentication login default local aaa authorization exec default local aaa session-id common ip subnet-zero no ip source-route ! ! ! ip audit notify log ip audit po max-events 100 no ip bootp server ip name-server 213.208.106.212 no mpls ldp logging neighbor-changes no ftp-server write-enable ! ! ! ! ! ! ! ! ! ! no voice hpi capture buffer no voice hpi capture destination ! ! ! ! ! ! ! ! interface ATM0/0 description 01270 111111 no ip address no atm ilmi-keepalive pvc 0/38 encapsulation aal5mux ppp dialer dialer pool-member 1 ! dsl operating-mode auto ! interface FastEthernet0/0 ip address 82.133.32.9 255.255.255.248 shutdown speed 100 full-duplex no cdp enable ! interface ATM0/1 description 01270 222222 no ip address no atm ilmi-keepalive pvc 0/38 encapsulation aal5mux ppp dialer dialer pool-member 1 ! dsl operating-mode auto ! interface FastEthernet0/1 ip address 217.146.115.49 255.255.255.240 duplex auto speed auto no cdp enable ! interface Dialer0 ip address 217.146.115.250 255.255.255.248 encapsulation ppp dialer pool 1 dialer-group 1 ppp authentication chap callin ppp chap hostname [email protected] ppp chap password 7 xxxxx ppp multilink ! ip classless ip route 0.0.0.0 0.0.0.0 Dialer0 ! no ip http server no ip http secure-server ! no logging trap access-list 10 permit 217.146.115.50 access-list 10 permit 82.133.32.10 access-list 10 deny any access-list 22 permit 217.146.115.50 access-list 22 permit 217.206.239.86 access-list 22 permit 82.133.32.10 access-list 22 deny any dialer-list 1 protocol ip permit no cdp run ! ! snmp-server community xxxxxx RO 10 snmp-server enable traps tty radius-server authorization permit missing Service-Type ! ! ! ! ! ! line con 0 exec-timeout 5 0 password 7 xxxxxx line aux 0 no exec line vty 0 4 access-class 22 in exec-timeout 5 0 password 7 xxxxxx transport input telnet ssh transport output none line vty 5 15 password 7 xxxxxx transport input telnet ssh ! ntp clock-period 17180095 ntp server 130.88.200.98 ! ! end Now my knowledge is very limited but ISP have said that while the lines are bonded each needs a seperate login as they've recently changed their L2TP router and that enforces the use of seperate logins - when the lines were configured we were given two logins. So, my question is what changes do I need to make to the config in order to get this working? it was ok before their change and I do have another login :- 01270 111111 - [email protected] 01270 222222 - [email protected] Apologies for the long email and thanks for taking the time to read it. Any more info I can provide please let me know. Thanks,

    Read the article

  • Postfix configuration - Using Virtualmin, but the server is bouncing back my mail.

    - by brodiebrodie
    I have no experience in setting up postfix, and thought virtualmin minght do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected] [email protected] or [email protected]) I get the following message returned This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server but the server fails to locally deliver the message to the correct user. (This is a guess, truthfully I have no idea what is happening). I have checked my virtual alias table and it seems to be set up correctly (I can post if this would be helpful). Can anyone give me a clue as to the next step? Thanks alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] etc.aliases file below I have not touched this file - myvirtualdomain is a replacement for my real domain name # Aliases in this file will NOT be expanded in 
the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with is with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
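    One way to see why Postfix answers "User unknown in virtual alias table" is to mimic the lookup it performs against the virtual map. The Python sketch below is a simplified diagnostic only: it reads the plain-text /etc/postfix/virtual source rather than the hashed .db file Postfix actually consults, it checks only the full-address and @domain key forms from the virtual(5) lookup order, and the default address it tests is a placeholder.

        # Simplified check: does an address resolve in /etc/postfix/virtual at all?
        import sys

        def load_map(path="/etc/postfix/virtual"):
            table = {}
            with open(path) as fh:
                for line in fh:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    key, *targets = line.split()
                    if targets:
                        table[key.lower()] = targets
            return table

        def resolve(address, table):
            address = address.lower()
            domain = address.split("@", 1)[-1]
            for key in (address, "@" + domain):   # simplified subset of the virtual(5) lookups
                if key in table:
                    return table[key]
            return None   # Postfix would bounce with "User unknown in virtual alias table"

        if __name__ == "__main__":
            table = load_map()
            addr = sys.argv[1] if len(sys.argv) > 1 else "info@myvirtualdomain.com"
            result = resolve(addr, table)
            print(result if result else "no entry matches " + addr)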

    Read the article

  • C++ custom exceptions: run-time performance and passing exceptions from C++ to C

    - by skyeagle
    I am writing a custom C++ exception class (so I can pass exceptions occurring in C++ to another language via a C API). My initial plan of attack was to proceed as follows:

        // C++
        typedef class myClass {
        public:
            myClass();
            ~myClass();
            void foo();                         // throws myException
            int foo(const int i, const bool b); // throws myException
        } *myClassPtr;

        // C API
        #ifdef __cplusplus
        extern "C" {
        #endif

        myClassPtr MyClass_New();
        void MyClass_Destroy(myClassPtr p);
        void MyClass_Foo(myClassPtr p);
        int MyClass_FooBar(myClassPtr p, int i, bool b);

        #ifdef __cplusplus
        }
        #endif

    I need a way to pass exceptions thrown in the C++ code to the C side. The information I want to pass to the C side is the following: (a) what, (b) where, and (c) a simple stack trace (just the sequence of error messages in the order they occurred; no debugging info, etc.). I want to modify my C API so that the API functions take a pointer to a struct ExceptionInfo, which will contain any exception info (if an exception occurred) before consuming the results of the invocation. This raises two questions.

    Question 1: The implementation of each of the C++ methods exposed in the C API needs to be enclosed in a try/catch statement. The performance implications of this seem quite serious, according to this article: "It is a mistake (with high runtime cost) to use C++ exception handling for events that occur frequently, or for events that are handled near the point of detection." At the same time, I remember reading somewhere in my C++ days that although exception handling is expensive, it only becomes expensive when an exception actually occurs. So which is correct? What should I do? Is there an alternative way that I can trap errors safely and pass the resulting error info to the C API? Or is this a minor consideration (the article, after all, is quite old, and hardware has improved a bit since then)?

    Question 2: I would like to modify the exception class given in that article so that it contains a simple stack trace, and I need some help doing that. Again, in order to make the exception class "lightweight", I think it's a good idea not to include any STL classes, like string or vector (good idea/bad idea?). That potentially leaves me with a fixed-length C string (char*) which will be stack allocated, so I could maybe just keep appending messages (delimited by a unique separator, up to the maximum length of the buffer)... It's been a while since I did any serious C++ coding, and I would be grateful for the help. BTW, this is what I have come up with so far (I am intentionally not deriving from std::exception, because of the performance reasons mentioned in the article, and I am instead throwing an integral exception based on an exception enumeration):

        class fast_exception {
        public:
            fast_exception(int what, char const* file = 0, int line = 0)
                : what_(what), line_(line), file_(file) { /* empty */ }

            int what() const { return what_; }
            int line() const { return line_; }
            char const* file() const { return file_; }

        private:
            int what_;
            int line_;
            char const* file_;
        };
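    Taking the two questions together, one common pattern is to keep the try/catch only at the extern "C" boundary and copy the exception's data into a plain C struct there; on mainstream compilers the cost of an enclosing try block is paid almost entirely when something is actually thrown. The sketch below is only an illustration built on the declarations above: the ExceptionInfo layout, its field names and buffer sizes, the extra err parameter and the -1 sentinel are made-up choices, not an established API.

        // Hedged sketch of a C-visible error record and a boundary wrapper.
        #include <cstdio>   // std::snprintf

        struct ExceptionInfo {
            int  code;        // 0 = no error; otherwise the fast_exception "what" code
            int  line;        // line where the exception was constructed
            char file[256];   // source file name, truncated to fit
            char trace[1024]; // simple, separator-delimited message trace
        };

        // Assumes the myClass/myClassPtr and fast_exception declarations shown above.
        extern "C" int MyClass_FooBar(myClassPtr p, int i, bool b, ExceptionInfo* err)
        {
            if (err) { err->code = 0; err->line = 0; err->file[0] = '\0'; err->trace[0] = '\0'; }
            try {
                return p->foo(i, b);            // the overload that may throw fast_exception
            }
            catch (fast_exception const& ex) {
                if (err) {
                    err->code = ex.what();
                    err->line = ex.line();
                    std::snprintf(err->file,  sizeof err->file,  "%s", ex.file() ? ex.file() : "");
                    std::snprintf(err->trace, sizeof err->trace, "error %d at %s:%d",
                                  ex.what(), ex.file() ? ex.file() : "?", ex.line());
                }
                return -1;                      // sentinel; the caller inspects err->code
            }
            catch (...) {
                if (err) err->code = -1;        // some other C++ exception leaked this far
                return -1;
            }
        }

    The same fixed-size-buffer idea extends to the stack trace: each catch/rethrow site can append its own message to trace with snprintf, stopping silently once the buffer is full, which keeps the error path free of allocation.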

    Read the article

  • Using DispatcherTimer in combination with an asynchronous call

    - by Civelle
    Hi. We have an issue in our Silverlight application, which uses WCF and the Entity Framework: we need to trap the event whenever a user shuts down the application by closing the web page or the browser instead of closing the Silverlight application. This is in order to verify whether any changes have been made, in which case we would ask the user if he wants to save before leaving. We were able to accomplish the part which consists in trapping the closing of the web page: we wrote some code in the application object that has the web page call a method in the Silverlight application object. The problem starts when, in this method, we make an asynchronous call to the web service to verify whether changes have occurred (IsDirty). We are using a DispatcherTimer to check for the return of the asynchronous call. The problem is that the asynchronous call never completes (in debug mode, it never ends up stepping into the _BfrServ_Customer_IsDirtyCompleted method), while it used to work fine before we added this new functionality. You will find below the code we are using. I am new to writing timers in combination with asynchronous calls, so I may be doing something wrong, but I cannot figure out what. I have tried other things as well, but without any success.

    ====================== CODE ==============================================

        'Code in the application object
        Public Sub New()
            InitializeComponent()
            RegisterOnBeforeUnload()
            _DispatcherTimer.Interval = New TimeSpan(0, 0, 0, 0, 500)
        End Sub

        Public Sub RegisterOnBeforeUnload()
            'Register Silverlight object for availability in Javascript.
            Const scriptableObjectName As String = "Bridge"
            HtmlPage.RegisterScriptableObject(scriptableObjectName, Me)
            'Start listening to Javascript event.
            Dim pluginName As String = HtmlPage.Plugin.Id
            HtmlPage.Window.Eval(String.Format("window.onbeforeunload = function () {{ var slApp = document.getElementById('{0}'); var result = slApp.Content.{1}.OnBeforeUnload(); if (result.length > 0) return result; }}", pluginName, scriptableObjectName))
        End Sub

        Public Function OnBeforeUnload() As String
            Dim userControls As List(Of UserControl) = New List(Of UserControl)
            Dim test As Boolean = True
            If CType(Me.RootVisual, StartPage).LayoutRoot.Children.Item(0).GetType().Name = "MainPage" Then
                If Not CType(CType(Me.RootVisual, StartPage).LayoutRoot.Children.Item(0), MainPage).FindName("Tab") Is Nothing Then
                    If CType(CType(Me.RootVisual, StartPage).LayoutRoot.Children.Item(0), MainPage).FindName("Tab").Items.Count = 1 Then
                        For Each item As TabItem In CType(CType(Me.RootVisual, StartPage).LayoutRoot.Children.Item(0), MainPage).Tab.Items
                            If item.Content.GetType().Name = "CustomerDetailUI" Then
                                _Item = item
                                WaitHandle = New AutoResetEvent(False)
                                DoAsyncCall()
                                Exit For
                            End If
                        Next
                    End If
                End If
            End If
            If _IsDirty = True Then
                Return "Do you want to save before leaving."
            Else
                Return String.Empty
            End If
        End Function

        Private Sub DoAsyncCall()
            _Item.Content.CheckForIsDirty(WaitHandle) 'This code resides in the CustomerDetailUI UserControl - see below for the code
        End Sub

        Private Sub _DispatcherTimer_Tick(ByVal sender As Object, ByVal e As System.EventArgs) Handles _DispatcherTimer.Tick
            If Not _Item.Content._IsDirtyCompleted = True Then
                Exit Sub
            End If
            _DispatcherTimerRunning = False
            _DispatcherTimer.Stop()
            ProcessAsyncCallResult()
        End Sub

        Private Sub ProcessAsyncCallResult()
            _IsDirty = _Item.Content._IsDirty
        End Sub

        'CustomerDetailUI code
        Public Sub CheckForIsDirty(ByVal myAutoResetEvent As AutoResetEvent)
            _AutoResetEvent = myAutoResetEvent
            _BfrServ.Customer_IsDirtyAsync(_Customer) 'This method initiates the asynchronous call to the web service - all the details are not shown here
            _AutoResetEvent.WaitOne()
        End Sub

        Private Sub _BfrServ_Customer_IsDirtyCompleted(ByVal sender As Object, ByVal e As BFRService.Customer_IsDirtyCompletedEventArgs) Handles _BfrServ.Customer_IsDirtyCompleted
            If _IsDirtyFromRefesh Then
                _IsDirtyFromRefesh = False
                If e.Result = True Then
                    Me.Confirm("This customer has been modified. Are you sure you want to refresh your data ? " & vbNewLine & " Your changes will be lost.", "Yes", "No", Message.CheckIsDirtyRefresh)
                End If
                Busy.IsBusy = False
            Else
                If e.Result = True Then
                    _IsDirty = True
                    Me.Confirm("This customer has been modified. Would you like to save?", "Yes", "No", Message.CheckIsDirty)
                Else
                    Me.Tab.Items.Remove(Me.Tab.SelectedItem)
                    Busy.IsBusy = False
                End If
            End If
            _IsDirtyCompleted = True
            _AutoResetEvent.Set()
        End Sub
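    One thing worth noting here: _AutoResetEvent.WaitOne() blocks the thread it is called on, and in Silverlight the *Completed events of generated service proxies are raised back on the UI thread, so blocking that thread (which is also the thread the DispatcherTimer ticks on) can prevent Customer_IsDirtyCompleted from ever running. A minimal sketch of an event-driven alternative is below; it reuses the member names from the post (_BfrServ, _Customer, _IsDirty, _IsDirtyCompleted, Me.Confirm) and assumes the result can be consumed in the callback rather than waited on synchronously, since window.onbeforeunload needs its return value immediately and cannot wait for an asynchronous round trip anyway.

        ' Sketch: start the check and finish the work in the Completed handler
        ' instead of blocking the UI thread with WaitOne().
        Public Sub CheckForIsDirty()
            _IsDirtyCompleted = False
            AddHandler _BfrServ.Customer_IsDirtyCompleted, AddressOf OnIsDirtyCompleted
            _BfrServ.Customer_IsDirtyAsync(_Customer)   ' returns immediately; no WaitOne()
        End Sub

        Private Sub OnIsDirtyCompleted(ByVal sender As Object, ByVal e As BFRService.Customer_IsDirtyCompletedEventArgs)
            RemoveHandler _BfrServ.Customer_IsDirtyCompleted, AddressOf OnIsDirtyCompleted
            _IsDirty = e.Result
            _IsDirtyCompleted = True
            If _IsDirty Then
                ' Prompt (or raise an event the page subscribes to) here; by this point
                ' the browser's onbeforeunload handler has already returned.
                Me.Confirm("This customer has been modified. Would you like to save?", "Yes", "No", Message.CheckIsDirty)
            End If
        End Sub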

    Read the article

  • WinMo > ASMX WebException - how to get details?

    - by eidylon
    Okay, we've got an application which consists of a website hosting several ASMX web services, and a handheld application running on WinMo 6.1 which calls the web services. While developing in the office, everything worked perfectly. Now we've gone to install it at the client's site: we got all the servers set up and the handhelds installed, but the handhelds are no longer able to connect to the web service. I added extra code in my error handler to specifically trap WebException exceptions and handle them differently in the logging, to put out extra information (.Status and .Response). I am getting the status out, which is returning a [7], or ProtocolError. However, when I try to read the ResponseStream (using WebException.Response.GetResponseStream), it returns a stream with CanRead set to False, and I am thus unable to get any further details of what is going wrong. So I guess there are two things I am asking for help with: a) Any help with trying to get more information out of the WebException? b) What could be causing a ProtocolError exception? Things get extra complicated by the fact that the client has a full-blown log-in-enabled proxy set up on-site. This was stopping all access to the website initially, even from a browser, so we entered the login details in the network connection for HTTP on the WinMo device. Now it can get to websites fine. In fact, I can even pull up the web service and call the methods from the browser (PocketIE), so I know the device is able to see the web services via HTTP. But when trying to call them from the .NET app, it throws ProtocolError [7]. Here is my code, which logs the exception but fails to read out the Response from the WebException:

        Public Sub LogEx(ByVal ex As Exception)
            Try
                Dim fn As String = Path.Combine(ini.CorePath, "error.log")
                Dim t = File.AppendText(fn)
                t.AutoFlush = True
                t.WriteLine(<s>===== <%= Format(GetDateTime(), "MM/dd/yyyy HH:mm:ss") %> =====<%= vbCrLf %><%= ex.Message %></s>.Value)
                t.WriteLine()
                t.WriteLine(ex.ToString)
                t.WriteLine()
                If TypeOf ex Is WebException Then
                    With CType(ex, WebException)
                        t.WriteLine("STATUS: " & .Status.ToString & " (" & Val(.Status) & ")")
                        t.WriteLine("RESPONSE:" & vbCrLf & StreamToString(.Response.GetResponseStream()))
                    End With
                End If
                t.WriteLine("=".Repeat(50))
                t.WriteLine()
                t.Close()
            Catch ix As Exception : Alert(ix) : End Try
        End Sub

        Private Function StreamToString(ByVal s As IO.Stream) As String
            If s Is Nothing Then Return "No response found." ' THIS IS THE CASE BEING EXECUTED
            If Not s.CanRead Then Return "Unreadable response found."
            Dim rv As String = String.Empty, bytes As Long, buffer(4096) As Byte
            Using mem As New MemoryStream()
                Do While True
                    bytes = s.Read(buffer, 0, buffer.Length)
                    mem.Write(buffer, 0, bytes)
                    If bytes = 0 Then Exit Do
                Loop
                mem.Position = 0
                ReDim buffer(mem.Length)
                mem.Read(buffer, 0, mem.Length)
                mem.Seek(0, SeekOrigin.Begin)
                rv = New StreamReader(mem).ReadToEnd()
                mem.Close()
            End Using
            Return rv.NullOf("Empty response found.")
        End Function

    Thanks in advance!
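    On point (a): when Status comes back as ProtocolError, the WebException.Response is normally an HttpWebResponse carrying the server's HTTP status code and description, which usually narrow the problem down (for instance a 407 from the proxy versus a 404/500 from IIS). A hedged fragment along the lines below could slot into the TypeOf ex Is WebException branch of LogEx above; t is the same log writer as in the post, and on the Compact Framework the body may still be unreadable, in which case the status line is the useful part.

        ' Sketch: pull protocol-level details out of the WebException before
        ' giving up on the response stream.
        Dim wex As WebException = CType(ex, WebException)
        t.WriteLine("STATUS: " & wex.Status.ToString())
        Dim resp As HttpWebResponse = TryCast(wex.Response, HttpWebResponse)
        If resp IsNot Nothing Then
            t.WriteLine("HTTP STATUS: " & CInt(resp.StatusCode).ToString() & " " & resp.StatusDescription)
            Try
                Using sr As New StreamReader(resp.GetResponseStream())
                    t.WriteLine("BODY: " & sr.ReadToEnd())
                End Using
            Catch readEx As Exception
                t.WriteLine("BODY: unreadable (" & readEx.Message & ")")
            End Try
        End If

    On point (b), and only as a hedged guess given the authenticating proxy: credentials entered for the device's HTTP connection do not necessarily apply to managed code, and the generated ASMX proxy class exposes Proxy and Credentials properties that can be pointed at the proxy with explicit credentials, which may be worth trying if the HTTP status above turns out to be 407.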

    Read the article
