Search Results

Search found 3262 results on 131 pages for '410 gone'.


  • and the winner is Google Chrome

    - by anirudha
    Browser wars are far from settled, but here is why I think Google Chrome is the better browser.

    1. Easy to install: unlike IE9, Chrome does not force you onto a new OS, and it installs in minutes, faster than its other competitor, Firefox.

    2. Easy to test: trying a Chrome beta is painless. Firefox 4 beta users discover that many good add-ons, the Web Developer toolbar among a long list of others, no longer work, while a Chrome beta gives you more than the last official Chrome release did.

    3. Chrome Sync: I have used sync in Firefox for a long time and never found much in it. Chrome's sync system is much better: when you sign in to sync, Chrome restores every setting from last time, including apps, autofill, bookmarks, extensions, preferences and theme. You can even check your bookmarks from another browser, because Google keeps a backup of them in your Google Docs account.

    4. Performance: in my testing, a website that took 36 seconds to open in Firefox opened in 10 seconds in Chrome. Interestingly, when I tested offline in IE8, the page showed in one or two seconds. After puzzling over how that is possible, I realized that IE is integrated software from Microsoft: both Visual Studio and IE are integrated with Windows, which is also why JavaScript errors in IE show up in Visual Studio rather than in the browser itself, unlike in Chrome. On the other hand, Chrome does not have as vast a range of extensions as Firefox, so developers spend less time on Chrome; that could be a problem for Chrome's future.

    5. Interface: Chrome has a plain but user-friendly interface that anyone can use easily. Have you seen the menu in Firefox 4? They have made it complex, just as Microsoft did with the whole of IE9. The IE team seems to think it can fool everybody with an "HTML5 inside IE" slogan. Pages in IE9 appear only after a few seconds, and sometimes it reports "page not found" even though nothing is wrong with the site. Anyone who opens the IE9 developer tools has to ask, "is this really a developer tool?"; it is nothing like what the Firebug team has built for Firefox. And notice that if you want to install Visual Studio, Microsoft forces you to install SQL Server even if you use another database system. On top of that, today we hear that Microsoft has launched Silverlight 5; Silverlight is Microsoft's copy of Adobe's idea, Adobe Flash, although, to be fair, with Silverlight we can use .NET languages instead of ActionScript, Lingo or Shockwave.

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form in a webpage and something happens before you can finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox.

    Lazarus: Form Recovery in Action
    For our first example we chose the comment text box area for one of the articles here at the website. As you can see we were not finished typing in the whole comment yet… Notice the "Lazarus Icon" in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again all of our text was gone. Right clicking within the text area showed two options available: "Recover Text" and "Recover Form". Notice that our lost text was listed as a sub-menu…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again.

    In our second example we chose the sign-up form page for the website. As before, we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right clicked in the browser window area and there was that wonderful "Recover Form" command waiting to be used. One click and… all of our lost form data was back and we were able to finish filling in the form. For those who may be interested, you can disable Lazarus: Form Recovery on individual websites using the context menu for the status bar icon.

    Options
    There are three sections in the options and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first options area focuses on display/access for the extension. The second options area allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in Private Browsing Mode. The third options area focuses on the Lazarus database itself.

    Conclusion
    If you have ever lost text area or form data before then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort.

    Links
    Download the Lazarus: Form Recovery extension (Mozilla Add-ons)
    Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    We’ve been extremely busy here on the Oracle WebCenter team. We hope that you’ve all been keeping up with the interesting news each week. Last week was jammed full of GartnerPCC and Gartner360 buzz. If you missed any of the highlights, be sure to check out Kellsey’s post from last week, Gartner PCC: A Shovel & Some Ah-Ha's, and Christie’s overview of Loren Weinberg’s PCC presentation, "Here Today, Gone Tomorrow: Engage Your Customers or Lose Them". This week, we’ll be focusing on "Doing More with WebCenter", leading up to a great webcast scheduled for Thursday, March 22 (invite and registration link below). This is the 2nd in a series of 3 webcasts dedicated to expanding the understanding of the full capabilities of WebCenter. Yes, that might mean that you are not getting the full benefits of the software you already own, or of the expansion potential via an upgrade to the full WebCenter Suite Plus. Tune in on Thursday, 10 a.m. PT / 1 p.m. ET.

    ++++++++++++++

    Want to be a Speaker at Oracle OpenWorld 2012? Oracle OpenWorld planning has already kicked off. We know that it is only March and next October is far in the distance, but planning has already started for Oracle OpenWorld 2012. So if you want to be a speaker and propose your own session for this year's event in San Francisco, September 30th - October 4th, start thinking now! The annual OpenWorld Call for Papers is open until April 9th. All of the details to submit a paper are available here. Of course, the WebCenter team here is interested in sessions including case studies, thought leadership, and customer stories around any of the Oracle WebCenter solutions, but the Call for Papers is open to all Oracle topics. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. Sell your session, because there will be a lot of competition to be selected. Bonus news: speakers for selected sessions receive a complimentary full conference pass! Get your papers in and we'll see you in San Francisco!

    ~~~~~~~~~~~~~~~~~~~~~~

    Webcast Series: Do More with Oracle WebCenter - Expand Beyond Content Management. Enable employees, partners, and customers to do more with your content. Did you know that, in addition to content management, Oracle WebCenter now also includes comprehensive portal, composite application, collaboration, and Web experience management capabilities? Join us for this Webcast, the second in the "Do More With Oracle WebCenter" series, and learn how you can provide a new level of user engagement. Learn how Oracle WebCenter:
    Drives task-specific application data and content to a single screen for executing specific business processes
    Enables mixed internal and external environments where content can be securely shared and filtered with employees, partners, and customers, based upon role-based security
    Offers Web experience management, driving contextually relevant, social, and interactive online experiences across multiple channels
    Provides social features that enable sharing, activity feeds, collaboration, expertise location, and best-practices communities
    Learn how to do more with Oracle WebCenter. Register now for the Webcast: March 22, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: Michelle Huff, Senior Director, WebCenter Product Management, Oracle, and Greg Utecht, Project Manager, IT Operations, TIES.

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

    Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well; perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly – any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead:

    Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here, though, is that your package will ignore the fact that it can't find the configuration and execute anyway. This is really, really bad because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above.

    Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required'. Any parameter with Required=True must have a value passed to it when the package executes; any attempt to execute the package without supplying one will result in an error. We see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL (a sketch of which appears at the end of this post). The error is:

    Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112 In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Marking a parameter as Required means that any package in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True; certainly any that are used to define external locations should be, anyway. @Jamiet
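    PS: For those who want to see what supplying a required parameter looks like from T-SQL, here is a rough sketch using the SSIS catalog stored procedures as they appear in current Denali builds (the folder, project and parameter names here are invented, and procedure names may yet change before release):

        DECLARE @execution_id BIGINT;

        -- Create an execution for the package
        EXEC SSISDB.catalog.create_execution
            @folder_name  = N'ETL',
            @project_name = N'LoadWarehouse',
            @package_name = N'LoadFacts.dtsx',
            @execution_id = @execution_id OUTPUT;

        -- Supply the required parameter before starting (20 = project parameter, 30 = package parameter)
        EXEC SSISDB.catalog.set_execution_parameter_value
            @execution_id,
            @object_type     = 20,
            @parameter_name  = N'TargetServerName',
            @parameter_value = N'PRODDB01';

        -- Without the call above, the next statement fails with Msg 27184
        EXEC SSISDB.catalog.start_execution @execution_id;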

    Read the article

  • What Counts For a DBA: Ego

    - by Louis Davidson
    Leaving aside, for a second, Freud’s psychoanalytical definitions, the term "ego" generally refers to a person’s sense of self, and their self-esteem. In casual usage, however, it usually appears in the adjectival form, "egotistical" (most often followed by "jerk"). You don’t need to be a jerk to be a DBA; humility is important. However, ego is important too. A good DBA needs a certain degree of self-esteem…a belief and pride in what he or she can do better than anyone else can. The ideal DBA needs to be humble enough to admit when they are wrong but egotistical enough to know when they are right, and to stand up for that knowledge and make their voice heard. In most organizations, the DBA team is seriously outnumbered by headstrong developers and clock-driven managers, and "great" DBAs will often be outnumbered by…well…the not so great. In order to be heard in this environment, a DBA will not only need to be very skilled, but will also need a healthy dose of ego.

    As Freud might have put it, the unconscious desire of the DBA (the id) is for iron-fist control over their databases, and the code that runs in them. However, the ego moderates this desire, seeking to "satisfy the id in realistic ways that, in the long term, bring benefit rather than grief". In other words, the ego understands the need to exert a measure of control and self-belief, but also to tolerate and play nicely with developers and other DBAs. The trick, naturally, is learning how to be heard when it is important, but also to make everyone around you welcome that input, even when you have to be bold enough to make the "I know what I am talking about, and you…well…not so much" decisions.

    Consider a baseball team, bottom of the ninth inning of the championship game, man on first and down one run. Almost anyone on that team will have the ability to hit a home run, but only one or two will have the iron belief that they can pull it off in this critical, end-game situation. The player you need in this situation is the one who has passionately gone the extra mile preparing for just this moment, is bursting at the seams with self-confidence, and can look the coach in the eye and state, boldly, "Put me in, I am your best bet". Likewise, on those occasions when high customer demand coincides with copious system errors, and panic is bubbling just beneath the surface, you don’t need the minimally qualified support person, armed with the "reboot and hope" technique (though that sometimes works!). You need the DBA who steps up and says, "Put me in", and who has the skill and tenacity to back up those words, to pinpoint and fix the problem, whatever it takes, while keeping customers and managers happy. Of course, the egotistical DBA will happily spend hours telling you how great they are at their job, and how brilliantly they put out a previous fire, and this is no guarantee that they can deliver. However, if an otherwise-humble DBA looks you in the eye and says, "I can do it", then hear them out. Sometimes, this burst of ego will be exactly what’s required.

    Read the article

  • Rules and advice for logging?

    - by Nick Rosencrantz
    In my organization we've put together some rules and guidelines about logging that I would like to know if you can add to or comment on. We use Java, but you may comment about logging in general.

    Use the correct logging level:
    ERROR: Something has gone very wrong and needs fixing immediately.
    WARNING: The process can continue without fixing. The application should tolerate this level, but the warning should always be investigated.
    INFO: Information that an important process has finished.
    DEBUG: Only used during development.

    Make sure that you know what you're logging, and make sure the logging does not influence the behavior of the application; the function of the logging should only be to write messages to the log.

    Log messages should be descriptive, clear, short and concise. There is not much use for a nonsense message when troubleshooting.

    Set the right properties in log4j, so that the correct method and class are written out automatically. Example (dated-file appender for a web app):

        log4j.rootLogger=ERROR, DATEDFILE
        log4j.logger.org.springframework=INFO
        log4j.logger.waffle=ERROR
        log4j.logger.se.prv=INFO
        log4j.logger.se.prv.common.mvc=INFO
        log4j.logger.se.prv.omklassning=DEBUG
        log4j.appender.DATEDFILE=biz.minaret.log4j.DatedFileAppender
        log4j.appender.DATEDFILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%C{1}.%M] - %m%n
        log4j.appender.DATEDFILE.Prefix=omklassning.
        log4j.appender.DATEDFILE.Suffix=.log
        log4j.appender.DATEDFILE.Directory=//localhost/WebSphereLog/omklassning/

    Log values. Please log the relevant values from the application.

    Log prefix. State which part of the application the logging is written from, preferably with a prefix agreed on for the project, e.g. PANDORA_DB.

    Amount of text. Be careful that there is not too much logging text; it can influence the performance of the app.

    Logging format. There are several variants and methods to use with log4j, but we would like a uniform use of the following format when we log exceptions:

        logger.error("PANDORA_DB2: Error fetching deadline from TP210_RAPPORTFRIST", e);

    In the example above it is assumed that we have set the log4j properties so that the class and the method are written out automatically.

    Always use the logger, never System.out.println(), System.err.println() or e.printStackTrace().

    If the web app uses our framework you can get very detailed error information from EJB, if you use try-catch in the handler and log according to the model above. In our project we use conversion patterns with which names are written out automatically. Here we use two different patterns, one for the console and one for the dated-file appender:

        log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.DATEDFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

    In both examples above the logger name will be written out; on the console the row number (%L) will also be written out.

    toString(). Please have a toString() for every object.
    EX:

        @Override
        public String toString() {
            StringBuilder sb = new StringBuilder();
            sb.append(" DwfInformation [ ");
            sb.append("cc: ").append(cc);
            sb.append("pn: ").append(pn);
            sb.append("kc: ").append(kc);
            sb.append("numberOfPages: ").append(numberOfPages);
            sb.append("publicationDate: ").append(publicationDate);
            sb.append("version: ").append(version);
            sb.append(" ]");
            return sb.toString();
        }

    instead of a special method that makes these outputs:

        public void printAll() {
            logger.info("inbet: " + getInbetInput());
            logger.info("betdat: " + betdat);
            logger.info("betid: " + betid);
            logger.info("send: " + send);
            logger.info("appr: " + appr);
            logger.info("rereg: " + rereg);
            logger.info("NY: " + ny);
            logger.info("CNT: " + cnt);
        }

    So is there anything you can add, comment on, or find questionable in these ways of using logging? (A minimal example class following these rules is sketched below for reference.) Feel free to answer or comment even if it is not related to Java; Java and log4j are just one implementation of how this can be reasoned about.
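    For reference, the minimal sketch mentioned above: a class following these rules might look like this (the class name, prefix and messages are invented for illustration):

        import org.apache.log4j.Logger;

        public class RapportService {

            // One logger per class; the class and method columns come from the conversion pattern
            private static final Logger logger = Logger.getLogger(RapportService.class);

            public void process(String id) {
                logger.debug("PANDORA_DB: processing started, id=" + id); // development only
                try {
                    // ... business logic ...
                    logger.info("PANDORA_DB: report processing finished, id=" + id);
                } catch (Exception e) {
                    // Always pass the exception itself so the stack trace ends up in the log
                    logger.error("PANDORA_DB: could not process report, id=" + id, e);
                }
            }
        }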

    Read the article

  • UEFI Dual-Boot - Ubuntu 12.04.3 + Windows 8.1 (One GPT HDD)

    - by swafbrother
    Hello, I'm having trouble setting up a dual-boot (Ubuntu 12.04 LTS and Windows 8.1) on my ASUS K55VM laptop's hard disk drive (500 GB). I was mostly following tutorials for doing this, but at some point something has gone wrong. Up to now, I have followed these steps:

    1. I formatted my HDD into GPT.
    2. I clean-installed Windows 8.1. I didn't prevent Windows from choosing the partitions to use, and it created these partitions: a Recovery partition (sda1), an EFI System Partition (sda2), a Microsoft Reserved Partition (sda3), and a Windows data partition or C drive (sda4).
    3. I reduced the Windows data partition via Windows' Disk Management.
    4. I made a bootable USB stick with Ubuntu 12.04 LTS from the ISO, using Universal USB Installer.
    5. I created these partitions for Ubuntu: a boot partition, mounted at /boot (sda5); a root partition, mounted at / (sda6); and a swap partition (sda7). In "Device for boot loader installation" I chose /dev/sda.
    6. When I rebooted, it went straight into Ubuntu, so I installed Boot-Repair and clicked on Recommended Repair. It automatically did its job without asking for anything.
    7. I rebooted and Grub showed up, with a lot of options.
    8. At this point I had a decent dual-boot setup; Ubuntu and both Windows entries worked fine: Ubuntu; Windows Boot UEFI Loader; Windows UEFI bkpbootmgfw.efi.
    9. I executed this command: sudo grub-install --force /dev/sda5.
    10. Then I tried to make Windows 8.1's Boot Manager the main boot manager, so that I could choose which OS to boot into from a menu. I downloaded EasyBCD on Windows; it showed 2 Ubuntu entries and 1 Windows entry.
    11. I went into the BCD Deployment tab and clicked on Write MBR.
    12. At this point, I went into the BIOS and made Windows Boot Manager the first boot option.
    13. When I rebooted, I got a black screen with the message "efidisk read error", and then (I guess) it switched to the next boot option, which is Ubuntu, resulting in Grub showing up.
    14. From Grub, the Ubuntu entry is working and so are both Windows entries. If I choose Ubuntu, it boots into Ubuntu normally. But if I choose Windows, it goes into Windows' boot manager, where a menu shows up: Ubuntu; Ubuntu; Windows 8.1. If I choose Windows there, it boots into Windows without any problem. If I choose Ubuntu, it boots back into Grub (back to step 14).

    Here's my BootInfo summary: http://paste.ubuntu.com/6698171/

    Windows Boot Manager is clearly not working as expected; I can't boot directly into it, and I can't boot into it from the BIOS either (efidisk read error again). If I want to boot into Windows I need to boot into Grub first, which is the opposite of what I wanted. I need help at this point. What is the best thing I can do? Is there a more reliable and/or simpler way of accomplishing a satisfying dual-boot for this situation? Can someone provide a way to go back to step 8, where I had a more efficient dual-boot setup? If only I could undo what I did with EasyBCD and skip Windows' Boot Menu... Can someone provide a way to fix this mess? Thanks in advance, and sorry for the length of this; I wanted to be exhaustive.

    Read the article

  • A Case for Oracle Fusion Middleware by Lucas Jellema

    - by JuergenKress
    An in-depth look at the interaction of people, processes, and technologies in the transition to a service-oriented architecture.

    Author's Note: This article presents a profile of a fictitious organization, NOPERU. The story of NOPERU as told in this article is actually a collage of the events at some dozen organizations that I have been involved with over the past few years. None of these organizations sports all the characteristics of NOPERU, but all of them have gone through or are going through a similar transition to the one described here, and all aspects of this article were taken from real life at one or usually many of these organizations.

    Background: NOPERU (National Organization for Permits for Emissions and Resource Usage) is a public organization that continues to transform in terms of its business, organization and technology. Changing business requirements, new interaction channels, and increasing demands for more flexibility, faster throughput and lower costs drive these transformations, while technological evolution and new architecture patterns enable the change. NOPERU chose Oracle Fusion Middleware as the technology platform on which to implement the new architecture and required applications. This article takes a close look at NOPERU's journey from its origins in the early 1990s as a largely paper-based entity with regional databases and client-server Oracle Forms applications. Its upcoming business objectives are introduced: what is required of the organization, and what the higher goals behind these requirements are. The architecture roadmap is described at a high level as well as drilled down to a service-oriented design. Based on the architecture roadmap and the business requirements, NOPERU went through a technology selection to determine the technology stack with which the future would be realized in terms of IT. The article discusses that selection and details the projects subsequently planned (and executed to date). The new architecture and technology, as well as the introduction of an Agile development method, have had substantial consequences for the IT organization, the processes and individual staff members. The approach NOPERU has adopted with regard to the people and the organization is portrayed. Finally, the article discusses many conclusions that NOPERU has drawn that may benefit itself and other organizations.

    Introducing NOPERU: NOPERU is a national organization charged with issuing permits for excessive emissions (i.e., carbon dioxide) and disproportionate usage of such resources as energy or water. Anyone, whether a commercial enterprise, government agency or private person, who emits or consumes more than what is considered "fair usage" requires such a permit. When someone builds an outdoor heated swimming pool, for example, or open-air terrace heating, such a permit needs to be obtained. When a company installs new, energy-intensive equipment, such as water boilers or deep freezers, it too needs to get a NOPERU permit. Government-sponsored projects at every level that involve consumption of large quantities of fresh water or production of high volumes of emissions must turn to NOPERU for a permit. Without the required license, any interested party can get a court to immediately put a stop to the disputed activity. Read the full article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Uninstall, Disable, or Remove Windows 7 Media Center

    - by Mysticgeek
    Although Windows 7 Media Center has improved a lot over previous versions of Windows, you might want to disable it. Here we take a look at a couple of methods to get rid of it. There are a variety of reasons you might want to disable Windows 7 Media Center: maybe you own a business and don't want it to run on the machines, or perhaps you don't use it at all and just don't want it around.

    Turn Off WMC Using Programs and Features
    Probably the easiest way to get rid of it on all versions of Windows 7 is to open the Control Panel and select Programs and Features. This method is similar to disabling Internet Explorer 8 in Windows 7. On the left-hand panel click on Turn Windows Features on or off. Scroll down to Media Features and expand the folder, then uncheck Windows Media Center. You'll get a verification message making sure you want to disable it; click Yes. Then the box next to Windows Media Center will be empty; click OK, and wait while WMC is disabled. To complete the process a reboot is required. After getting back from the restart, the WMC icon will be gone and there won't be any way to launch it.

    Re-enable WMC
    If you want to re-enable it, just go back in and recheck it. Again you'll need to wait while it's configured but, when it's done, a restart is not required.

    Disable Media Center Using Group Policy
    Note: This process uses the Group Policy Editor, which is not available in Home versions of Windows 7. Click on the Start menu, type gpedit.msc into the Search box and hit Enter. Now navigate to User Configuration \ Administrative Templates \ Windows Components \ Windows Media Center. Double-click on Do not allow Windows Media Center to run, then select the radio button next to Enabled, click OK, and close out of the Group Policy Editor. Now if a user tries to launch WMC they will get a message saying it has been disabled.

    Conclusion
    If you're not a fan of Windows Media Center or want to disable it for whatever reason, the process is simple and there are a couple of ways you can do it. WMC is not included in the Starter or Home Basic versions of Windows 7. If you're new to Windows 7 Media Center, you might want to check out our guide on getting started and setting up live TV.

    Read the article

  • IE9 Beta

    - by Daniel Moth
    I've been using Internet Explorer 8 since the early pre-release bits, but I never tried IE9 until today – the day the Beta is available. I downloaded it from here: http://www.beautyoftheweb.com/ The download took longer than I expected, but I was doing other stuff, so no bother. After it came down, it asked me to reboot my computer. I really hate when apps do that, but I did it anyway. The first time I launched it, it prompted me with a list of add-ons I should disable, including the start-up time I could save for each one. It even let me configure the prompt so, for example, it won't prompt me again unless an add-on contributes more than 1 second to the startup time. Cool.

    The first thing I noticed is that the search bar is gone and, as you'd expect, you have to search from the address box. I totally despise this feature. The first thing I've done with all versions of IE is turn off automatic searching from the address bar, and now I have no way of searching if I do that. Ridiculous.

    The second thing I notice is that the tabs are next to the address bar and cannot be moved to go below it. One word for that decision: appalling (and, no, I didn't accidentally drop an 'e' and add an 'l' in the previous word).

    The third thing I notice, to the right, is the favorites button (star icon). When I click on it, it brings up the favorites explorer under it on the right; then I pin the explorer and it jumps to the left(!). Why move the entry point to this feature to the right instead of leaving it on the left is beyond me (other than wanting to retrain me on what I've been used to for all this time), but the fact that pinning it makes it jump sides is… an "astonishing" design decision.

    As I browse I notice a little annoying pop-up in the bottom left every time I hover over a link; there is no status bar. I correctly guessed to right-click at the top and turn on the status bar (which also got rid of the pop-up thereafter) and, while I am at it, I bring back my favorites bar, which was hidden by default (and am pleased to see that all my favorites are still there).

    The next thing I notice, I like: IE9 is fast. No joke, I visit sites and they seem to load visibly faster – try it! Beyond the speed, I am interested to find out what else is new. I searched and found a few good links: What's new in Internet Explorer 9; Internet Explorer 9 Features (check out the links under "Clean"); Top Features. If you are a developer, check out IE's msdn home for many articles, e.g. this section on Canvas and SVG. Either way: wherever you are, get the IE9 Beta now and judge for yourself. If you don't like it, you can always uninstall (which auto-restores the previous version). Comments about this post welcome at the original blog.

    Read the article

  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
    In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages, and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page: ALL ASP classic pages lock up. Ah yes, gotta love old tech gone bad. It's not super critical since the content is really old, but still a hassle, since it's linked content that still gets quite a bit of traffic. When it happens, all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object. I have a Web Monitor running on the server that's checking for failures, and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool.

    I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this:

        using System;
        using System.DirectoryServices;

        namespace RecycleApplicationPool
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Defaults; optionally overridden by command-line arguments
                    string appPoolName = "DefaultAppPool";
                    string machineName = "LOCALHOST";
                    if (args.Length > 0)
                        appPoolName = args[0];
                    if (args.Length > 1)
                        machineName = args[1];

                    try
                    {
                        Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "...");

                        // Bind to the AppPool through the IIS ADSI provider and invoke its Recycle method
                        DirectoryEntry appPool = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" + appPoolName);
                        Console.WriteLine(appPool.InvokeGet("Name"));
                        appPool.Invoke("Recycle");

                        Console.WriteLine("Application Pool recycling complete...");
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Error: Unable to access AppPool: " + ex.Message);
                    }
                }
            }
        }

    To run it you basically provide the name of the application pool and, optionally, a machine name if it's not on the local box:

        RecycleApplicationPool.exe "WestWindArticles"

    And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool and, more importantly, AppPool recycles happen in a staggered fashion - the existing instance isn't shut down immediately; requests are allowed to finish while a new instance is fired up to handle new requests. So now I can easily plug this executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out, which is a big improvement over hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recycling code. It's clearly trivial, but I've written batch files for this a bunch of times before, and actually having a small utility around without having to worry whether PowerShell is installed and configured right is actually an improvement. Next time I think about using PowerShell, remind me that it's just easier to just build a small .NET Console app, 'k? :-)
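    (For the record, the PowerShell incantation I was fumbling for looks roughly like this - a sketch assuming the WebAdministration module that ships with IIS 7's PowerShell support is available, with the pool name being whatever yours is called:

        Import-Module WebAdministration
        # Recycle just this application pool, leaving the rest of IIS alone
        Restart-WebAppPool -Name "WestWindArticles"

    No compilation required, but you do have to count on the module being present on every box you run it on.)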
    Resources: Download Executable and VS Project
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, .NET, Windows

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 environment to TFS 2010. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade.

    When this environment was set up, based on support and limitations for TFS 2008, we used a 32-bit platform for the TFS Application Tier and Build Servers. The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64-bit installation. We knew at that point that TFS 2010 was in the picture, so that served as further motivation to make that a 64-bit install of SQL Server. The SQL Server at that point was a single (Default) instance installation too. We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve.

    I am sure many of you who have gone through a TFS 2008 installation have encountered challenges and trials, and likely even more so if you, like me, needed to configure your deployment for SSL. So, frankly, I was a little concerned about the process of migrating. They say practice makes perfect, and this environment I worked on is in some ways my brain child, so I was not ready nor willing for this to be a failure or something that would impact my client's work.

    Prior to going through the migration process, we did the install of the new environment. The Data Tier was the same, with a new named instance in place to host the 2010 install. The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate that all components were in place as they should be.

    Anyway, on to the tasks for the migration (thanks to Martin Hinshelwood for his very thorough documentation):

    1. Close access to TFS 2008; you want to make sure all code is checked in and ready to go. We allowed 8 hours between code lock and the start of the migration to give time for any unexpected delay. How do we close access? Stop IIS.

    2. Back up your databases. Which ones?
    TfsActivityLogging
    TfsBuild
    TfsIntegration
    TfsVersionControl
    TfsWorkItemTracking
    TfsWorkItemTrackingAttachments

    3. Restore the databases to the new named instance (make sure you keep the same names).

    4. Now comes the fun part: the actual import/migration of the databases. A couple of things happen here. The TfsIntegration database will be scanned, and the other databases will be checked to validate that they exist. Those databases then go through a process in which data is extracted and transferred to the TfsVersionControl database, which is then renamed to Tfs_<Collection>. You will be using a tool called tfsconfig with the import option. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools); the command to use is as follows:

        tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed

    Here <instance> is the SQL Server instance you restored the databases to, and <name> is the name you will give the collection. As for /confirmed, this flag asserts that you have done a backup of the databases. Why? Well, remember that you are going to merge the restored databases when you execute the tfsconfig import command.

    The process will go through about 200 tasks. Once it completes, go to the Team Foundation Server Administration Console and validate your imported databases and contents.
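    For reference, the backup and restore in steps 2 and 3 are plain T-SQL along these lines (a sketch; the paths and logical file names here are invented, so list yours first with RESTORE FILELISTONLY, and repeat for each of the six databases):

        -- Step 2: on the old Default instance, back up each TFS 2008 database
        BACKUP DATABASE TfsVersionControl
        TO DISK = N'E:\Backup\TfsVersionControl.bak'
        WITH INIT, CHECKSUM;

        -- Step 3: on the new named instance, restore under the same name,
        -- moving the files into the new instance's data and log folders.
        -- (RESTORE FILELISTONLY FROM DISK = N'E:\Backup\TfsVersionControl.bak' shows the logical names.)
        RESTORE DATABASE TfsVersionControl
        FROM DISK = N'E:\Backup\TfsVersionControl.bak'
        WITH MOVE N'TfsVersionControl' TO N'D:\MSSQL\Data\TfsVersionControl.mdf',
             MOVE N'TfsVersionControl_log' TO N'D:\MSSQL\Logs\TfsVersionControl_log.ldf',
             RECOVERY;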
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.

    Read the article

  • So You Want to Be a Social Media Director

    - by Mike Stiles
    Do you want to be a Social Media Director? Some say the title is already losing its relevance; that social should be a basic skill that is required and used no matter what your position is inside the enterprise. I suppose that's visionary, and a fun thing for thought leaders to say. But in the vast majority of business organizations, we're so far away from that reality that the thought of not having someone driving social's implementation and guiding its proper usage conjures up images of anarchy. That said, social media has become so broad, so catch-all, and so extended across business functions, that today's Social Media Director, depending on the size of their staff, must make jacks-of-all-trades look like one-trick ponies. Just as the purview of the CMO has grown all-encompassing, the disciplines required of their heads of social are stacking up.

    Master of Content: Every social pipeline you build must stay filled, with quantity and quality. Content takes time, and the job never stops. Never. And no, it's not true that anybody can write.
    Master of Customer Experience: You must have a passion for hearing from customers and making them really happy.
    Master of PR: You must know how to communicate and leverage the trust you've built when crises strike. Couldn't hurt to be a Master of Politics.
    Master of Social Technology: So many social management tools on the market. You have to know what social tech ecosystem makes sense and avoid piecemeal point solutions.
    Master of Business Development: Social for selling and prospecting is hot, and you have to know how to use social to do it.
    Master of Analytics: Nothing else matters if you can't prove social is helping the brand. That's right, the creative content guy also has to be a math and stats geek. Good luck with that.
    Master of Paid Media: You've got to learn the language, learn the tactics, learn the vendors, and learn how to measure results.
    Master of Education: Guess who gets to teach everyone who has no clue how to use social for business.
    Master of Personal Likability: You'll be leading the voice, tone, image and personality of the brand. If you don't instinctively know how to be liked by actual people, the brand will be starting from a deficit.

    How deep must you go in this parade of masteries? Again, that depends on your employer's maturity level in social. Serious players recognize these as distinct disciplines requiring true experts for maximum effect. Less serious players will need you to execute personally in many of these areas. Do the best you can, and try to grow quickly at each. If you're the sole person executing all social…well…you're in the game of managing expectations and trying to socially educate your employer. The good news is, you should be making a certifiable killing. If you're alone and your salary is modest, it's time to understand how many brands out there crave what you've mastered. Not to push back against thought leaders, but the need for brand social leadership has not gone away…not even a little bit. @mikestiles @oraclesocial
    Photo: Stefan Wagner, freeimages.com

    Read the article

  • Dark Sun Dispatch 001

    - by Chris Williams
    If you aren't into tabletop (aka pen & paper) RPGs, you might as well click to the next post now... Still here? Awesome. I've recently started running a new D&D 4.0 Dark Sun campaign. If you don't know anything about Dark Sun, here's a quick intro:

    The campaign takes place on the world of Athas, formerly a lush green world that is now a desert wasteland. Forests are rare in the extreme, as are water and metal. Coins are made of ceramic, and weapons are often made of hardened wood, bone or obsidian. The green age of Athas was centuries ago, and the current state was brought about through the reckless use of sorcerous magic. (In this world, you can augment spells by drawing on the life force of the world and the people around you. This is called defiling. Preserving magic draws upon the caster's own life force and does not damage the surrounding world, but it isn't as powerful.) Humans are pretty much unchanged, but the traditional fantasy races have changed quite a bit. Elves don't live in the forest; they are shifty and untrustworthy desert traders known for their ability to run long distances through the wastes. Halflings are not short, fat, pleasant little riverside people. Instead they are bloodthirsty feral cannibals that roam the few remaining forests and ride reptilian beasts akin to raptors. Gnomes are extinct, as are orcs. Dwarves are mostly farmers and gladiators, and live out in the sun instead of staying under the mountains. Goliaths are half-giants, not known for their intellect. Muls are a dwarf and human crossbreed that displays the best traits of both races (human height and dwarven stoutness). Thri-Kreen are sentient mantis people that are extremely fast. Most of the same character classes are available, with a few new twists. There are no divine characters (such as priests, paladins, etc.) because the gods are gone. Nobody alive today can remember a time when they were still around. Instead, some folks worship the elemental forces (although they don't give out spells). The cities are all ruled by Sorcerer King tyrants (except one city: Tyr) who are hundreds of years old and still practice defiling magic whenever they please. Serving the Sorcerer Kings are the Templars, who are also defilers and psionicists. Crossing them is as bad, in many cases, as crossing the Kings themselves. Between the cities you have small towns and trading outposts, and mostly barren desert with sometimes 4-5 days on foot between towns and the nearest oasis. Being caught out in the desert without adequate supplies and protection from the elements is pretty much a death sentence for even the toughest heroes. When you add in the natural (and unnatural) predators that roam the wastes, often in packs, most people don't last long alone.

    In this campaign, the adventure begins in the (small) trading fortress of Altaruk, a couple weeks' walking distance from the newly freed city of Tyr. A caravan carrying trade goods from Altaruk has not made it to Tyr, and the local merchant house has dispatched the heroes to find out what happened and to retrieve the goods (and drivers) if possible. The unlikely heroes consist of a human shaman, a thri-kreen monk, a human wizard, a kenku assassin and a (void aspect) genasi swordmage. Gathering up supplies and a little liquid courage, they set out into the desert and manage to find the northbound tracks of the wagon. Shortly after finding the tracks, they are ambushed by a pack of silt-runners (small lizard people with very large teeth and poisoned pointy spears).
    The party makes short work of the creatures, taking a few minor wounds in the process. Proceeding onward without resting, they find the remains of the wagon and manage to sneak up on a pack of kruthiks picking through the rubble and spilled goods. Unfortunately, they fail to take advantage of the opportunity and have a hard fight ahead of them. The party defeats the kruthiks, but takes heavy damage (and almost loses a couple of their own) in the process. Once the kruthiks are dispatched, they follow a set of tracks further north to a ruined tower...

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn't want to believe, but alas, it was true.

    Several days ago I posted my First Impressions of the SQL Sentry Power Suite. Today's post could fall into the category of "Hey, as long as you have that fancy tool…". Unfortunately, it also falls into the category of an overloaded worker taking someone else's word for the truth, not verifying it with independent fact-checking, and then making decisions based on that. Here's my story…

    I'm not exactly an Accidental DBA (or Involuntary DBA, as Paul Randal calls it). I came to this company five years ago as a lead application developer with extensive experience in database design and development. I worked my way into management and, along the way, took over the DBA responsibilities. Fortunately, our systems run pretty smoothly most of the time, but I'm always looking for ways to make them better and to fit into my understanding of best practices. When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances. Both of these servers were configured with the operating system and application files on the C drive, data files on a different drive letter, and log files on a third drive letter. Even before I took over as DBA, I verified with a previous server administrator that this was true and that these represented actual separate disks. He stated that they did, and I thought that all was well.

    Then one day I'm poking around inside the SQL Sentry Performance Advisor, checking out features as I evaluate whether to purchase the product, and I come across a Disk Configuration section. The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic. But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays), but rather two, and that the log files were merely on a separate volume on the same physical array as the OS. I figured that I must be reading it wrong, so I scanned the Help file, but that just seemed to confirm my interpretation. Then I thought, "there must be something wrong with the demo version of the software! This can't be right!" But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!

    I was stunned! I quickly went through the grieving process…denial…anger…reconciliation. Here was something that I thought was such a basic truth turned upside down. OK, granted, this wasn't disastrous. Our databases didn't suddenly grind to a halt. I didn't get calls late at night inquiring about a sudden downturn in performance. But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify!

    Yes, before someone else points it out: I know that there are "free" disk management tools built in to Windows that would have told me the same thing if I had only looked at them. I did not have to buy a fancy tool to tell me that. But the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but have actually never verified?
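    PS: For the record, the free check I skipped is a single command away; a sketch (run from an elevated command prompt; StartingOffset is reported in bytes, so compare it against your target alignment, commonly 64 KB or 1 MB):

        wmic partition get Name, Index, StartingOffset, Size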

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd-party libraries as well). Some of these libraries are interdependent and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built stuff in a certain order. There was no way to build only a single application, pick a revision, or group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot.

    The way I did this originally was to create entries for each of the applications we wanted to build (all 150-ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted, I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us either, considering that it took three times as long as the distributed build; I think he is just paranoid about ever breaking the build).
    Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration (a trimmed sketch of the dictionary-driven setup follows below). So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?
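    To make the dictionary-driven builder generation concrete, here is a heavily trimmed sketch of the approach (written against the buildbot 0.8-era API; the target names, commands and slave names are invented):

        # master.cfg fragment: builders and triggerable schedulers generated from a dictionary
        from buildbot.config import BuilderConfig
        from buildbot.process.factory import BuildFactory
        from buildbot.schedulers.triggerable import Triggerable
        from buildbot.steps.shell import ShellCommand
        from buildbot.steps.trigger import Trigger

        # One entry per application/library; 'deps' names entries that must be built first
        BUILD_TARGETS = {
            'CoreLib':  {'compiler': 'vs',  'deps': []},
            'LegacyIO': {'compiler': 'ibm', 'deps': []},
            'App1':     {'compiler': 'vs',  'deps': ['CoreLib']},
        }

        c = BuildmasterConfig = {'builders': [], 'schedulers': []}

        for name, target in BUILD_TARGETS.items():
            f = BuildFactory()
            # Build dependencies first by triggering their schedulers and waiting
            if target['deps']:
                f.addStep(Trigger(schedulerNames=['trigger-' + d for d in target['deps']],
                                  waitForFinish=True))
            # VSS has no native buildbot support, so source fetch and build are plain shell steps
            f.addStep(ShellCommand(command=['get_source.cmd', name],
                                   description='fetching ' + name))
            f.addStep(ShellCommand(command=['build.cmd', name, target['compiler']],
                                   description='building ' + name))
            c['builders'].append(BuilderConfig(name=name,
                                               slavenames=['slave-' + target['compiler']],
                                               factory=f))
            c['schedulers'].append(Triggerable(name='trigger-' + name,
                                               builderNames=[name]))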

    Read the article

  • Change The Windows 7 Start Orb the Easy Way

    - by Matthew Guay
Want to make your Windows 7 PC even more unique and personalized? Then check out this easy guide on how to change your start orb in Windows 7. Getting Started First, download the free Windows 7 Start Button Changer (link below), and extract the contents of the folder. It contains the app along with a selection of alternate start button orbs you can try out. Before changing the start button, we advise creating a system restore point in case anything goes wrong. Enter System Restore in your Start menu search, and select "Create a restore point". Please note: We tested this on both the 32-bit and 64-bit editions of Windows 7, and didn't encounter any problems or stability issues. That said, it is always prudent to make a restore point just in case a problem does happen. Click the Create button… Then enter a name for the restore point, and click Create. Changing the Start Orb Once this is finished, run the Windows 7 Start Button Changer as administrator by right-clicking on it and selecting "Run as administrator". Accept the UAC prompt that will appear. If you don't run it as an administrator, you may see the following warning. Click Quit, and then run again as administrator. You should now see the Windows 7 Start Button Changer. On the left it shows what your current (default) start orb looks like inactive, when hovered over, and when selected. Click the orb on the right to select a new start button. Here we browsed to the sample orbs folder, and selected one of them. Let's give Windows the Media Center orb for a start orb. Click the orb you want, and then select Open. When you click Open, your screen will momentarily freeze and your taskbar will disappear. When it reappears, your computer will have gone from the old, default Start orb style to your new, exciting Start orb! Here it is default, and glowing when hovered over. Now, the Windows 7 Start Orb Changer will change, and show your new Start orb on the left side. If you would like to revert to the default orb, simply click the folder icon to restore it. Or, if you would like to change the orb again, restore the original first and then select a new one. The orbs don't have to be round; here's a fancy Windows 7 logo as the start button. The start orb change will work in the Aero and Aero Basic (which Windows 7 Starter uses) themes, but will not show up in the classic, Windows 2000-style themes. Here's how the new start button looks with the Aero Classic theme: There are tons of orbs available, including this cute smiley, so choose one that you like to make your computer uniquely yours. Conclusion This is a cute way to make your desktop unique, and can be a great way to make a truly personalized theme. Let us know your favorite Start orb!
Links: Download the Windows 7 Start Button Changer | Find more Start orbs at deviantART

    Read the article

  • Laptop monitor stopped working and can't be re-enabled on a Dell Latitude E6410

    - by xektrum
I'm using Ubuntu 12.04 (upgraded from 11.10), and everything seemed to work fine until today, when my laptop monitor suddenly stopped working. Here are the facts: My laptop is a Dell Latitude E6410 with Intel graphics. The external monitor is attached through a docking station. Everything worked fine for about 6-7 months, then I upgraded to 12.04. The issue started today, about a week after the upgrade. I think the issue started after I ran Counter-Strike 1.6; both monitors blinked, and then only the attached monitor, which is connected through the docking station, continued to work. I thought at first it was a transient issue, but I've since rebooted and removed the battery, and the same happens. The laptop monitor and external monitor work fine up to the login screen, but after I log in the laptop monitor goes black. Whenever I try to re-enable the laptop monitor from Display Manager I get the errors "The selected configuration for displays could not be applied" and "could not set the configuration for CRTC 63". Not sure what technical details are required, but here are some: $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected (normal left inverted right x axis y axis) 1440x900 60.0 + 40.0 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis) $ tail /var/log/Xorg.0.log [ 8367.132] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.132] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy I'm using gnome-shell, and the only ways I've been able to get both displays working have been: 1) booting with the laptop disconnected from the dock and then re-attaching the external monitor with VGA instead of DVI, but that only worked for a session; 2) removing xserver-xorg-video-intel, but then gnome-shell is gone, as well as DRI. I would appreciate any suggestions. Regards, ============================= WORKAROUND FOUND ============================= So I have tried a few things, and here is what worked: I installed a newer version of xserver-xorg-video-intel (2.19 vs 2.17) from ppa:xorg-edgers/ppa. It didn't work at first (it was only showing low-graphics mode), so I tried a different linux-image, 3.0.0-19-generic-pae instead of 3.2.0-24-generic-pae (which I believe is the 12.04 Precise default), and then everything started to work again. Now I've installed 3.4.0-1-generic-pae from the same PPA and everything runs flawlessly, so I believe the issue is either with linux-image 3.2.0-24-generic-pae or with xserver-xorg-video-intel 2.17. Hope this helps someone in the future.
PS: Now xrandr shows multiple modes for my laptop monitor $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected 1440x900+1680+0 (normal left inverted right x axis y axis) 303mm x 189mm 1440x900 60.0*+ 59.9 40.0 1360x768 59.8 60.0 1152x864 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis)
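    A side note for anyone hitting the same CRTC error: before swapping drivers or kernels, it can be worth asking X to re-enable the panel directly, bypassing the Display Manager GUI. A small sketch using the eDP1 output name from the dumps above (my suggestion, not part of the original post):

```python
# Hypothetical helper: ask X to bring the internal panel (eDP1 per the xrandr
# output above) back up at its preferred mode, surfacing the error if the
# CRTC allocation fails again.
import subprocess

try:
    subprocess.check_output(['xrandr', '--output', 'eDP1', '--auto'],
                            stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    print('xrandr failed: %s' % e.output.decode(errors='replace'))
```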

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
On a freshly started system, free reports about 1.5G of used RAM (8G RAM altogether; Ubuntu 12.04 with lightdm and the Plasma desktop, one konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% available), everything else says differently. free -m, for example, reports at about day 7: total used free shared buffers cached Mem: 7459 7013 446 0 178 997 -/+ buffers/cache: 5836 1623 Swap: 9536 296 9240 (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports: kernel: [856738.020829] VFS: file-max limit 752838 reached So I closed those applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all the information I could think of (vmstat, free, smem, /proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue as to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this: total used free shared buffers cached Mem: 7459 2677 4781 0 62 419 -/+ buffers/cache: 2195 5263 Swap: 9536 59 9477 (~3.5GB freed) -- compare with the output after a fresh start: total used free shared buffers cached Mem: 7459 1483 5975 0 63 730 -/+ buffers/cache: 689 6769 Swap: 9536 0 9536 Two more helpful outputs are provided by smem -u. Shortly before the crash: User Count Swap USS PSS RSS mail 1 0 200 207 616 whoopsie 1 764 740 817 2300 colord 1 3200 836 894 2156 root 62 70404 352996 382260 569920 izzy 80 177508 1465416 1519266 1851840 After having X killed: User Count Swap USS PSS RSS mail 1 0 184 188 356 izzy 1 1400 708 739 1080 whoopsie 1 848 668 826 1772 colord 1 3204 804 888 1728 root 62 54876 131708 149950 267860 And after a restart, back in X: User Count Swap USS PSS RSS mail 1 0 212 217 628 whoopsie 1 0 1536 1880 5096 colord 1 0 3740 4217 7936 root 54 0 148668 180911 345132 izzy 47 0 370928 437562 915056 Edit: I just added two graphs from my monitoring system. Interesting to see: every time there's a "jump" in memory consumption, the CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I find something doing heavy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify the involved processes/applications? What steps could be taken to avoid this behaviour -- short of rebooting the machine every X days?
I ran 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days of uptime before rebooting, e.g. for kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I have been using for almost 20 years now, in a version which did fine on Hardy).
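    On the "how can I better identify involved processes" question, one low-tech approach is to snapshot per-process-name RSS from /proc at intervals and watch the deltas; a rough sketch (assumes Linux, usable from any user account):

```python
# Log the total resident set size per process name periodically, so whatever
# is growing can be spotted before the machine falls over.
import time, os, collections

def rss_by_name():
    totals = collections.Counter()
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            name, rss = None, 0
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    if line.startswith('Name:'):
                        name = line.split()[1]
                    elif line.startswith('VmRSS:'):
                        rss = int(line.split()[1])  # value is in kB
            if name:
                totals[name] += rss
        except (IOError, OSError):  # process exited while we were reading
            pass
    return totals

baseline = rss_by_name()
while True:
    time.sleep(3600)  # hourly snapshot; tune as needed
    snapshot = rss_by_name()
    print('--- top consumers vs. baseline ---')
    for name, kb in snapshot.most_common(10):
        print('%-24s %9d kB (%+9d)' % (name, kb, kb - baseline.get(name, 0)))
```

    This complements the smem output quoted above: a steadily positive delta on one name (e.g. the X server) over several days would point straight at the culprit.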

    Read the article

  • Professional Developers, may I join you?

    - by Ben
I currently work in technical support for a software/hardware company, and for the most part it's a good job, but it's feeling more and more like I'm getting 'stuck' here. No raises in the 5 years I've been here, and lately there seems to be more hiring from the outside than promotion from within. The work I do is more technical than end-user support, as we deal primarily with our field technicians, who have a little more technical skill than the general user base. As a result I get into much more technical support issues... often tracking down bugs in our software, finding performance bottlenecks in our database schema, etc. The work I'm most proud of is the development projects I've come up with on my own and worked on during lunch breaks and slow periods in Support. Over the years I've written a number of useful utilities for the company: diagnostic-type applications that several departments use and appreciate. These include apps that simulate our various hardware devices, log-file analyzers, time-saving utilities for our work processes, etc. My best projects have been the hardware simulation programs, which are the type of thing we probably wouldn't have put a full-time developer on had anyone thought to do it, but they've ended up being popular and useful enough to be used by development, QA, R&D, and Support. They allow us to interface our software with simulated hardware, rather than clutter up our work areas with bulky, hard-to-acquire equipment. Since starting here my life has moved forward (married, kid, one more on the way), but it feels like my career has not. I still earn what I earned walking in the door my first day. The company budget is tight, bonuses have gone down, and there are no raises or cost-of-living/inflation adjustments either. As the sole source of income for my family I feel I need to do more, and I'd like to have a more active role in creating something at work, not just cleaning up other people's mistakes. I enjoy technical work, and I think development is the next logical step in my career. I'd like to bring some "legitimacy" to my part-time development work, and make myself a more skilled and valuable employee. Ultimately, if this can help me better support my family, that would be ideal. Can I make the jump to professional developer? I have an engineering degree, but no formal education in computer science. I write WinForms apps using the .NET Framework, do some freelance web development, have volunteered to write software for a nonprofit, and have started experimenting with programming microcontrollers. I enjoy learning new things in the limited free time I have available. I think I have the aptitude to take on a development role, even in an 'apprentice' capacity if such an option is possible. Have any of you moved into development like this? Do any of you developers have any advice or cautionary tales? Are there better career options I haven't thought of? I welcome any and all related comments, and thank you in advance for posting them.

    Read the article

  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
I used the Ubuntu software updater and updated to the most recent packages. After the last update today, it's as if I have gone into recovery mode, but I haven't. I am running Ubuntu GNOME. First, everything looks like this: Switching to dark mode does nothing. Also, default applications do not work, such as Startup and the default screenshot application. Everything was working fine before the latest software update. System info: Ubuntu 14.04 LTS, GNOME Shell 3.10.4, kernel 3.13.0-29. I can't figure out how to get an update history, but this is almost a fresh install. It's about a week-old install, and this is the 3rd time I've used Ubuntu Software Update. I am running an AMD ATI HD6700 with the proprietary Catalyst drivers. I tried to provide all the information that I thought would be useful; if you need any more, please let me know. Edit - I believe something went wrong within these updates: Update Log: Start-Date: 2014-06-09 19:07:07 Commandline: aptdaemon role='role-commit-packages' sender=':1.68' Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2) Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1), gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01), libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1) End-Date: 2014-06-09 19:07:12 I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit Log Start-Date: 2014-06-09 18:59:06 Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386 Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic), libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic), nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic), libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1), libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic), libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic), libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1), libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic), nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic), libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic), libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386
(1.1.3-1, automatic), libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic), libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic), libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1) End-Date: 2014-06-09 18:59:11
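    For the "I can't figure out how to get an update history" part: on Ubuntu, the transaction log quoted above lives in /var/log/apt/history.log, with rotated copies gzipped alongside it. A minimal sketch that prints every recorded transaction in the same Start-Date/Install/Upgrade format:

```python
# Scan apt's history logs (including gzipped rotations) and print the
# interesting lines of each recorded transaction.
import glob
import gzip

def apt_history():
    for path in sorted(glob.glob('/var/log/apt/history.log*')):
        open_fn = gzip.open if path.endswith('.gz') else open
        with open_fn(path, 'rt') as log:
            for line in log:
                if line.startswith(('Start-Date:', 'Commandline:',
                                    'Install:', 'Upgrade:', 'Remove:')):
                    print(line.rstrip())

apt_history()
```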

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I had been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I'm going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for pre/post-deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type: The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: if you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves. I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily, and if that's the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to an Oracle-internal JIRA instance initially, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort: JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones. Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999. We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project. JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together. The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include: core-libs, client-libs, deploy, install, security-libs, other-libs, tools, and hotspot. For the libs areas, the primary name of the subcomponent will be the package of the API in question.
In the core-libs component, there will be subcomponents like java.lang, java.lang.class_loading, java.math, java.util, and java.util:i18n. In the tools component, subcomponents will primarily correspond to command names in $JDK/bin like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.
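    As a toy illustration of the numbering convention (my sketch, not Oracle's import code), the split between imported legacy bugs and newly filed ones is easy to express:

```python
# Legacy Sun bug numbers are preserved under the JDK- prefix, and bugs filed
# after the migration are allocated from 8000000 upward, per the post above.
def jira_key(bug_number):
    return 'JDK-%d' % bug_number

def is_legacy_import(bug_number):
    return 1000000 <= bug_number <= 7999999

assert jira_key(4040458) == 'JDK-4040458'   # the example cited in the post
assert is_legacy_import(4040458)
assert not is_legacy_import(8000000)        # first post-migration number
```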

    Read the article

  • Oracle Employees Support New World Record for IYF Children's Hour

    - by Maria Sandu
960 students 'crouched', 'touched' and 'set' under the watchful eye of international rugby referee Alain Rolland, and supported by Oracle employees, to successfully set a new world record for the World's Largest Scrum and raise funds and awareness for the Irish Youth Foundation. Last year Oracle employees supported the Irish Youth Foundation by donating funds from their payroll through the Giving Tree Appeal. We were the largest corporate donor to the IYF, raising €3075. To acknowledge our generosity, the IYF asked Oracle Leadership in Society team members to participate in their most recent campaign, which was to break the Guinness Book of Records by forming the World's Largest Rugby Scrum. This was a wonderful opportunity for Oracle's Leadership in Society to promote the charity, support education and make a mark in the Corporate Social Responsibility field. The students who formed the scrum also gave up their lunch money and raised a total of €3000. This year we hope Oracle employees will once again support the IYF, with the challenge to match that amount. On the 24th of October the sun shone down on the streaming lines of students entering the field. 480 students were decked out in bright red Oracle T-shirts against the other 480 in blue and white jerseys - all ready to form a striking scrum. Ryan Tubridy, the host of the event, made the opening announcement, and with the blow of a whistle the scrum began. 960 students locked tight together, with the Leinster players also at each side. Leinster manager Matt O'Connor was there along with presenters Ryan Tubridy and George Hook to assist with getting the boys in line and keeping the shape of the scrum. In accordance with Guinness Book of Records rules, the ball was fed into the scrum properly by Ireland and Leinster scrum-half Eoin Reddan, and was then passed out the line to his Leinster team mates, including Ian Madigan, Brendan Macken and Jordi Murphy, also proudly sporting the Oracle T-shirt. The new world record was made, everyone gave a big cheer, and thankfully nobody got injured! Thank you to everyone in Oracle who donated last year through the Giving Tree Appeal. Your generosity has gone a long way to support local youth groups. Last year's donation was so substantial that the IYF were able to spread it across two youth groups: The first was the Ballybough Youth Project in Dublin. The funding gave them the chance to take 24 young people from their project away from the inner city and the problems and issues they face in their daily lives, on a trip to the Cavan Centre to spend a weekend in a safe and comfortable environment; a very rare holiday in these young people's lives. The second was the Rahoon Family Centre, which used the money to help secure the long-term sustainability of their project. They act as an educational/social/fun project that has been working with disadvantaged children for the past 16 years. Their aim is to change young people's futures with fun/social education, supporting them so they can maximize their creativity and potential. We hope you can help support this worthy cause again this year, so keep an eye out for the Children's Hour and Giving Tree Appeal! About the Irish Youth Foundation The IYF provides opportunities for marginalised children and young people facing difficult and extreme conditions to experience success in their lives. It passionately believes that achievement starts with opportunity.
The IYF’s strategy is based on providing safe places where children can go after school; to grow, to learn and to play; and providing opportunities for teenagers from under-served communities to succeed and excel in their lives. The IYF supports innovative grassroots projects operated by dedicated professionals who understand young people and care about them. This allows the IYF to focus on supporting young people at risk of dropping out of school and, in particular, on the critical transition from primary to secondary school; and empowering teenagers from disadvantaged neighborhoods to become engaged in their local communities. Find out more here www.iyf.ie

    Read the article

  • Book Review: Oracle ADF 11gR2 Development Beginner's Guide

    - by Grant Ronald
Packt Publishing asked me to review Oracle ADF 11gR2 Development Beginner's Guide by Vinod Krishnan, so on a couple of long flights I managed to get through the book in a couple of sittings. One point to make clear before I go into the review: having authored "The Quick Start Guide to Fusion Development: JDeveloper and Oracle ADF", I've written a book which covers the same topic and beginner level. I also think it's worth stating up front that I applaud anyone who has gone through the effort of writing a technical book. So well done, Vinod. But on to the review: The book itself is a good breakdown of topic areas. Vinod starts with a quick tour around the IDE, which is an important step given that all the work you do will be through the IDE. The book then goes through the general path that I tend to always teach: a quick overview demo, ADF BC, validation, binding, UI, task flows and then the various "add-on" topics like security, MDS and advanced topics. So it covers the right topics in, IMO, the right order. I also think the writing style flows nicely: it's a relatively easy book to read, it doesn't get too formal, and the "Have a go hero" hands-on sections will be useful for many. That said, I did pick out a number of styles/themes in the writing that I found went against the idea of a beginner's guide. For example, in writing my book, I tried carefully to avoid talking about topics not yet covered or not yet relevant at that point in someone's learning. So, if I was a new ADF developer reading this book, did I really need to know about ADFBindingFilter and the DataBindings.cpx file on page 58? I've only just learned how to do a simple drag-and-drop application, so showing me XML configuration files relevant to the JSF/ADF lifecycle is probably going to scare me off! I found this in a couple of places. For example, the security chapter starts on page 219, but by page 222 (and most of the preceding pages are hands-on steps) we're diving into web.xml, weblogic.xml, adf-config.xml, jsp-config.xml and jazn-data.xml. Don't get me wrong, I'm not saying you shouldn't know this, but I feel you have to give people a strong grounding in the concepts before showing them implementation files. Having just learned what ADF Security is, is "The initialization parameter remove.anonymous.role is set to false for the JpsFilter filter as this filter is the first filter defined in the file" really going to help me? The other theme I found that I felt didn't work was that a couple of the chapters descended into a reference guide. For example, page 159 onwards basically lists UI components and their properties, and page 87 onwards lists the attributes of ADF BC in pretty much the same way as the online help or developer guide. I have a personal aversion to any sort of help that says pretty much what the attribute name is, e.g. "Precision Rule: this option is used to set a strict precision rule", or "Property Set: this is the property set that has to be applied to the attribute". Hmmm, I think I could have worked that out myself; what I would want to know in a beginner's guide is what these are for and what I might use them for... and if I don't need to use them to create an emp/dept example, then maybe it's better to leave them out. All that said, would the book help me? Yes, it would. It's obvious that Vinod knows ADF, his style is relatively easy going, and the book covers all that it has to, but I think the book could have done a better job on the educational side of guiding beginners.

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >