Search Results

Search found 7833 results on 314 pages for 'happy coding'.


  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better:

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
    ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
    ld: guidance: -z lazyload option recommended before first dependency
    ld: guidance: -B direct or -z direct option recommended before first dependency
    Undefined                       first referenced
     symbol                             in file
    getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    ld: warning: symbol referencing errors
    ld: guidance: -z defs option recommended for shared objects
    ld: guidance: removal of unused dependency recommended: libelf.so.1
    warning: Text relocation remains                referenced
        against symbol                  offset      in file
    .rodata1 (section)                  0xa         hello.o
    getpid                              0x4         hello.o
    printf                              0xf         hello.o
    ld: guidance: position independent (PIC) code recommended for shared objects
    ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

    % cat mapfile.v2
    # This version 2 mapfile will not trigger a guidance message
    $mapfile_version 2
    % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
    ld: guidance: -B direct or -z direct option recommended before first dependency
    ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects and, once that work is done, to keep them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Silverlight Cream for May 12, 2010 -- #860

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov(-2-), Mike Snow(-2-, -3-), Paul Sheriff, Fadi Abdelqader, Jeremy Likness, Marlon Grech, and Victor Gaudioso. Shoutouts: Andy Beaulieu has a cool WP7 game up and is looking for opinions/comments: Droppy Pop: A Windows Phone 7 Game Karl Shifflett has code and video tutorials up for the app he wrote for the WPF LOB tour he just did: Stuff – WPF Line of Business Using MVVM Video Tutorial From SilverlightCream.com: Flipping panels I had missed this 3rd part of the CompleteIT explanation. In this post Miroslav Miroslavov describes the page flipping they're doing. Great explanation and all the code included. Flying objects against you The 4th part of the CompleteIT explanation is blogged by Miroslav Miroslavov where he is discussing the screen elements 'flying toward' the user. Silverlight Tip of the Day #17 – Double Click Mike Snow's Tip of the Day 17 is showing how to implement mouse double-clicks either for an individual control or for an entire app. Silverlight Tip of the Day #18 – Elastic Scrolling In Mike Snow's Tip of the Day 18, he's talking about and showing some 'elastic' scrolling in his image viewer application. He's asking for opinions and suggestions. Silverlight Tip of the Day #19 – Using Bing Maps in Silverlight Mike Snow's Tips are getting more elaborate :) ... Number 19 is about using the BingMap control in your Silverlight app. Control to Control Binding in WPF/Silverlight Paul Sheriff demonstrates control to control binding... saving a bunch of code behind in the process. Project included. Your First Step to the Silverlight Voice/Video Chatting Client/Server Fadi Abdelqader has a post up at CodeProject using the WebCam and Mic features of Silverlight 4 to setup a voice & video chatting app. MVVM Coding by Convention (Convention over Configuration) Jeremy Likness discusses Convention over Configuration and gives up some good MVVM nuggets along the way... check out his nice long post and grab the source for the project too... and also check out the external links he has in there. MEFedMVVM changes >> from cool to cooler Marlon Grech has refactored MEFedMVVM, and in addition is working with other MVVM framework folks to use some of the same MEF techniques in theirs... code on CodePlex New Silverlight Video Tutorial: How to Create a Silverlight Paging System to Load new Pages In Victor Gaudioso's latest video tutorial he builds a ContentHolder UserControl that will load any page on demand into your MainPage.xaml Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code in a simple test harness application, so I was confident in the code itself, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed to be a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected (a minimal sketch of such a declaration is shown after the links below).

        Category         | Description                                                                                                    | Toolbox Group
        Assert           | Internal Use Only                                                                                              | Advanced
        Conversion       | Converts characters to and from numerics and converts numbers from one base to another.                       | Conversion
        Count            | Internal Use Only                                                                                              | Advanced
        Cumulative       | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
        DatabaseExtract  | Internal Use Only                                                                                              | Database
        DatabaseLookup   | Internal Use Only                                                                                              | Database
        DateTime         | Adds date, time, date and time, or add days to a specified date, in output data.                              | Date/Time
        ExistenceLooping | Internal Use Only                                                                                              | Advanced
        Index            | Internal Use Only                                                                                              | Advanced
        Iteration        | Internal Use Only                                                                                              | Advanced
        Keymatch         | Internal Use Only                                                                                              | Advanced
        Logical          | Controls conditional behavior of other functoids to determine whether particular output data is created.      | Logical
        Looping          | Internal Use Only                                                                                              | Advanced
        MassCopy         | Internal Use Only                                                                                              | Advanced
        Math             | Performs specific numeric calculations such as addition, multiplication, and division.                        | Mathematical
        NilValue         | Internal Use Only                                                                                              | Advanced
        Scientific       | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions.      | Scientific
        Scripter         | Internal Use Only                                                                                              | Advanced
        String           | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim.  | String
        TableExtractor   | Internal Use Only                                                                                              | Advanced
        TableLooping     | Internal Use Only                                                                                              | Advanced
        Unknown          | Internal Use Only                                                                                              | Advanced
        ValueMapping     | Internal Use Only                                                                                              | Advanced
        XPath            | Internal Use Only                                                                                              | Advanced

    Links
        http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
        http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
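    To make the fix concrete, here is a rough, illustrative sketch of a custom functoid registered under the String category rather than XPath. This is only an editor's sketch, not code from the post: the class name, resource IDs and method name are invented, but the BaseFunctoid members used (ID, Category, SetExternalFunctionName, and so on) are the standard ones from the Microsoft.BizTalk.BaseFunctoids namespace.

        using System.Reflection;
        using Microsoft.BizTalk.BaseFunctoids;

        // Hypothetical custom functoid registered under FunctoidCategory.String,
        // which is usable by custom functoids, instead of the internal-only XPath category.
        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid() : base()
            {
                this.ID = 6001;   // custom functoid IDs should be 6000 or higher

                // Resource names below are assumptions for this sketch.
                SetupResourceAssembly("MyFunctoids.Properties.Resources", Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                this.Category = FunctoidCategory.String;   // the change that made the functoid fire

                SetMinParams(1);
                SetMaxParams(1);
                SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "LookupValue");

                AddInputConnectionType(ConnectionType.AllExceptRecord);
                OutputConnectionType = ConnectionType.AllExceptRecord;
            }

            // Invoked by the generated map at run time.
            public string LookupValue(string key)
            {
                // A real implementation would look the value up in a config file via XPath.
                return key;
            }
        }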

    Read the article

  • Uninstalling with Ubuntu Software Center doesn't work on Ubuntu 12.04.1 64bit

    - by likethesky
    Not sure if I'm doing something wrong, or if the .deb package I'm installing is broken in some way (I've built it, using NetBeans 7.2), or if indeed this is a bug in Software Center. When I install this particular 32-bit .deb on Ubuntu 10.04 LTS--all updates applied--(where it was built), GDebi shows it and has an 'Uninstall' button next to it. So it works fine to uninstall it there, via the GDebi GUI. However, when I install it on 12.04.1 LTS--all updates applied--it installs fine, but then does not show up in Ubuntu Software Center as available to be uninstalled. No combination of searching finds it. However, I can from the command line, do sudo apt-get purge javafxapplication1 and it finds it and deletes it. The same thing happens when I build a 64-bit .deb and attempt to install it to the same (64-bit AMD) or a different 64-bit Ubuntu 12.04.1 system. So it seems to be isolated to this NetBeans-generated .deb and the 64-bit AMD build (though I haven't tried it on a 32-bit 12.04.1 install yet). These are all on VirtualBox VMs, btw, if that matters. Any way to clean up my Software Center and see if it's something I've done to get it in this state? Could this behavior be due to how this particular .deb has been built? (It doesn't have an 'Installed-Size' control field, so I do get the "Package is of bad quality" warning when I install it--which I do by clicking 'Ignore and install' button.) If you want all the gory details about why this happening--a bug has been reported against NetBeans for this behavior here: http://javafx-jira.kenai.com/browse/RT-25486 (EDIT: Just to be clear, the app installs fine, runs fine, all works as intended--I just can't get that 'bad package' message to go away, and now... I also can't uninstall it via Software Center, but rather, need to use sudo apt-get purge to uninstall it, after it installs.) Thanks for any pointers. I'm happy to report this as a bug against Ubuntu Software Center/Centre too, if that's what it seems to be, just tell me where to do so (a link). I'm a relative Ubuntu, NetBeans, and JavaFX newbie, though a long-time programmer. If I report it as a bug, I'll try it on the 32-bit build of 12.04.1 as well. Also, if I should add any more detail to the bug reported against NetBeans above, let me know--or feel free to add it yourself to the bug report above, if you would like. Thanks again!

    Read the article

  • What does the `dmesg` error: "composite sync not supported" mean?

    - by M. Tibbits
    Question: I see [ 20.473125] composite sync not supported and several such entries when I run dmesg. What do they mean? Background: I'm trying to debug a problem where my laptop won't suspend. Since acpi seems happy and I can suspend easily from the command line, I've turned to tracking down all boot-up errors/warnings. So I run dmesg | grep not and, amongst other shtuff, I get: 728:[ 17.267120] composite sync not supported 733:[ 18.009061] composite sync not supported 740:[ 18.159289] registered panic notifier 749:[ 18.162500] vga16fb: not registering due to another framebuffer present 757:[ 18.598251] composite sync not supported 776:[ 20.473125] composite sync not supported 777:[ 20.932266] composite sync not supported 778:[ 28.350231] composite sync not supported 779:[ 28.924913] composite sync not supported 780:[ 35.480658] composite sync not supported And the full log for the few lines right around that first appearance (line 728) is listed at the bottom of my post (I'd happily include anything else). Any ideas what could be causing this? I've read several sites: Ubuntuforums #1 IRC Chat #1 One post talks about ??Adobe flash?? causing this error? Some others also suggest that it might be an nvidia related problem, but I've got a Dell Latitude D630 with an integrated Intel graphics -- so nvidia isn't the problem. [ 17.207142] phy0: Selected rate control algorithm 'minstrel' [ 17.207833] Registered led device: b43-phy0::tx [ 17.207849] Registered led device: b43-phy0::rx [ 17.207865] Registered led device: b43-phy0::radio [ 17.207927] Broadcom 43xx driver loaded [ Features: PL, Firmware-ID: FW13 ] [ 17.267120] composite sync not supported [ 17.415795] EXT4-fs (sda2): mounted filesystem with ordered data mode [ 17.602131] [drm] initialized overlay support [ 17.620201] input: DualPoint Stick as /devices/platform/i8042/serio1/input/input7 [ 17.641192] input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/platform/i8042/serio1/input/input8 [ 18.009061] composite sync not supported [ 18.106042] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af: clean. [ 18.108115] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff: clean. [ 18.108941] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff: clean. [ 18.109676] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7: clean. [ 18.110356] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean. 
[ 18.159286] fb0: inteldrmfb frame buffer device [ 18.159289] registered panic notifier [ 18.160218] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:01/input/input9 [ 18.160286] ACPI: Video Device [VID1] (multi-head: yes rom: no post: no) [ 18.160334] ACPI Warning for \_SB_.PCI0.VID2._DOD: Return Package has no elements (empty) (20090903/nspredef-433) [ 18.160432] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:02/input/input10 [ 18.160491] ACPI: Video Device [VID2] (multi-head: yes rom: no post: no) [ 18.160539] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 18.162494] vga16fb: initializing [ 18.162497] vga16fb: mapped to 0xc00a0000 [ 18.162500] vga16fb: not registering due to another framebuffer present [ 18.176091] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 18.176123] HDA Intel 0000:00:1b.0: setting latency timer to 64 [ 18.285752] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input11 [ 18.312497] input: HDA Intel Mic at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 18.312586] input: HDA Intel HP Out at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13 [ 18.328043] usbcore: registered new interface driver ndiswrapper [ 18.460909] Console: switching to colour frame buffer device 180x56 [ 18.598251] composite sync not supported

    Read the article

  • Create Shortcuts for Your Favorite or Most Used Folders in Ubuntu

    - by Asian Angel
    Do you have certain folders that you access often each day but are only available through the Places Menu or Nautilus? See how easy it is to create shortcuts for your desktop and taskbar with our quick tutorial. To get started open Nautilus and locate the folders that you want to make new shortcuts for. For our example we chose Ubuntu One. Right click on the chosen folder and select Make Link. Your new shortcut will appear with the text Link to “Folder Name” and an Arrow Shortcut Marker attached. If you are happy with your new shortcut as is, then drag it to your desktop or taskbar as desired. We created the shortcut twice in our example…once for the desktop and once for the taskbar. For our example we decided to customize the taskbar shortcut a bit. To customize your shortcut right click on the shortcut and select Properties. Note: The desktop shortcut is limited on the amount you can customize it (name change and addition of up to four emblems to the folder). From here you can rename the shortcut and change the icon as desired. A quick name change and new icon made a huge improvement in how our taskbar shortcut looked. Note: The link for the icon we used is shown below. A little touch-up to our desktop shortcut and both are looking good. Download the Ubuntu Cloud Icon *Icon is 128*128 pixels and comes in .png format.

    Read the article

  • Star rating not showing in rich snippets

    - by Danny R
    We've recently been doing a lot of work on our site's SEO (www.betterthanreviews.com). We recently did a push to update the rich snippets breadcrumb, meta description, and star rating. After giving Google some time to index the site, it has updated the breadcrumbs and meta descriptions for our review pages, but the stars are still not showing. This is currently how it appears on a Google search (link to the actual page: http://www.betterthanreviews.com/home-security/livewatch): This is what the Rich Snippets is supposed to look like, and how it appears in Google's testing tool:

    More context: As seen in our html, we are using schema.org language. We initially were using schema.org/Corporation for the site, but we now have the page labeled as schema.org/HomeAndConstructionBusiness because Google will not show star ratings for the Corporation language. However, in our Webmaster Tools, the Structured Data is still showing the Corporation language, which could be a potential issue.

    Here is a look at some of the coding that we used. But it can be looked at closer by inspecting the element:

        <div class="aggregate-rating" itemprop="aggregateRating" itemscope="" itemtype="http://schema.org/AggregateRating">
            <div class="review row_fluid" itemprop="review" itemscope="" itemtype="http://schema.org/Review">
                <div class="row_fluid rating" itemprop="reviewRating" itemscope="" itemtype="http://schema.org/Rating">
                    <meta content="4.5" itemprop="ratingValue" title="4.5 out of 5 stars" class="star-rating-readonly">
                    <meta content="2013-12-05" itemprop="datePublished">
                    <p class="review-headline" itemprop="headline">Way better than my previous system</p>
                    <div>
                        <p class="reviewer" itemprop="author">Scott H. </p>
                        <span class="bullet">•</span>
                        <p class="created_at">2 months ago</p>
                        <p class="content" itemprop="description">I love it! The experience I have had so far is extremely positive. I had another alarm system before and I didn't like it but this one is really nice. I am telling everybody about it.</p>
                    </div>
                </div>

    Any suggestions for how to fix this?

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    It’s time for another certification, and we’ve just release the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before on other certifications, so I thought I would do the same for this one. I’ll also need to take exam 70-513 and 70-516; but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests - just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far, red means I haven’t. First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books: Introducing Microsoft .NET (Pro-Developer): http://www.amazon.com/Introducing-Microsoft-Pro-Developer-David-Platt/dp/0735619182/ref=sr_1_1?s=books&ie=UTF8&qid=1296339237&sr=1-1 Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: http://www.amazon.com/Head-First-2E-Real-World-Programming/dp/1449380344/ref=sr_1_1?ie=UTF8&qid=1296339176&sr=8-1 Microsoft Visual C# 2008 Step by Step : http://www.amazon.com/Microsoft-Visual-2008-Step/dp/0735624305/ref=sr_1_1?s=books&ie=UTF8&qid=1296339208&sr=1-1 c The first place to start is at the official site for the certification. That’s here: http://www.microsoft.com/learning/en/us/Exam.aspx?ID=70-583&Locale=en-us c On that page you’ll find several resources, and the first you should follow is the “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics. c Designing Data Storage Architecture (18%) Books I’m Reading: Links: My Notes: c Optimizing Data Access and Messaging (17%) Books I’m Reading: Links: My Notes: c Designing the Application Architecture (19%) Books I’m Reading: Links: My Notes: c Preparing for Application and Service Deployment (15%) Books I’m Reading: Links: My Notes: c Investigating and Analyzing Applications (16%) Books I’m Reading: Links: My Notes: c Designing Integrated Solutions (15%) Books I’m Reading: Links: My Notes:

    Read the article

  • Repository and Ticket management in a Windows Environment

    - by saifkhan
    I’ve been using AxoSoft’s bug tracking application for a while. Although it is an excellent piece of software, I had some issues with it:

    · It was SLOOOW (both desktop and web). I don’t care what AxoSoft says; I tried multiple servers, etc. I’ve been in this field long enough to tell you when something is not right with an app.
    · The cost! It’s not feasible for a small team.

    I must say, though, that they have some nice features which are not commonly found in other bug tracking software. I won’t list any here; I would prefer you download and try their app and see for yourself. In my quest to find a replacement, I tried a few. The successor had to satisfy the following:

    · A 99.99% Windows environment.
    · Bug tracking.
    · Ticket management (power users and project managers can open tickets on projects).
    · Repository (I decided to merge bug tracking and the repository to get my team to be more productive).
    · Unlimited users.
    · Cost.

    Being the head of IT security for the firm I work for, the decision to move data offsite was a hard one to make, but it turned out to be one I am not regretting so far. My choice came down to Atlassian JIRA and CodebaseHQ. I ended up going with the latter… (I still love GreenHopper from Atlassian… it’s freaking cool!) CodebaseHQ is nice and simple and has all the features I needed. I’ve been using it for a few months now and am very happy. Their pricing… well, see for yourself. I was also able to get our SVN data over to CodebaseHQ (yes, SVN! I don’t go near the Visual SourceSafe thing… it’s not that safe, pardon the pun. I am hearing some nice things about TFS 2010). We use VisualSVN to access repositories. So if you are a Windows developer (or team), CodebaseHQ is worth checking out!

    Read the article

  • Where and how to mention Stackoverflow participation in the résumé?

    - by Sandeepan Nath
    I think I have good enough reputation on SO now - here is my profile - http://stackoverflow.com/users/351903/sandeepan-nath. Well, this may not be that much as compared to so many other users out there but I am happy with mine. So, I was thinking of adding my profile link on my résumé. (Just the profile link and not that "I have this much reputation on SO"). Those who haven't seen, can see this question Would you put your stackoverflow profile link on your CV / Resume?. How would this look like? Forums/Blogs/Miscellaneous others No blogging as yet but active participant in Stackoverflow. My profile link - http://stackoverflow.com/users/351903/sandeepan-nath I think of putting this section after Project Details and Technical Expertise sections. Any tips/advice? Thanks Update MKO has made a very good point - "do you really want a potential employeer to be able to evaluate in detail everything you've ever written on SO". I thought of commenting but it would be too long - In my questions/answers I put a lot of statements like - "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that... ?", "I doubt if it is possible to ..." etc. when I am not sure about something and I rarely involve in fights with other users. However I do argue on topics sometimes if I feel it is necessary and if I have a valid point. I do accept my mistakes and apologize for the same. As we all know nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved in the quality of content by comparing old content with new one? Isn't that great? I also try to go back to older questions/answers and put corrective comments etc. when I feel I was wrong or if I can improve my post. Of course there are many employers who want you (potential employees) to be correct each and every time. They immediately remove you from consideration when you say a single incorrect thing. I have personally met such an interviewer few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is do you really care to work with such people? Or do you like those people who give value to the fact that you are striving to improve every day. I personally prefer the latter.

    Read the article

  • Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?

    - by Jason McKenna
    I want to learn Flash Builder 4 (Flex) because I see so many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use FLEX over Flash Pro? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads, and Flex is for programmers and RIAs blah blah... this simply isn't so from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a flex developer can build, but using Flash Pro. I can build powerful AS3-driven apps for the desktop, mobile device, or browser, and I can link to databases with XML and I can import text files and communicate with ColdFusion and everything. The advantage with Flash Pro is that I can also easily and clearly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon? Who is happy with a drag-n-drop button? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it? It takes me back to Visual Basic class. Seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why when I can do it all nicely in Flash Pro to begin with. Am I clueless, or missing some major piece of the puzzle? Thanks for any clarity. PS, I couldn't care less about the code editors. It ain't that bad people. They make it out like the thing doesn't even respond to keyboard input or something. Does everything I need it do anyways. Please help out here. If I just don't need to learn it, I don't want to waste the time.

    Read the article

  • Programmatically updating one update panel elements from another update panel elements

    - by Jalpesh P. Vadgama
    While interviewing ASP.NET candidates I often ask this question, but most people are not able to answer it, so I decided to write a blog post about it. Here is the scenario: there are two update panels in my HTML markup. In the first update panel there is a textbox, txtHelloWorld, and in the second update panel there is a button called btnHelloWorld. Now I want to update the textbox text in the button click event without a full postback. In the normal scenario this will not work, because the two controls are in different update panels. Here is the markup for that:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="server" EnableCdn="true"></asp:ScriptManager>
            <asp:UpdatePanel ID="firstUpdatePanel" runat="server" UpdateMode="Conditional">
                <ContentTemplate>
                    <asp:TextBox ID="txtHelloWorld" runat="server"></asp:TextBox>
                </ContentTemplate>
            </asp:UpdatePanel>
            <asp:UpdatePanel ID="secondUpdatePanel" runat="server" UpdateMode="Conditional">
                <ContentTemplate>
                    <asp:Button ID="btnHelloWorld" runat="server" Text="Print Hello World" onclick="btnHelloWorld_Click" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Here comes the magic! Lots of people don't know that the UpdatePanel control provides an Update method, with which we can programmatically refresh another update panel's contents without a full postback. Below is the code for that:

        protected void btnHelloWorld_Click(object sender, System.EventArgs e)
        {
            txtHelloWorld.Text = "Hello World!!!";
            firstUpdatePanel.Update();
        }

    That's it! Here I have updated the firstUpdatePanel from code. Hope you liked it. Stay tuned for more. Happy programming!
    Technorati Tags: UpdatePanel, ASP.NET
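    As a small, hedged extension of the snippet above (the class name is an assumption; the control names come from the markup), the full code-behind might look like the sketch below. The extra guard is optional, but it documents the two things that usually trip people up: Update() only works when the target panel's UpdateMode is Conditional, and on a full (non-async) postback the whole page refreshes anyway, so the call is unnecessary.

        using System;
        using System.Web.UI;

        // Hypothetical code-behind for the page shown above; the txtHelloWorld and
        // firstUpdatePanel fields are generated by the designer from the markup.
        public partial class UpdatePanelDemo : Page
        {
            protected void btnHelloWorld_Click(object sender, EventArgs e)
            {
                // Change content that lives in the *other* update panel.
                txtHelloWorld.Text = "Hello World!!!";

                // Update() requires UpdateMode="Conditional" on firstUpdatePanel;
                // with UpdateMode="Always" it throws an InvalidOperationException.
                ScriptManager sm = ScriptManager.GetCurrent(this);
                if (sm != null && sm.IsInAsyncPostBack)
                {
                    firstUpdatePanel.Update();   // refresh the first panel from the second panel's click
                }
            }
        }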

    Read the article

  • Does Your Customer Engagement Create an Ah Feeling?

    - by Richard Lefebvre
    An (Oracle CX Blog) article by Christina McKeon Companies that successfully engage customers all have one thing in common. They make it seem easy for the customer to get what they need. No one would argue that brands don’t want to leave customers with this “ah” feeling. Since 94% of customers who have a low-effort service experience will buy from that company again, it makes financial sense for brands.1 Some brands are thinking differently about how they engage their customers to create ah feelings. How do they do it? Toyota is a great example of using smart assistance technology to understand customer intent and answer questions before customers hit the submit button online. What is unique in this situation is that Toyota captures intent while customers are filling out email forms. Toyota analyzes the data in the form and suggests responses before the customer sends the email. The customer gets the right answer, and the email never makes it to your contact center — which makes you and the customer happy. Most brands are fully aware of chat as a service channel, but some brands take chat to a whole new level. Beauty.com, part of the drugstore.com and Walgreens family of brands, uses live chat to replicate the personal experience that one would find at high-end department store cosmetic counters. Trained beauty advisors, all with esthetician or beauty counter experience, engage in live chat sessions with online shoppers to share immediate advice on the best products for their personal needs. Agents can watch customer activity online and determine the right time to reach out and offer help, just as help would be offered in a brick-and-mortar store. And, agents can co-browse along with the customer helping customers with online check-out. These personal chat discussions also give Beauty.com the opportunity to present products, advertise promotions, and resolve customer issues when they arise. Beauty.com converts approximately 25% of chat sessions into product orders. Photobox, the European market leader in online photo services, wanted to deliver personal and responsive service to its 24 million members. It ensures customer inquiries on personalized photo products are routed based on agent knowledge so customers get what they need from the company experts. By using a queuing system to ensure that the agent with the most appropriate knowledge handles the query, agent productivity increased while response times to 1,500 customer queries per day decreased. A real-time dashboard prevents agents from being overloaded with queries. This approach has produced financial results with a 15% increase in sales to existing customers and a 45% increase in orders from newly referred customers.

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue the accuracy of the rendered surface should increase, so in short "fewer 45-degree slopes, more rolling hills" type mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. OK, so firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet from the image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no 2 islands come out the same, and it's purely random so not based on noise, but still: how can it look so smooth in one image and so not smooth in the other?
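    For reference, the "move the vertex along the edge" step that the smooth result depends on is usually a linear interpolation between the two corner positions, weighted by how far the isolevel sits between the two corner densities. Below is a minimal sketch of that helper (adapted from the classic marching-cubes write-ups, not taken from the implementation linked above); Vector3 is assumed to be XNA's Microsoft.Xna.Framework.Vector3 or any struct with X/Y/Z floats. If an implementation skips this step and always emits the edge midpoint, boolean and float densities will indeed produce identical, blocky output.

        using System;
        using Microsoft.Xna.Framework;   // assumed source of Vector3 for this sketch

        static class MarchingCubesHelpers
        {
            // Places the triangle vertex on the edge p1-p2 at the point where the
            // interpolated density crosses the isolevel, instead of at the midpoint.
            public static Vector3 VertexInterp(float isoLevel, Vector3 p1, Vector3 p2, float valP1, float valP2)
            {
                const float eps = 0.00001f;

                // Degenerate cases: a corner already sits on the surface, or both corners
                // have (nearly) the same density, so any point on the edge is as good as another.
                if (Math.Abs(isoLevel - valP1) < eps) return p1;
                if (Math.Abs(isoLevel - valP2) < eps) return p2;
                if (Math.Abs(valP1 - valP2) < eps) return p1;

                // Fraction of the way from p1 to p2 at which the density equals isoLevel.
                float mu = (isoLevel - valP1) / (valP2 - valP1);

                return new Vector3(
                    p1.X + mu * (p2.X - p1.X),
                    p1.Y + mu * (p2.Y - p1.Y),
                    p1.Z + mu * (p2.Z - p1.Z));
            }
        }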

    Read the article

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not dump to much frustration here... I been running 12.04 since alpha with the ATI open source drivers and the gnome 3 desktop. I been generally very happy with them with only small issues along the way. Now of course it does not support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble bundle would not play. OK, no worries, the ATI driver is in the repos so let me have a go I thought. With all this testing that's been done with multi-monitor support, what could go wrong...? How I use my computer: It's laptop, with a HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), multi desktop environment. What happening now: installed drivers things seemed to working, save some small other bugs (not critical) this morning I take my machine and plug the additional monitor into it, and nothing happens... ok fine. open "displays" try configure dual display, won't work open ati config "thing" (cause it is a thing, a crap thing) and set-up monitors there reboot it says (oh ffs, really.... ok) reboot, login and wow, I got a gnome 2 desktop (presume gnome 3 fall back) and no multi-monitor...great. (screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7 ) after getting into a situation where I had to Ctrl + Alt + Del to get out of a frozen display, I eventually manage to set-up a single display desktop on the "main" monitor ok.. time to go home... unplug monitor... nothing happens.. oh boy here we go... try displays again, nothing, just hangs the display.. great. crash all the apps and reboot... So it's been a trying day... What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that: allows me run fglrx (so I can run the games I want) allows me to hot-plug a monitor to my laptop and remove it again allows me to change the display so include the hot-plugged monitor (preferable automatically like it did with the open drivers) Next best if that's not possible: switch between laptop only display and monitor only display easily (i.e. not having to reboot/logout/suspened etc) Really appreciate the time of anyone that has a solution. Thanks in advance. Regards Brendan PS: I guess I should file a bug about this too, so some direction as to the best place to file this would be appreciated too.

    Read the article

  • Ubuntu on Pandaboard giving me troubles

    - by Jeroen Jacobs
    I'm trying to install the OMAP4 extras for ubuntu on my pandaboard. For some reason, a few packages can't seem to be agree with eachother. This what I did so far: installed on Ubuntu 11.10 on sd card Powered on Pandaboard and let it finish it's initial install Did an "apt-get update" and "apt-get upgrade", to install updates So far, everything went fine, and I was quite happy with my Pandaboard, but then I made the mistake of typing this: apt-get install ubuntu-imap4-extras At first, everything seemed ok, and it started downloading and installing. But then after a while it just crashed. I tried it again but then it gave me this: Reading package lists... Done Building dependency tree Reading state information... Done ubuntu-omap4-extras is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: gstreamer0.10-openmax : Depends: gstreamer0.10-plugins-bad but it is not going to be installed gstreamer0.10-plugin-ducati : Depends: gstreamer0.10-plugins-bad but it is not going to be installed ubuntu-omap4-extras-multimedia : Depends: gstreamer0.10-plugins-bad (>= 0.10.22-2ubuntu4+ti1.5) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I tried to the suggestion: apt-get -f install: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: gstreamer0.10-plugins-bad The following NEW packages will be installed: gstreamer0.10-plugins-bad 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. 88 not fully installed or removed. Need to get 0 B/1,794 kB of archives. After this operation, 4,571 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 143575 files and directories currently installed.) Unpacking gstreamer0.10-plugins-bad (from .../gstreamer0.10-plugins bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb) ... dpkg: error processing /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb (--unpack): trying to overwrite '/usr/lib/libgstbasecamerabinsrc-0.10.so.0.0.0', which is also in package gstreamer0.10-plugins-good 0.10.30-1ubuntu7.1 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Seems like two packages (plugins-good and plugins-bad) are fighting over the same library. Any idea on how to fix this??

    Read the article

  • 2010 is gone and Welcome 2011

    - by anirudha
    I spent the last days of my week at Firozabad. The town is quite small and close to Agra, so I did not want to miss the chance to see the Taj Mahal and the Red Fort; it was my first opportunity to see them, and I made a plan to go to Agra last Saturday. First I went to the Red Fort, where I talked with many foreigners. They enjoyed talking with me, because usually the only person with them is their guide, who is like a book: he can tell you everything about the place because you have paid him, but you cannot really have a conversation with him. Visitors come from many countries, such as Germany, Japan, Russia and Italy, and I had no problem talking with them; they seemed happy to talk with me too. When I had finished seeing the Red Fort, I noticed a girl who looked like a foreigner. I asked where she was from and she told me France. I had never asked any girl for friendship before, not even in school or college, but I decided to ask her to be my friend. She accepted, and I put my email ID in her hand as she left, though I still have not received her mail. Second, I went to the Taj Mahal. That experience was not so good; I spent three or four hours in the rush. I found there was not much real security even though there were many army personnel: they all work very slowly, spending ten minutes to check a person, their hands moving like a low-configuration computer. I talked to many people there too, including a man named Jacob from Chicago, who spoke so fast that I could not follow what he was saying. I also had a problem with some Chinese visitors: when I tried to talk with them, I found they spoke only Chinese. Wish you a very, very happy new year.

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in the SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username 'Me', password 'NowEverybodyKnowsMyPassword'". As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like: Server1,UserName,Password Server2,UserName,Password". I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me, I have to do my own marketing, after all) and the solution was finally ready. First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and one for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions. Next, we have to create a txt file with your encrypted passwords:
        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append
    In the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your usernames and passwords, all neatly encrypted, with the server names, usernames and passwords separated by commas. Decryption is actually much simpler:
        Read-EncryptedString -InputString $EncryptString -password "YourPassword"
    (Just remember that the password you use to decrypt must be exactly the same one that was used to encrypt.)
    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on a system database: it's just a case of split, decrypt and be happy!
        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }
    This is why I love Powershell.

    Read the article

  • links for 2011-01-04

    - by Bob Rhubart
    Webcasts (tags: ping.fm) Five Key Trends in Enterprise 2.0 for 2011 (Oracle Enterprise 2.0 Blog) Kellsey Ruppel shares insight from Oracle's Andy MacMillan. (tags: oracle otn enterprise2.0) Victor Bax: Lost in Service Oriented Architecture? "SOA is a concept, no more, no less. SOA is not a technology, or a piece of software. It is an architecture, a model." - Victor Bax (tags: oracle soa) Jan-Leendert: Oracle 11g SOA Suite read multi record data from csv file with the file adapter (master-detail) "The file adapter is a very powerful tool to read files with structured data. Most of the time you will read simple csv files with one record per row. But what if your csv file contains multiple records with different types?" - Jan-Leendert (tags: oracle soa soasuite) @myfear: Five ways to know how your data looked in the past. Entity Auditing. "Whatever requirements you have. I can promise you, that it will never be a simple solution. In general it's best to evaluate your purpose for auditing in detail." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java) @fteter: Buffing Up The Crystal Ball "While I'm already tired of seeing these types of posts (I'm writing on New Year's Day), I'm also feeling guilty about not making my own set of predictions." - Oracle ACE Director Floyd Teter (tags: oracle otn oracleace ec2 cloud fusionmiddleware) @bex: ECM New Year's Resolutions "Happy new year! Most people use the first post of the year to go over their own blog statistics of popular posts... but since my blog's fiscal year ends in April, I decided to do New Year's resolutions instead." - Oracle ACE Director Bex Huff (tags: oracle otn oracleace ecm enterprise2.0) Izaak de Hullu: Embedded Java in a 11g BPEL process "In an earlier blog my colleague Peter Ebell explained how you can create an extension of com.collaxa.cube.engine.ext.BPELXExecLet to do your coding in a regular Java environment so you have code completion and validation..." - Izaak de Hullu (tags: oracle otn bpel java soa) @gschmutz: Cannot access EM console after installing SOA Suite 11g PS2 Oracle ACE Director Guido Schmutz encounters a problem and shares the solution. (tags: oracle otn oracleace soa soasuite)

    Read the article

  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (a legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them: They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors. It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much anymore, but there's still always the question of how well they're supported. Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance. I've gotten by without any need for them at all... sometimes I think that they're extraneous. Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including: Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole. Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability). "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most cases they're used for today)? My question is this: Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves they won't be enough to convince me.
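    To make the trade-off concrete, here is a small illustrative sketch in Python (a made-up example rather than code from any particular codebase): the same password rule written once with lookahead assertions and once as several trivial checks. The rule and the names are purely for illustration.

        import re

        # Rule: at least 8 characters, containing at least one digit and one uppercase letter.
        # Lookahead version: both conditions are expressed in a single pattern.
        single_pattern = re.compile(r"^(?=.*\d)(?=.*[A-Z]).{8,}$")

        def valid_with_lookahead(s):
            return bool(single_pattern.match(s))

        def valid_with_simple_checks(s):
            # Alternative: each rule is its own trivial test -- arguably easier to read and maintain.
            return (len(s) >= 8
                    and re.search(r"\d", s) is not None
                    and re.search(r"[A-Z]", s) is not None)

        for candidate in ["Secret123", "alllowercase1", "NoDigitsHere"]:
            # Both approaches agree on every input; the difference is purely one of readability.
            assert valid_with_lookahead(candidate) == valid_with_simple_checks(candidate)
            print(candidate, valid_with_lookahead(candidate))

    Which of the two reads better is exactly the judgement call the question is about: the lookahead version is shorter and runs in one pass, while the explicit version spells out each rule separately.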

    Read the article

  • Where and how to mention Stackoverflow participation in the résumé?

    - by Sandeepan Nath
    I think I have a good enough reputation on SO now - here is my profile - http://stackoverflow.com/users/351903/sandeepan-nath. Well, this may not be much compared to so many other users out there, but I am happy with mine. So, I was thinking of adding my profile link to my résumé (just the profile link, and not "I have this much reputation on SO"). Those who haven't seen it can see this question: Would you put your stackoverflow profile link on your CV / Resume?. This is how it would look: Forums/Blogs/Miscellaneous others: No blogging as yet, but an active participant in Stackoverflow. My profile link - http://stackoverflow.com/users/351903/sandeepan-nath. I'm thinking of putting this section after the Project Details and Technical Expertise sections. Any tips/advice? Thanks. Update: MKO has made a very good point - "do you really want a potential employer to be able to evaluate in detail everything you've ever written on SO". I thought of commenting, but it would be too long. In my questions/answers I use a lot of statements like "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that... ?", "I doubt if it is possible to ..." etc. when I am not sure about something, and I rarely get involved in fights with other users. However, I do argue about topics sometimes if I feel it is necessary and I have a valid point. I accept my mistakes and apologize for them; as we all know, nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved the quality of my content by comparing the old content with the new? Isn't that great? I also try to go back to older questions/answers and add corrective comments etc. when I feel I was wrong or can improve my post. Of course, there are many employers who want you (potential employees) to be correct each and every time; they immediately remove you from consideration when you say a single incorrect thing. I personally met such an interviewer a few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is: do you really want to work with such people? Or do you prefer people who give value to the fact that you are striving to improve every day? I personally prefer the latter.

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I’ve been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration.  One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to “get latest” against the source repository and immediately build with no errors.  This can break down both in an automated build scenario and a “new guy” scenario when the solution has external dependencies that may not be present in the build environment. The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called “Reference Items”.  I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well. That gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day. This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention.  The issue that I ran into is that with Silverlight (at least version 4), the behavior of the “Add Reference” command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine, it cannot be found at compile time and the build will fail. To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file).  Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath.  Here’s a before and after example of the component that ultimately led me to the investigation behind this post:
    Before:
        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />
    After:
        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
          <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
        </Reference>
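    If there are many projects to patch, the same edit can also be scripted rather than done by hand. The sketch below is only an illustration of that idea, not part of the original example: the project file name, assembly name and hint path are placeholders you would adapt. It uses Python's xml.etree.ElementTree to add a HintPath element to a matching Reference in a .csproj, taking the MSBuild namespace into account.

        import xml.etree.ElementTree as ET

        NS = "http://schemas.microsoft.com/developer/msbuild/2003"    # standard MSBuild project namespace
        ASSEMBLY_NAME = "Telerik.Windows.Controls"                    # placeholder: the reference to fix up
        HINT_PATH = r"..\Reference Items\Telerik.Windows.Controls.dll"

        def add_hint_path(csproj_path):
            ET.register_namespace("", NS)             # keep the default namespace when writing the file back
            tree = ET.parse(csproj_path)
            for ref in tree.getroot().iter(f"{{{NS}}}Reference"):
                include = ref.get("Include", "")
                if include.split(",")[0].strip() == ASSEMBLY_NAME:
                    if ref.find(f"{{{NS}}}HintPath") is None:   # only add the element if it is missing
                        hint = ET.SubElement(ref, f"{{{NS}}}HintPath")
                        hint.text = HINT_PATH
            tree.write(csproj_path, xml_declaration=True, encoding="utf-8")

        add_hint_path("MyApp.csproj")                 # placeholder project file

    Note that ElementTree rewrites the file with its own whitespace, so treat this as a starting point for a pre-build check rather than a drop-in tool.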

    Read the article

  • What if you could work on anything you wanted?

    - by red@work
    This week we've downed our tools and organised ourselves into small project teams or struck out alone. We're working on whatever we like, with whoever we like, wherever we like. We've called it Down Tools week and so far it's a blast. It all started a few months ago with an idea from Neil, our CEO. Neil wanted to capture the excitement, innovation, and productivity of Coding by the Sea and extend this to all Red Gaters working in Product Development. A brainstorm is always a good place to start for an "anything goes" project. Half of Red Gate piled into our largest meeting room (it's pretty big) armed with flip charts, post-its and a heightened sense of possibility. An hour or so later our SQL Servery walls were covered in project ideas. So what would you do if you could work on anything you wanted? Many projects are related to tools we already make, others are for internal product development use and some are, well, just something completely different. Someone suggested we point a webcam at the SQL Servery lunch queue so we can check it before heading to lunch. That one couldn't wait for Down Tools Week. It was up and running within a few days and, even better, it captures the table tennis table too. Thursday is the Show and Tell - I am looking forward to seeing what everyone has come up with. Some of the projects will turn into new products or features, so this probably isn't the time or place to go into detail about what is being worked on. Rest assured, you'll hear all about it! We're making a video as we go along too, which will be up on our website soon. In the meantime, all meetings are cancelled, we've got plenty of food in and people are being very creative with the £500 expenses budget (Richard, do you really need an iPad?). It's brilliant to see it all coming together from the idea stage to reality. Catch up with our progress by following #downtoolsweek on Twitter. Who knows, maybe a future Red Gate flagship tool is coming to life right now? By the way, it's business as usual for our customer-facing and internal operations teams. Hmm, maybe we can all down tools for a week and ask Product Development to hold the fort? Post by: Alice Chapman

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx
    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use Custom Lists instead of Form Libraries. I thought everything would be smooth sailing as we are using all OOTB features. So, we customised a Custom List form using InfoPath and added a few Rich Text Boxes (spell check is a requirement for this specific project). All was good in the InfoPath client, including the spell checker, so, happy days, I published straight away. Here come the surprises. I browsed to my Custom List and clicked Add New Item. This launched my form in a modal dialog. I went to my Rich Text Boxes to check the spell checker and, voila, it's disabled! I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked back to my old blog post. I also tried placing the spell check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launched the Spell Check dialog but did not spellcheck the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with the experiments: I logged a ticket with Microsoft Premier Support. On a call with the Support Engineer, I browsed through the Custom List and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft! Anyway, to cut my story short, here is a summary of my solution:
    1. Navigate to your Custom List.
    2. In the Ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This displays newifs.aspx, which is the page shown when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts; in this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part (a blank CEWP will do for this example).
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a Rich Text Box. Tadah! The Spell Check tab is now enabled!
    6. Do the same steps for the (Item) Edit Form to enable spell checking when editing an item.
    This "no code" solution was discovered purely by accident!

    Read the article

  • ODI 11g - Cleaning control characters and User Functions

    - by David Allan
    In ODI, user functions have a poor name really; they should be called user expressions - a way of wrapping common expressions that you may wish to reuse many times - and working across many different technologies is an added bonus. To illustrate, look at the problem of how to remove control characters from text. Users ask these types of questions across all technologies - Microsoft SQL Server, Oracle, DB2 - and have for many years: how do I clean a string, how do I tokenize a string, and so on. After some searching around you will find a few ways of doing this; in Oracle there is a convenient way using the TRANSLATE and REPLACE functions. So you can convert some text using the following SQL:
        replace(
          translate('This is my string'||chr(9)||' which has a control character',
                    chr(3)||chr(4)||chr(5)||chr(9),
                    chr(3) ),
          chr(3), '' )
    If you had many columns to perform this kind of transformation on, the natural solution in the Oracle database would be to code this as a PL/SQL function, since you don't want the code splattered everywhere. But if someone then tells you that another control character needs to be added, that equals a maintenance headache, and coding it as a PL/SQL function will incur a context switch between SQL and PL/SQL which could prove costly. In ODI, user functions let you capture this expression text and reference it many times across your mappings. This protects the expression from being copy-pasted by developers and makes maintenance much simpler - you change the expression definition in one place. First, define a name and a syntax for the user function; I am calling it UF_STRIP_BAD_CHARACTERS and it has one parameter, an input string. We can then define an implementation for each technology we will use it on; I will define Oracle's using the inputString parameter and the TRANSLATE and REPLACE functions with whatever control characters I want to replace. I can then use this inside mapping expressions in ODI; below I am cleaning the ENAME column - a fabricated example, but you get the gist. Note that when I use the user function, the function name remains in the text of the mapping; the actual expression is not substituted until I generate the scenario. If you generate and export the scenario, you can have a peek at the code that is processed at runtime - below you can see a snippet of my exported scenario. That's all for now - hopefully a useful snippet of info.
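    For anyone who wants to sanity-check what the expression does outside the database, here is a rough Python analogue (an illustration only, not part of ODI or of the expression above): the same set of control characters is mapped to nothing with str.translate, which plays the role of the TRANSLATE/REPLACE pair.

        # Strip the same control characters as chr(3)||chr(4)||chr(5)||chr(9) in the SQL expression.
        BAD_CHARS = "\x03\x04\x05\x09"
        STRIP_TABLE = str.maketrans("", "", BAD_CHARS)   # map each bad character to None, i.e. delete it

        def strip_bad_characters(text):
            return text.translate(STRIP_TABLE)

        sample = "This is my string\twhich has a control character"
        print(strip_bad_characters(sample))              # the tab has been removed

    The point of the user function is the same in either language: keep the list of characters to strip in exactly one place.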

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >