Search Results

Search found 8785 results on 352 pages for 'multi mark'.

Page 72 of 352

  • Diving into the RichTextBox (Silverlight TV #31)

    Mark Rideout, Program Manager on the Silverlight product team, joins John to dive deep into many of the RichTextBox control's features. Mark has worked on the text aspects of Silverlight since the first version. Here are just a few of the areas that Mark covers: an overview of RichTextBox vs. TextBlock and TextBox for rich content; wire-up logic for applying formatting; inline UI elements; using text position to point for simple and complex operations; basic "position..."

    Read the article

  • Define outgoing ip address when using ssh

    - by Mark
    I have an Ubuntu Server machine (12.04) with 4 IP addresses for different websites that require unique SSL certificates. I sometimes ssh out from this box, and the box I am connecting to needs to know which IP address I will be coming from. How do I specify which of the 4 IP addresses to use as my outgoing address? Judging from ifconfig, connections currently seem to go out from the last IP address. I guess you would want to specify either the address or the interface. Thanks in advance! -Mark
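
    A minimal sketch, assuming the box runs OpenSSH: the outgoing address can be pinned per connection with -b, or per host with BindAddress in ~/.ssh/config. The address below is a placeholder for one of the four configured IPs.

        # one-off: bind this connection to a specific local address
        ssh -b 192.0.2.10 user@remotehost

        # persistent: per-host entry in ~/.ssh/config
        Host remotehost
            BindAddress 192.0.2.10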

    Read the article

  • 12.04 Server- No Such Partition After Adding HDD

    - by Mark
    12.04 Server installed. I physically added a 1TB drive to the system and at boot I'm now getting: "GRUB loading. error: no such partition. grub rescue>" Any thoughts/suggestions? Mark EDIT: Once I create a partition on the new drive (with GParted from a LiveCD), I get a blinking cursor at boot and nothing else. EDIT: Unplugged the first drive and tried to install on the 2nd (1TB vs. 120GB). When creating a partition I get "Incorrect metadata area header checksum" in the virtual console (F4).
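
    Since adding a disk often reorders the BIOS drive numbering, one hedged recovery sketch from the grub rescue prompt is to find the partition that actually holds /boot/grub, boot from it manually, and then reinstall GRUB from the running system. The device and partition names below are guesses; adjust them to what ls reports.

        grub rescue> ls                              # list drives/partitions GRUB can see
        grub rescue> set root=(hd0,msdos1)           # the partition containing /boot/grub
        grub rescue> set prefix=(hd0,msdos1)/boot/grub
        grub rescue> insmod normal
        grub rescue> normal
        # once booted into Ubuntu, make the fix permanent:
        sudo grub-install /dev/sda && sudo update-grub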

    Read the article

  • Why can't Perl's DBD::DB2 find dbivport.h during installation?

    - by Liju Mathew
    We are using a Perl utility to dump data from DB2 database. We installed DBI package and it is asking for DBD package also. We dont have root access and when we try to install DBD package we are getting the following error: ERROR BUILDING DB2.pm [lijumathew@intblade03 DBD-DB2-1.78]$ make make[1]: Entering directory '/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c Running Mkbootstrap for DBD::DB2::Constants () chmod 644 Constants.bs rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c In file included from DB2.h:22, from DB2.xs:7: dbdimp.h:10:22: dbivport.h: No such file or directory make: *** [DB2.o] Error 1 How do we fix this? Do we need root access to resolve this?
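
    One hedged workaround, no root required: dbivport.h is shipped by DBI itself (alongside Driver.xst in its auto/DBI directory), so copying it into the DBD-DB2 build tree usually lets the compile proceed, and the finished module can then be installed under the home directory. The paths below are illustrative only.

        # locate the header that the installed DBI provides
        find /usr/lib/perl5 -name dbivport.h 2>/dev/null

        # copy it next to dbdimp.h in the DBD-DB2 source directory
        cp /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI/dbivport.h \
           ~/lperl/perlsrc/DBD-DB2-1.78/

        # build and install into a private directory instead of the system tree
        perl Makefile.PL PREFIX=~/lperl
        make && make install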

    Read the article

  • DBI::DBD package not getting installed for Perl?

    - by Liju Mathew
    Hi, We are using a Perl utility to dump data from DB2 database. We installed DBI package and it is asking for DBD package also. We dont have root access and when we try to install DBD package we are getting the following error. ERROR BUILDING DB2.pm [lijumathew@intblade03 DBD-DB2-1.78]$ make make[1]: Entering directory '/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c Running Mkbootstrap for DBD::DB2::Constants () chmod 644 Constants.bs rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants' gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c In file included from DB2.h:22, from DB2.xs:7: dbdimp.h:10:22: dbivport.h: No such file or directory make: *** [DB2.o] Error 1 How to fix this? Do we need root access to resolve this? Appreciate the help in advance. Thanks, Mathew Liju

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (again: I don't want to install another version of mysql on the box, want to use the version it came with). Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits, this is the first error reported). Checking if your kit is complete... Looks good Unrecognized argument in LIBS ignored: '-pipe' Note (probably harmless): No library found for -lmysqlclient Multiple copies of Driver.xst found in: /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907 Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ Writing Makefile for DBD::mysql cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c In file included from dbdimp.c:20: dbdimp.h:22:49: error: mysql.h: No such file or directory dbdimp.h:23:45: error: mysqld_error.h: No such file or directory dbdimp.h:25:49: error: errmsg.h: No such file or directory
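
    A hedged sketch: DBD::mysql's Makefile.PL accepts --mysql_config, --cflags and --libs, so if a set of MySQL headers matching the bundled server is placed anywhere on disk, the build can be pointed at it without installing a second MySQL server. The header path and library flags below are placeholders to verify against the local install.

        # build against explicitly supplied include and lib flags
        perl Makefile.PL \
            --cflags="-I/path/to/matching/mysql/include" \
            --libs="-L/usr/lib -lmysqlclient -lz -lm"
        make && make test && sudo make install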

    Read the article

  • How to install DBD::mysql on OS X server?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (again: I don't want to install another version of mysql on the box, want to use the version it came with). Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits, this is the first error reported). Checking if your kit is complete... Looks good Unrecognized argument in LIBS ignored: '-pipe' Note (probably harmless): No library found for -lmysqlclient Multiple copies of Driver.xst found in: /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907 Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ Writing Makefile for DBD::mysql cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c In file included from dbdimp.c:20: dbdimp.h:22:49: error: mysql.h: No such file or directory dbdimp.h:23:45: error: mysqld_error.h: No such file or directory dbdimp.h:25:49: error: errmsg.h: No such file or directory

    Read the article

  • jquery fade an element in when a link is clicked and then swap the element when another link is clic

    - by Nik
    I have worked out how to fade an element in: Click here to view the page If you click on the heading Posture 1 : Standing Deep Breathing : you will notice the element fades in as it should. If you now click on posture 2 you will see the element fades in below posture 1. I need to be able to swap posture 1 with posture 2. I have a total of 26 postures that all have images that need to fade in and then be swapped with another image when another heading is clicked. $(document).ready(function(){ $('#section_Q_01,#section_Q_02').hide(); $('h5.trigger#Q_01').click(function(){ $('#section_Q_01').fadeIn(2000) ; }); $('h5.trigger#Q_02').click(function(){ $('#section_Q_02').fadeIn(5000) ; }); }); and the html <div id="section_Q_01" class="01"> <div class="pics"> <img src="../images/multi/poses/pose1/Pranayama._01.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._02.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._03.jpg"/> </div> </div> <div id="section_Q_02" class="02"> <div class="pics"> <img src="../images/multi/poses/pose2/Half_Moon_Pose_04.jpg" /> <img src="../images/multi/poses/pose2/Backward_Bending_05.jpg" /> <img src="../images/multi/poses/pose2/Hands_to_Feet_Pose_06.jpg" /> </div> </div> I need to be able to swap a total of 26 elements #section_Q_01 - #section_Q_26 Any help appreciated
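
    Rather than wiring 26 separate handlers, one hedged approach is a single handler keyed off the heading's id, fading out whichever section is currently visible before fading in the requested one. The selectors assume the existing Q_NN naming.

        $(document).ready(function () {
            $('div[id^="section_Q_"]').hide();            // hide all sections up front

            $('h5.trigger').click(function () {
                var target = $('#section_' + this.id);    // h5 id "Q_01" -> "#section_Q_01"
                var visible = $('div[id^="section_Q_"]:visible').not(target);

                if (visible.length) {
                    visible.fadeOut(400, function () {    // swap: fade the old one out, then the new one in
                        target.fadeIn(2000);
                    });
                } else {
                    target.fadeIn(2000);                  // first click: nothing to swap out
                }
            });
        });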

    Read the article

  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, European WCM vendor, today announced the next version of its WCM solution AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reducing ongoing costs for managing and distributing to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net Axinom’s WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage delivery of rich media assets on the Web and mobile devices. Axinom solutions are widely used by major European online brands in IT, telco, retail, media and entertainment industries such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post. Brand New User Interface built with Silverlight 4By using Silverlight 4, Axinom’s team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom’s team worked with Silverlight Rough Cut Editor for video management and Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. “Axinom’s release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4,” said Brian Goldfarb, director of the developer platform group at Microsoft Corp. “Microsoft is excited about the opportunity this creates for Web developers to streamline the creating, managing and distributing of online corporate content using AxCMS.net 10 and Silverlight.” Rapid Web Development with Visual Studio 2010AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that help kick-start projects and learn best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles in an out-of-the-box manner by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence. Extended Multi-Site ManagementAxCMS.net has been supporting the management of an unlimited number of Web sites for a long time. The new version 10 of AxCMS.net will further improve multi-site management and provide features to editors and developers that will simplify and accelerate multi-site and multi-language management. Extended publication workflow will take into account additional dependencies of dynamic objects, pages, and documents. “The customer requests evolved from static html pages to dynamic Web applications content with the emergence of rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the.NET Framework 4 and Silverlight 4, we’re on the fast track to making the three screen strategy a reality for our customers,” said Damir Tomicic, CEO of Axinom Group. “Our customers enjoy substantial competitive advantages of using latest Microsoft technologies. 
We have a long-standing relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future.”

    Read the article

  • How do I install Citrix receiver?

    - by krondor
    Has anyone managed to get the Citrix receiver client working in 64-bit Natty (11.04). It seems libmotif4 won't install multi-arch (32 bit and 64 bit libraries). I also see crazy dependency errors despite the libraries being present. Here is what I received initially when trying to install icaclient.deb from Citrix; sudo dpkg -i Downloads/icaclient_11.100_i386.patched.deb dpkg: error processing Downloads/icaclient_11.100_i386.patched.deb (--install): package architecture (i386) does not match system (amd64) Errors were encountered while processing: Downloads/icaclient_11.100_i386.patched.deb I then installed the 32 bit libraries. sudo apt-get install ia32-libs ia32-libs-gtk Then I noticed libmotif4 (a dependency of the citrix client) wasn't present so I installed the 64bit version. sudo apt-get install libmotif4 I then tried to force the 32 bit version; sudo dpkg --force-architecture -i Downloads/libmotif4_2.3.3-5_i386.deb dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64) Selecting previously deselected package libmotif4:i386. dpkg: error processing Downloads/libmotif4_2.3.3-5_i386.deb (--install): libmotif4:i386 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:amd64 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed So I uninstalled the 64 bit version and tried to install the 32 bit version. This worked, but when I attempt to install Citrix I enter dependency hell. sudo dpkg --force-architecture -i Downloads/icaclient_11.100_i386.patched.deb dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64) Selecting previously deselected package icaclient:i386. (Reading database ... 183036 files and directories currently installed.) Unpacking icaclient:i386 (from .../icaclient_11.100_i386.patched.deb) ... dpkg: dependency problems prevent configuration of icaclient:i386: icaclient:i386 depends on libc6 (>= 2.3). icaclient:i386 depends on libice6 (>= 1:1.0.0). icaclient:i386 depends on libsm6. icaclient:i386 depends on libx11-6. icaclient:i386 depends on libxaw7. icaclient:i386 depends on libxext6. icaclient:i386 depends on libxmu6. icaclient:i386 depends on libxp6. icaclient:i386 depends on libxpm4. icaclient:i386 depends on libxt6. dpkg: error processing icaclient:i386 (--install): dependency problems - leaving unconfigured Errors were encountered while processing: icaclient:i386 So the state it's in, there's no icaclient binaries installed only the docs. It is complaining about libraries that are indeed present (libc6 64 bit and 32 bit). libmotif4 is only 32 bit installed and won't install alongside libmotif4 64 bit. libmotif4 error when you try to install it alongside the 32bit instance; libmotif4:amd64 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:i386 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed Any tips?
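
    One hedged workaround from that era (no guarantee, and superseded once multiarch support and 64-bit icaclient packages arrived) was to repack the vendor .deb with a relaxed Architecture field so dpkg accepts it on amd64, relying on ia32-libs for the 32-bit runtime:

        # unpack the vendor package, relax its Architecture field, and repack
        dpkg-deb -x icaclient_11.100_i386.patched.deb ica
        dpkg-deb --control icaclient_11.100_i386.patched.deb ica/DEBIAN
        sed -i 's/^Architecture: .*/Architecture: all/' ica/DEBIAN/control
        dpkg-deb -b ica icaclient_11.100_all.deb
        sudo dpkg -i icaclient_11.100_all.deb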

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach, basically based on signals, or simply "async" as many papers and languages such as the new C# 5.0 call it, with a "companion thread" that manages the policy of your pipeline; and a concurrent, multi-threading approach. I will just say that I'm thinking about the hardware and the worst-case scenario here, and I have tested these 2 paradigms myself. The async paradigm is enough of a winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive and just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it stays responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to get computed. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that newer CPUs have more than 2 cores is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice: with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, from how my data structures are used to joining multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where a context switch is probably more expensive than the computation itself?
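
    For reference, a minimal C# 5 sketch of the async style described above: the awaiting method gives its thread back while the I/O is outstanding, so no pool of blocked worker threads is needed just to wait. The class and method names are illustrative.

        using System.Net.Http;
        using System.Threading.Tasks;

        class Downloader
        {
            // The calling thread is released at the await; completion resumes
            // via a continuation rather than a dedicated blocked thread.
            public static async Task<int> GetLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }
        }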

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles setup, people ranked against profiles, margin targets setup, margins measured, distances setup, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results. --- Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one between twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. 
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in a different database. Let’s see how this works with an example. The code comments say what’s going on.

        USE master
        GO
        CREATE DATABASE TestTxMark1
        GO
        USE TestTxMark1
        GO
        CREATE TABLE TestTable1(ID INT, VALUE UNIQUEIDENTIFIER)
        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable1
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
        FROM master..spt_values
        ORDER BY RN
        SELECT * FROM TestTable1
        GO
        -- TAKE A FULL BACKUP of the database
        BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
        GO

        USE master
        GO
        CREATE DATABASE TestTxMark2
        GO
        USE TestTxMark2
        GO
        CREATE TABLE TestTable2(ID INT, VALUE UNIQUEIDENTIFIER)
        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable2
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
        FROM master..spt_values
        ORDER BY RN
        SELECT * FROM TestTable2
        GO
        -- TAKE A FULL BACKUP of our database
        BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
        GO

        -- start a marked transaction that modifies both databases
        BEGIN TRAN TxDb WITH MARK
            -- update values from NULL to random value
            UPDATE TestTable1 SET VALUE = NEWID();
            -- update first 100 values from random value
            -- to NULL in different DB
            UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
        COMMIT
        GO

        -- some time goes by here
        -- with various database activity...

        -- We see two entries for marks in each database.
        -- This is just informational and has no bearing on the restore itself.
        SELECT * FROM msdb..logmarkhistory

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark1
        GO

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark2
        GO

        -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn' WITH RECOVERY,
            -- recover to state before the transaction
            STOPBEFOREMARK = 'TxDb';
            -- recover to state after the transaction
            -- STOPATMARK = 'TxDb';
        GO

        -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn' WITH RECOVERY,
            -- recover to state before the transaction
            STOPBEFOREMARK = 'TxDb';
            -- recover to state after the transaction
            -- STOPATMARK = 'TxDb';
        GO

        USE TestTxMark1
        -- we restored to time before the transaction
        -- so we have NULL values in our table
        SELECT * FROM TestTable1

        USE TestTxMark2
        -- we restored to time before the transaction
        -- so we DON'T have NULL values in our table
        SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.

    Read the article

  • Question about POP3 message termination octet

    - by user361633
    This is from the POP3 RFC. "Responses to certain commands are multi-line. In these cases, which are clearly indicated below, after sending the first line of the response and a CRLF, any additional lines are sent, each terminated by a CRLF pair. When all lines of the response have been sent, a final line is sent, consisting of a termination octet (decimal code 046, ".") and a CRLF pair. If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF". When examining a multi-line response, the client checks to see if the line begins with the termination octet. If so and if octets other than CRLF follow, the first octet of the line (the termination octet) is stripped away. If so and if CRLF immediately follows the termination character, then the response from the POP server is ended and the line containing ".CRLF" is not considered part of the multi-line response." Well, I have a problem with this: for example, Gmail sometimes sends the termination octet and then sends the CRLF pair in the NEXT line. For example: "+OK blah blah" "blah blah." "\r\n" That's very rare, but it happens sometimes, and in that case I'm obviously unable to determine the end of the message, because I'm expecting a line that consists of '.\r\n'. Seriously, is Gmail violating the POP3 protocol, or am I doing something wrong? I also have a second question; English is not my first language, so I cannot completely understand this part: "If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF"." When exactly is CRLF.CRLF used? Can someone give me a simple example? The RFC says it is used when any line of the response begins with the termination octet, but I don't see any lines that start with '.' in the messages that are terminated with CRLF.CRLF. I checked that. Maybe I don't understand something; that's why I'm asking.
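
    On the first point, one likely explanation (hedged) is that the terminating octets can be split across TCP segments, so the '.', CR and LF may arrive in separate reads; the client has to buffer until the full CRLF.CRLF sequence is present rather than inspect each read on its own. A small Python sketch of the termination check and the dot-unstuffing described by the RFC:

        CRLF = b"\r\n"
        TERMINATOR = b"\r\n.\r\n"

        def is_complete(buf: bytes) -> bool:
            """A multi-line POP3 response ends only when CRLF.CRLF has arrived."""
            return buf.endswith(TERMINATOR)

        def unstuff(buf: bytes) -> list:
            """Drop the terminator and strip one leading '.' from stuffed lines."""
            lines = buf[:-len(TERMINATOR)].split(CRLF)
            return [ln[1:] if ln.startswith(b".") else ln for ln in lines]

        # A content line that really starts with '.' is sent as '..' by the server.
        raw = b"first line\r\n..starts with a dot\r\n.\r\n"
        assert not is_complete(b"first line\r\n.")   # a lone '.' is not the end yet
        assert is_complete(raw)
        print(unstuff(raw))  # [b'first line', b'.starts with a dot']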

    Read the article

  • Dependency Injection Introduction

    - by MarkPearl
    I recently was going over a great book called “Dependency Injection in .Net” by Mark Seeman. So far I have really enjoyed the book and would recommend anyone looking to get into DI to give it a read. Today I thought I would blog about the first example Mark gives in his book to illustrate some of the benefits that DI provides. The ones he lists are Late binding Extensibility Parallel Development Maintainability Testability To illustrate some of these benefits he gives a HelloWorld example using DI that illustrates some of the basic principles. It goes something like this… class Program { static void Main(string[] args) { var writer = new ConsoleMessageWriter(); var salutation = new Salutation(writer); salutation.Exclaim(); Console.ReadLine(); } } public interface IMessageWriter { void Write(string message); } public class ConsoleMessageWriter : IMessageWriter { public void Write(string message) { Console.WriteLine(message); } } public class Salutation { private readonly IMessageWriter _writer; public Salutation(IMessageWriter writer) { _writer = writer; } public void Exclaim() { _writer.Write("Hello World"); } }   If you had asked me a few years ago if I had thought this was a good approach to solving the HelloWorld problem I would have resounded “No”. How could the above be better than the following…. class Program { static void Main(string[] args) { Console.WriteLine("Hello World"); Console.ReadLine(); } }  Today, my mind-set has changed because of the pain of past programs. So often we can look at a small snippet of code and make judgements when we need to keep in mind that we will most probably be implementing these patterns in projects with hundreds of thousands of lines of code and in projects that we have tests that we don’t want to break and that’s where the first solution outshines the latter. Let’s see if the first example achieves some of the outcomes that were listed as benefits of DI. Could I test the first solution easily? Yes… We could write something like the following using NUnit and RhinoMocks… [TestFixture] public class SalutationTests { [Test] public void ExclaimWillWriteCorrectMessageToMessageWriter() { var writerMock = MockRepository.GenerateMock<IMessageWriter>(); var sut = new Salutation(writerMock); sut.Exclaim(); writerMock.AssertWasCalled(x => x.Write("Hello World")); } }   This would test the existing code fine. Let’s say we then wanted to extend the original solution so that we had a secure message writer. We could write a class like the following… public class SecureMessageWriter : IMessageWriter { private readonly IMessageWriter _writer; private readonly string _secretPassword; public SecureMessageWriter(IMessageWriter writer, string secretPassword) { _writer = writer; _secretPassword = secretPassword; } public void Write(string message) { if (_secretPassword == "Mark") { _writer.Write(message); } else { _writer.Write("Unauthenticated"); } } }   And then extend our implementation of the program as follows… class Program { static void Main(string[] args) { var writer = new SecureMessageWriter(new ConsoleMessageWriter(), "Mark"); var salutation = new Salutation(writer); salutation.Exclaim(); Console.ReadLine(); } }   Our application has now been successfully extended and yet we did very little code change. In addition, our existing tests did not break and we would just need add tests for the extended functionality. Would this approach allow parallel development? 
Well, I am in two camps on parallel development but with some planning ahead of time it would allow for it as you would simply need to decide on the interface signature and could then have teams develop different sections programming to that interface. So,this was really just a quick intro to some of the basic concepts of DI that Mark introduces very successfully in his book. I am hoping to blog about this further as I continue through the book to list some of the more complex implementations of containers.

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to setup a local development environment for my perl applications for a few days now with no luck. I'm getting no where fast so I hope someone else who has done this successfully could help. I started by installing MAMP which I thought would take care of everything for me but unfortunately it doesn't take care of some important perl modules. I used CPAN to install all our required modules except that it seems DBD::mysql doesn't install correctly through CPAN. After reading a lot online, lots of people reported problems with this and recommended using MacPorts to install the module which I have tried doing with no luck using the following command: sudo port install p5-dbd-mysql After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts: [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed, [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge. I'm not sure where to go from here but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system setup successfully. Thanks in advance!
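
    A hedged sketch of building DBD::mysql from source against MAMP's bundled MySQL rather than via MacPorts. The MAMP paths and test credentials below are typical defaults and worth verifying, and the client library may also need to be on the loader path at runtime.

        # inside an unpacked DBD-mysql source directory
        perl Makefile.PL \
            --mysql_config=/Applications/MAMP/Library/bin/mysql_config \
            --testuser=root --testpassword=root
        make && make test
        sudo make install

        # if the driver builds but can't find libmysqlclient at runtime
        export DYLD_LIBRARY_PATH=/Applications/MAMP/Library/lib:$DYLD_LIBRARY_PATH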

    Read the article

  • Codeigniter benchmarking, where are these ms coming from?

    - by ropstah
    I'm in the process of benchmarking my website. class Home extends Controller { function Home() { parent::Controller(); $this->benchmark->mark('Constructor_start'); $this->output->enable_profiler(TRUE); $this->load->library ('MasterPage'); $this->benchmark->mark('Constructor_end'); } function index() { $this->benchmark->mark('Index_start'); $this->masterpage->setMasterPage('master/home'); $this->masterpage->addContent('home/index', 'page'); $this->masterpage->show(); $this->benchmark->mark('Index_end'); } } These are the results: Loading Time Base Classes: 0.0076 Constructor: 0.0007 Index: 0.0440 Controller Execution Time ( Home/ Index ): 0.4467 Total Execution Time: 0.4545 I understand the following: Loading Time Base Classes (0.0076), Constructor (0.0007), Index (0.0440). But where is the rest of the time coming from?
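    For illustration, one way to see where the remaining ~0.4 s goes is to give each call in index() its own mark pair, since the profiler reports any pair of marks whose names end in _start and _end. This is only a sketch, reusing the same Benchmark calls as above:

    function index() {
        $this->benchmark->mark('setmaster_start');
        $this->masterpage->setMasterPage('master/home');
        $this->benchmark->mark('setmaster_end');

        $this->benchmark->mark('addcontent_start');
        $this->masterpage->addContent('home/index', 'page');
        $this->benchmark->mark('addcontent_end');

        $this->benchmark->mark('show_start');
        $this->masterpage->show();   // view loading and rendering happen in here
        $this->benchmark->mark('show_end');
    }

    Controller Execution Time is measured by CodeIgniter itself, from just before the controller is constructed until index() returns, so anything the hand-placed marks do not cover still ends up in that figure.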

    Read the article

  • How to add annotations to a map in Xcode

    - by somu
    Hi all, I am trying to mark some places on a map in my Xcode iPhone app, but at the moment I can only mark one place at a time. I want to mark many places at once, each with its own title. Could anyone help me? It's urgent, thanks.
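    For illustration, the usual MapKit pattern is a small class that adopts the MKAnnotation protocol plus a loop that adds one annotation per place. This is only a sketch; PlaceAnnotation, addPins and the place data are made-up names, and MapKit/CoreLocation must be imported and linked:

    // A minimal annotation class; requires #import <MapKit/MapKit.h>.
    @interface PlaceAnnotation : NSObject <MKAnnotation>
    {
        CLLocationCoordinate2D coordinate;
        NSString *title;
    }
    @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
    @property (nonatomic, copy) NSString *title;
    @end

    @implementation PlaceAnnotation
    @synthesize coordinate, title;
    - (void)dealloc { [title release]; [super dealloc]; }
    @end

    // In the view controller that owns the map view, assuming mapView is an MKMapView outlet:
    - (void)addPins
    {
        double lats[] = { 17.3850, 12.9716, 13.0827 };
        double lons[] = { 78.4867, 77.5946, 80.2707 };
        NSArray *names = [NSArray arrayWithObjects:@"Hyderabad", @"Bangalore", @"Chennai", nil];

        for (int i = 0; i < 3; i++) {
            PlaceAnnotation *annotation = [[PlaceAnnotation alloc] init];
            CLLocationCoordinate2D coord;
            coord.latitude = lats[i];
            coord.longitude = lons[i];
            annotation.coordinate = coord;
            annotation.title = [names objectAtIndex:i];
            [mapView addAnnotation:annotation];   // the map keeps its own reference
            [annotation release];
        }
    }

    Each pin keeps its own title, so tapping one shows the matching callout.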

    Read the article

  • C# Printing Properties

    - by Mark
    I have a class like this with a bunch of properties: class ClassName { public string Name {get; set;} public int Age {get; set;} public DateTime BirthDate {get; set;} } I would like to print each property's name and its value (using the value's ToString() method) like this: ClassName cn = new ClassName() {Name = "Mark", Age = 428, BirthDate = DateTime.Now}; cn.MethodToPrint(); // Output // Name = Mark, Age = 428, BirthDate = 12/30/2010 09:20:23 PM Reflection is perfectly okay; in fact I think it is probably required. It'd also be neat if it could somehow work on any class through some sort of inheritance. I'm using .NET 4.0 if that matters.
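    For illustration, one reflection-based sketch that also covers the inheritance wish: put the method on a small base class and let any class derive from it. MethodToPrint is just the name used in the question, and this ignores indexers and null values:

    using System;
    using System.Linq;

    public abstract class PrintableObject
    {
        // Walks the public properties of the runtime type and prints "Name = Value" pairs.
        public void MethodToPrint()
        {
            var parts = GetType()
                .GetProperties()
                .Select(p => p.Name + " = " + p.GetValue(this, null));
            Console.WriteLine(string.Join(", ", parts));
        }
    }

    class ClassName : PrintableObject
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public DateTime BirthDate { get; set; }
    }

    With that in place, cn.MethodToPrint() would print something close to the desired output; the exact date format comes from DateTime.ToString() under the current culture.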

    Read the article

  • LevelToVisibilityConverter in Silverlight 4

    - by prince23
    <UserControl x:Class="SLGridImage.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400" xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk"> <UserControl.Resources> <local:LevelToVisibilityConverter x:Key="LevelToVisibility" /> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="White"> <sdk:DataGrid x:Name="dgMarks" CanUserResizeColumns="False" SelectionMode="Single" AutoGenerateColumns="False" VerticalAlignment="Top" ItemsSource="{Binding MarkCollection}" IsReadOnly="True" Margin="13,44,0,0" RowDetailsVisibilityMode="Collapsed" Height="391" HorizontalAlignment="Left" Width="965" VerticalScrollBarVisibility="Visible" > <sdk:DataGrid.Columns> <sdk:DataGridTemplateColumn> <sdk:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Button x:Name="myButton" Click="myButton_Click"> <StackPanel Orientation="Horizontal"> <Image Margin="2, 2, 2, 2" x:Name="imgMarks" Stretch="Fill" Width="12" Height="12" Source="Images/test.png" VerticalAlignment="Center" HorizontalAlignment="Center" Visibility="{Binding Level, Converter={StaticResource LevelToVisibility}}" /> <TextBlock Text="{Binding Level}" TextWrapping="NoWrap" ></TextBlock> </StackPanel> </Button> </DataTemplate> </sdk:DataGridTemplateColumn.CellTemplate> </sdk:DataGridTemplateColumn> <sdk:DataGridTemplateColumn Header="Name" > <sdk:DataGridTemplateColumn.CellTemplate> <DataTemplate > <Border> <TextBlock Text="{Binding Name}" /> </Border> </DataTemplate> </sdk:DataGridTemplateColumn.CellTemplate> </sdk:DataGridTemplateColumn> <sdk:DataGridTemplateColumn Header="Marks" Width="80"> <sdk:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Border> <TextBlock Text="{Binding Marks}" /> </Border> </DataTemplate> </sdk:DataGridTemplateColumn.CellTemplate> </sdk:DataGridTemplateColumn> </sdk:DataGrid.Columns> </sdk:DataGrid> </Grid> </UserControl> in .cs using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Collections.ObjectModel; using System.ComponentModel; namespace SLGridImage { public partial class MainPage : UserControl { private MarksViewModel model = new MarksViewModel(); public MainPage() { InitializeComponent(); this.DataContext = model; } private void myButton_Click(object sender, RoutedEventArgs e) { } } public class MarksViewModel : INotifyPropertyChanged { public MarksViewModel() { markCollection.Add(new Mark() { Name = "ABC", Marks = 23, Level = 0 }); markCollection.Add(new Mark() { Name = "XYZ", Marks = 67, Level = 1 }); markCollection.Add(new Mark() { Name = "YU", Marks = 56, Level = 0 }); markCollection.Add(new Mark() { Name = "AAA", Marks = 89, Level = 1 }); } private ObservableCollection<Mark> markCollection = new ObservableCollection<Mark>(); public ObservableCollection<Mark> MarkCollection { get { return this.markCollection; } set { this.markCollection = value; OnPropertyChanged("MarkCollection"); } } public event PropertyChangedEventHandler PropertyChanged; public void OnPropertyChanged(string propName) { if (PropertyChanged != null) this.PropertyChanged(this, new 
PropertyChangedEventArgs(propName)); } } public class Mark { public string Name { get; set; } public int Marks { get; set; } public int Level { get; set; } } public class LevelToVisibilityConverter : System.Windows.Data.IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { Visibility isVisible = Visibility.Collapsed; if ((value == null)) return isVisible; int condition = (int)value; isVisible = condition == 1 ? Visibility.Visible : Visibility.Collapsed; return isVisible; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } } When I run this I get the error: "The type 'local:LevelToVisibilityConverter' was not found. Verify that you are not missing an assembly reference and that all referenced assemblies have been built." What am I missing here? Looking forward to a solution. Thank you.
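    For what it's worth, that particular error often points at the XAML prefix rather than the class itself: the markup uses local:LevelToVisibilityConverter, but the UserControl declaration shown has no xmlns:local mapping. A sketch of the missing attribute, assuming the converter really does live in the SLGridImage namespace of the same assembly:

    <UserControl x:Class="SLGridImage.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:SLGridImage"
        ... >

    After adding the mapping, the project needs a rebuild so the XAML parser can find the compiled type.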

    Read the article

  • JavaScript function binding (this keyword) is lost after assignment

    - by Ding
    This is one of the most mysterious features in JavaScript: after assigning an object method to another variable, the binding (the this keyword) is lost. var john = { name: 'John', greet: function(person) { alert("Hi " + person + ", my name is " + this.name); } }; john.greet("Mark"); // Hi Mark, my name is John var fx = john.greet; fx("Mark"); // Hi Mark, my name is My questions are: 1) What is happening behind the assignment var fx = john.greet;? Is this a copy by value or a copy by reference? fx and john.greet point to two different functions, right? 2) Since fx is called as a global function, the scope chain contains only the global object. What is the value of the this property in the Variable object?
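    For illustration, a short sketch of what the assignment does and how the binding can be supplied again; this is plain ECMAScript behaviour, nothing specific to the snippet above:

    // The assignment copies a reference to the same function object;
    // no new function is created and no "owner" travels with it.
    var fx = john.greet;
    alert(fx === john.greet);   // true: one function, two references

    // `this` is decided at call time by how the function is invoked,
    // so it can be supplied explicitly:
    fx.call(john, "Mark");      // Hi Mark, my name is John

    // or fixed with a wrapper (Function.prototype.bind where available,
    // or a hand-rolled closure in older engines):
    var bound = function (person) { return john.greet(person); };
    bound("Mark");              // Hi Mark, my name is John

    Called bare as fx("Mark"), the function falls back to the global object, which is why this.name resolves against window and the greeting trails off.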

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns: Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.005 Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225. MySQL Version: [root@bugzilla-core TMP]# mysql --version mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1 And mysql_config: [root@bugzilla-core TMP]# mysql_config Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS] Options: --cflags [-I/data01/mysql-5.0.60/include -g] --include [-I/data01/mysql-5.0.60/include] --libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc] --libs_r [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc] --socket [/tmp/mysql.sock] --port [0] --version [5.0.60sp1] --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc] Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped, and I'm not sure where to go from here. Scouring the web hasn't turned up anything fruitful. Any ideas?
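    For what it's worth, a couple of quick checks can show which DBI/DBD::mysql pair the failing perl is actually loading; the "Undefined subroutine ..._login" symptom is often a sign that the driver was built against a different DBI or MySQL client library than the one in use. A sketch only:

    # List every DBI driver the running perl can see, with versions and install paths
    perl -MDBI -e 'DBI->installed_versions'

    # Confirm exactly which mysql.pm is being loaded, and its version
    perl -MDBD::mysql -e 'print $INC{"DBD/mysql.pm"}, "  ", $DBD::mysql::VERSION, "\n"'

    If those paths point at a leftover build rather than the freshly installed 4.0.14, rebuilding DBD::mysql against the mysql_config shown above (the one under /data01/mysql-5.0.60) would be a reasonable next step.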

    Read the article

  • Inserting newlines into a GtkTextView widget (GTK+ programming)

    - by Mark Roberts
    I've got a button which when clicked copies and appends the text from a GtkEntry widget into a GtkTextView widget. (This code is a modified version of an example found in the "The Text View Widget" chapter of Foundations of GTK+ Development.) I'm looking to insert a newline character before the text which gets copied and appended, so that each piece of text ends up on its own line in the GtkTextView widget. How would I do this? I'm brand new to GTK+. Here's the code sample: #include <gtk/gtk.h> typedef struct { GtkWidget *entry, *textview; } Widgets; static void insert_text (GtkButton*, Widgets*); int main (int argc, char *argv[]) { GtkWidget *window, *scrolled_win, *hbox, *vbox, *insert; Widgets *w = g_slice_new (Widgets); gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_window_set_title (GTK_WINDOW (window), "Text Iterators"); gtk_container_set_border_width (GTK_CONTAINER (window), 10); gtk_widget_set_size_request (window, -1, 200); w->textview = gtk_text_view_new (); w->entry = gtk_entry_new (); insert = gtk_button_new_with_label ("Insert Text"); g_signal_connect (G_OBJECT (insert), "clicked", G_CALLBACK (insert_text), (gpointer) w); scrolled_win = gtk_scrolled_window_new (NULL, NULL); gtk_container_add (GTK_CONTAINER (scrolled_win), w->textview); hbox = gtk_hbox_new (FALSE, 5); gtk_box_pack_start_defaults (GTK_BOX (hbox), w->entry); gtk_box_pack_start_defaults (GTK_BOX (hbox), insert); vbox = gtk_vbox_new (FALSE, 5); gtk_box_pack_start (GTK_BOX (vbox), scrolled_win, TRUE, TRUE, 0); gtk_box_pack_start (GTK_BOX (vbox), hbox, FALSE, TRUE, 0); gtk_container_add (GTK_CONTAINER (window), vbox); gtk_widget_show_all (window); gtk_main(); return 0; } /* Insert the text from the GtkEntry into the GtkTextView. */ static void insert_text (GtkButton *button, Widgets *w) { GtkTextBuffer *buffer; GtkTextMark *mark; GtkTextIter iter; const gchar *text; buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview)); text = gtk_entry_get_text (GTK_ENTRY (w->entry)); mark = gtk_text_buffer_get_insert (buffer); gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark); gtk_text_buffer_insert (buffer, &iter, text, -1); } You can compile it with this command (assuming the file is named file.c): gcc file.c -o file `pkg-config --cflags --libs gtk+-2.0` Thanks everybody!
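    For illustration, one way the insert_text() callback could prepend the newline; a sketch only, with a check so the very first entry does not start on a blank line:

    /* Insert the text from the GtkEntry into the GtkTextView, one entry per line. */
    static void insert_text (GtkButton *button, Widgets *w)
    {
      GtkTextBuffer *buffer;
      GtkTextMark *mark;
      GtkTextIter iter;
      const gchar *text;

      buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview));
      text = gtk_entry_get_text (GTK_ENTRY (w->entry));

      mark = gtk_text_buffer_get_insert (buffer);
      gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark);

      /* Prepend a newline unless the buffer is still empty. */
      if (gtk_text_buffer_get_char_count (buffer) > 0)
        gtk_text_buffer_insert (buffer, &iter, "\n", -1);

      /* The iterator is revalidated by the insert above, so this continues
       * right after the newline. */
      gtk_text_buffer_insert (buffer, &iter, text, -1);
    }

    An alternative with the same effect is to build the string up front with g_strdup_printf("\n%s", text) and g_free it after inserting.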

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >