Search Results

Search found 7213 results on 289 pages for 'multi processor'.

Page 75 of 289

  • Architecture - 32-bit handling 64-bit instructions

    - by tkoomzaaskz
    tomasz@tomasz-lenovo-ideapad-Y530:~$ lscpu
    Architecture:          i686
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                2
    On-line CPU(s) list:   0,1
    Thread(s) per core:    1
    Core(s) per socket:    2
    Socket(s):             1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 23
    Stepping:              6
    CPU MHz:               2000.000
    BogoMIPS:              4000.12
    Cache L1d:             32K
    Cache L1i:             32K
    Cache L2:              3072K
    I can see that my architecture is 32-bit (i686). But CPU op-mode(s) are 32-bit and 64-bit. The question is: how come? How is it handled that a 32-bit processor performs 64-bit operations? I guess it's a lot slower than native 32-bit operations. Is it built-in processor functionality (to emulate being 64-bit) or is it software dependent? When does it make sense for a 32-bit processor to run 64-bit operations?
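    A quick way to see what is going on here (a hedged aside, not part of the original question): the "lm" (long mode) flag in /proc/cpuinfo indicates that the chip itself is 64-bit capable, even when a 32-bit kernel reports the architecture as i686:

        # does the CPU advertise 64-bit (long mode) support?
        grep -o -w 'lm' /proc/cpuinfo | sort -u

        # what architecture is the running kernel/userland?
        uname -m

    If "lm" is present but uname -m prints i686, the machine has a 64-bit processor running a 32-bit operating system, which is consistent with the lscpu output above.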

    Read the article

  • Ubuntu 14.04 slow

    - by TURN A
    So I upgraded from Ubuntu 12.04 to 14.04 with a USB stick, and I have internet, but my computer is really slow at 1024x768 resolution. Everything works super slowly: windows closing and opening, streaming videos, everything I've used so far. But it works fine at 800x600 resolution, and I want it to be fine at the higher resolution. How do I make it run well at 1024x768? Nothing shows up in Additional Drivers, and my computer mirrors displays by default for some reason. I tried stopping it from mirroring, but most buttons don't respond and weird glitches happen; the system doesn't work well when not mirroring. I don't care whether it mirrors or not, I just want good performance. Thank you in advance for any answers! Here are the computer specs:
    Processor: 1.8 GHz 8032
    RAM: 2 GB DDR3
    Memory Speed: 1066 MHz
    Hard Drive: 32 GB
    Graphics Coprocessor: Graphics Media Accelerator HD
    Wireless Type: 802.11b, 802.11g, 802.11n
    Number of USB 2.0 Ports: 4
    Brand Name: Asus
    Item model number: EB1030-B003L
    Hardware Platform: Linux
    Operating System: Ubuntu
    Item Weight: 1.5 pounds
    Item Dimensions L x W x H: 1.14 x 6.70 x 8.60 inches
    Color: Black
    Processor Brand: Intel
    Processor Count: 1
    Computer Memory Type: DDR3 SDRAM
    Flash Memory Size: 32
    Hard Drive Interface: Solid State
    Optical Drive Type: No
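    Since nothing shows up under Additional Drivers, a first diagnostic step (a hedged sketch, not a guaranteed fix) is to confirm which graphics hardware and X driver are actually in use:

        # identify the GPU (the specs above suggest an Intel GMA part)
        lspci | grep -i vga

        # see which driver X loaded and whether it logged errors
        grep -e 'Loading.*driver' -e '(EE)' /var/log/Xorg.0.log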

    Read the article

  • Why can't Perl's DBD::DB2 find dbivport.h during installation?

    - by Liju Mathew
    We are using a Perl utility to dump data from a DB2 database. We installed the DBI package, and it is asking for the DBD package as well. We don't have root access, and when we try to install the DBD package we get the following error:
    ERROR BUILDING DB2.pm
    [lijumathew@intblade03 DBD-DB2-1.78]$ make
    make[1]: Entering directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
    gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c
    Running Mkbootstrap for DBD::DB2::Constants ()
    chmod 644 Constants.bs
    rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so
    gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so
    chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so
    cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
    chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
    make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
    gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c
    In file included from DB2.h:22, from DB2.xs:7:
    dbdimp.h:10:22: dbivport.h: No such file or directory
    make: *** [DB2.o] Error 1
    How do we fix this? Do we need root access to resolve this?
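    For what it's worth, dbivport.h ships with the DBI distribution rather than with DBD-DB2, so one common no-root workaround (a hedged sketch; the exact install path below is an assumption based on the -I flags in the build log) is to locate the header in the installed DBI tree and copy it into the DBD-DB2 source directory before running make again:

        # find where DBI installed its driver-development headers
        find /usr/lib/perl5 -name dbivport.h 2>/dev/null

        # copy it next to dbdimp.h in the build tree (hypothetical path)
        cp /usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI/dbivport.h \
           ~/lperl/perlsrc/DBD-DB2-1.78/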

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like NOT to install anything else (no Fink or DarwinPorts installs), just whatever is needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits; this is the first error reported):
    Checking if your kit is complete...
    Looks good
    Unrecognized argument in LIBS ignored: '-pipe'
    Note (probably harmless): No library found for -lmysqlclient
    Multiple copies of Driver.xst found in:
    /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
    /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907
    Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
    Writing Makefile for DBD::mysql
    cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
    cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
    cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
    cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
    gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
    In file included from dbdimp.c:20:
    dbdimp.h:22:49: error: mysql.h: No such file or directory
    dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
    dbdimp.h:25:49: error: errmsg.h: No such file or directory
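    One hedged avenue, assuming the bundled MySQL exposes a mysql_config binary (it may not ship the headers at all, in which case this will still fail with the same errors):

        # see whether the bundled MySQL can report its build flags
        which mysql_config && mysql_config --include --libs

        # if it can, point the DBD::mysql build at it explicitly
        perl Makefile.PL --mysql_config=$(which mysql_config)
        make && make test

    If mysql_config --include points at a directory with no mysql.h in it, the headers genuinely are not on the box and would have to come from a matching MySQL development package or source tarball.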

    Read the article

  • jquery fade an element in when a link is clicked and then swap the element when another link is clicked

    - by Nik
    I have worked out how to fade an element in (click here to view the page). If you click on the heading "Posture 1 : Standing Deep Breathing" you will notice the element fades in as it should. If you now click on Posture 2, you will see the element fades in below Posture 1. I need to be able to swap Posture 1 with Posture 2. I have a total of 26 postures, all with images that need to fade in and then be swapped with another image when another heading is clicked.
    $(document).ready(function(){
        $('#section_Q_01, #section_Q_02').hide();
        $('h5.trigger#Q_01').click(function(){
            $('#section_Q_01').fadeIn(2000);
        });
        $('h5.trigger#Q_02').click(function(){
            $('#section_Q_02').fadeIn(5000);
        });
    });
    and the HTML:
    <div id="section_Q_01" class="01">
        <div class="pics">
            <img src="../images/multi/poses/pose1/Pranayama._01.jpg"/>
            <img src="../images/multi/poses/pose1/Pranayama._02.jpg"/>
            <img src="../images/multi/poses/pose1/Pranayama._03.jpg"/>
        </div>
    </div>
    <div id="section_Q_02" class="02">
        <div class="pics">
            <img src="../images/multi/poses/pose2/Half_Moon_Pose_04.jpg" />
            <img src="../images/multi/poses/pose2/Backward_Bending_05.jpg" />
            <img src="../images/multi/poses/pose2/Hands_to_Feet_Pose_06.jpg" />
        </div>
    </div>
    I need to be able to swap a total of 26 elements, #section_Q_01 through #section_Q_26. Any help appreciated.

    Read the article

  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, a European WCM vendor, today announced the next version of its WCM solution, AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reduce the ongoing costs of managing and distributing content to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net
    Axinom's WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage the delivery of rich media assets on the Web and mobile devices. Axinom solutions are widely used by major European online brands in the IT, telco, retail, media and entertainment industries, such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post.
    Brand New User Interface Built with Silverlight 4
    By using Silverlight 4, Axinom's team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, an integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom's team worked with the Silverlight Rough Cut Editor for video management and the Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. "Axinom's release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4," said Brian Goldfarb, director of the developer platform group at Microsoft Corp. "Microsoft is excited about the opportunity this creates for Web developers to streamline the creation, management and distribution of online corporate content using AxCMS.net 10 and Silverlight."
    Rapid Web Development with Visual Studio 2010
    AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that helps kick-start projects and teaches best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles in an out-of-the-box manner by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence.
    Extended Multi-Site Management
    AxCMS.net has long supported the management of an unlimited number of Web sites. The new version 10 of AxCMS.net further improves multi-site management and provides features to editors and developers that simplify and accelerate multi-site and multi-language management. The extended publication workflow takes into account additional dependencies of dynamic objects, pages, and documents. "Customer requests have evolved from static HTML pages to dynamic Web application content, with rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the .NET Framework 4 and Silverlight 4, we're on the fast track to making the three-screen strategy a reality for our customers," said Damir Tomicic, CEO of Axinom Group. "Our customers enjoy substantial competitive advantages from using the latest Microsoft technologies. We have a long-standing relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future."

    Read the article

  • How do I install Citrix receiver?

    - by krondor
    Has anyone managed to get the Citrix Receiver client working in 64-bit Natty (11.04)? It seems libmotif4 won't install multi-arch (32-bit and 64-bit libraries side by side). I also see crazy dependency errors despite the libraries being present. Here is what I received initially when trying to install icaclient.deb from Citrix:
    sudo dpkg -i Downloads/icaclient_11.100_i386.patched.deb
    dpkg: error processing Downloads/icaclient_11.100_i386.patched.deb (--install):
    package architecture (i386) does not match system (amd64)
    Errors were encountered while processing:
    Downloads/icaclient_11.100_i386.patched.deb
    I then installed the 32-bit libraries:
    sudo apt-get install ia32-libs ia32-libs-gtk
    Then I noticed libmotif4 (a dependency of the Citrix client) wasn't present, so I installed the 64-bit version:
    sudo apt-get install libmotif4
    I then tried to force the 32-bit version:
    sudo dpkg --force-architecture -i Downloads/libmotif4_2.3.3-5_i386.deb
    dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64)
    Selecting previously deselected package libmotif4:i386.
    dpkg: error processing Downloads/libmotif4_2.3.3-5_i386.deb (--install):
    libmotif4:i386 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:amd64 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed
    So I uninstalled the 64-bit version and installed the 32-bit version instead. This worked, but when I attempt to install Citrix I enter dependency hell:
    sudo dpkg --force-architecture -i Downloads/icaclient_11.100_i386.patched.deb
    dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64)
    Selecting previously deselected package icaclient:i386.
    (Reading database ... 183036 files and directories currently installed.)
    Unpacking icaclient:i386 (from .../icaclient_11.100_i386.patched.deb) ...
    dpkg: dependency problems prevent configuration of icaclient:i386:
    icaclient:i386 depends on libc6 (>= 2.3).
    icaclient:i386 depends on libice6 (>= 1:1.0.0).
    icaclient:i386 depends on libsm6.
    icaclient:i386 depends on libx11-6.
    icaclient:i386 depends on libxaw7.
    icaclient:i386 depends on libxext6.
    icaclient:i386 depends on libxmu6.
    icaclient:i386 depends on libxp6.
    icaclient:i386 depends on libxpm4.
    icaclient:i386 depends on libxt6.
    dpkg: error processing icaclient:i386 (--install):
    dependency problems - leaving unconfigured
    Errors were encountered while processing:
    icaclient:i386
    So in the state it's in, there are no icaclient binaries installed, only the docs. It complains about libraries that are indeed present (libc6, both 64-bit and 32-bit). libmotif4 is installed only as 32-bit and won't install alongside the 64-bit libmotif4, which errors when you try to install it alongside the 32-bit instance:
    libmotif4:amd64 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:i386 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed
    Any tips?
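    A workaround people have used for the Citrix package on 64-bit releases that predate full multiarch (a hedged sketch, offered as-is) is to unpack the i386 .deb, rewrite its Architecture field, and repack it so dpkg stops rejecting it outright; the 32-bit libraries from ia32-libs then satisfy it at run time:

        # unpack payload and control files
        dpkg-deb -x icaclient_11.100_i386.patched.deb ica
        dpkg-deb -e icaclient_11.100_i386.patched.deb ica/DEBIAN

        # mark the package amd64 purely to appease dpkg; the binaries stay 32-bit
        sed -i 's/^Architecture: i386/Architecture: amd64/' ica/DEBIAN/control

        # rebuild and install
        dpkg-deb -b ica icaclient_11.100_amd64.deb
        sudo dpkg -i icaclient_11.100_amd64.deb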

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:
    1. an async approach, basically based on signals (or simply an "async approach", as many papers and languages call it, like the new C# 5.0), with a "companion thread" that manages the policy of your pipeline
    2. a concurrent, or multi-threading, approach
    I will just say that I'm thinking about the hardware here, and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't understand why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.
    I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive, slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result without hanging, it stays responsive, and it scales much better.
    I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios. It's in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to be computed.
    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me.
    From what I have experienced, the concurrent approach is good in theory but not that good in practice. With the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when a task or job will be done at a certain point in time, which makes them really similar from a functional point of view.
    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz)
    Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn't designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore 'return data', but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function:
    SELECT GETDATE(), SomeDatetimeColumn
    FROM dbo.SomeTable;
    There's no logical difference between the field that is being returned by the function and the field that's being returned by the table column. Both are datetime fields – if you didn't have inside knowledge, you wouldn't necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on.
    But it's rubbish. Ok, it's not all rubbish, but it mostly is. And this isn't even considering the SARGability impact. It's far more significant than that. (When I say the SARGability aspect, I mean "because you're unlikely to have an index on the result of some function that's applied to a column, so try to invert the function and query the column in an unchanged manner".)
    I'm going to consider the three main types of user-defined functions in SQL Server:
    Scalar
    Inline Table-Valued
    Multi-statement Table-Valued
    I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don't tend to get around to doing CLR functions, and I'm going to focus on the T-SQL-based user-defined functions.
    Most people split these types of function up into two types. So do I. Except that most people pick them based on 'scalar or table-valued'. I'd rather go with 'inline or not'. If it's not inline, it's rubbish. It really is.
    Let's start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
    CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
    RETURNS TABLE
    AS
    RETURN (
        SELECT e.LoginID AS EmployeeLogin, o.OrderDate, o.SalesOrderID
        FROM Sales.SalesOrderHeader AS o
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
        WHERE o.SalesPersonID = @salespersonid
        AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
        AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
    );
    GO

    CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
    RETURNS @results TABLE (
        EmployeeLogin nvarchar(512),
        OrderDate datetime,
        SalesOrderID int
    )
    AS
    BEGIN
        INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
        SELECT e.LoginID, o.OrderDate, o.SalesOrderID
        FROM Sales.SalesOrderHeader AS o
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
        WHERE o.SalesPersonID = @salespersonid
        AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
        AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101');
        RETURN
    END;
    GO
    You'll notice that I'm being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter.
    Regular readers will be hoping I'll show what's going on in the execution plans here. Here I've run two SELECT * queries with the "Show Actual Execution Plan" option turned on. Notice that the 'Query cost' of the multi-statement version is just 2% of the 'Batch cost'. But also notice there's trickery going on. And it's nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what's going on inside the function, but the second one isn't. The second one is blindly running the function, and then scanning the results. There's a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing...
    To see what's actually going on, let's look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here's where we see the inner workings of it. You'll probably recognise the right-hand side of the TVF's plan as looking very similar to the first plan – but it's now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch!
    But it gets worse. Let's consider what happens if we don't need all the columns. We'll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn't need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed.
    A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It's essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn't really a function at all. It's a stored procedure. It's wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, yet this is part of the pain associated with this procedural function situation.
    The biggest clue that a multi-statement function is more like a stored procedure than a function is the "BEGIN" and "END" statements that surround the code. If you try to create a multi-statement function without these statements, you'll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it "The Dangers of BEGIN and END", and yes, I've written about this type of thing before in a similarly-named post over at my old blog.
    Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.
    CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
    RETURNS int
    AS
    BEGIN
        RETURN (
            SELECT COUNT(*)
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
    END;
    GO
    Notice the evil words? They're required. Try to remove them, you just get an error. That's right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It's as ugly as anything. Hopefully this will change in future versions.
    Let's have a look at how this is reflected in an execution plan. Here's a query, its Actual plan, and its Estimated plan:
    SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
    FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
    CROSS JOIN Sales.SalesPerson AS p
    LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID;
    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn't need the Employee table, but that's a bit of a red herring here. There's actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.337999 is the cost of running the scalar function ONCE.
    But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could've been a lot more, if we'd had more salespeople or more years. And so we come to the biggest problem. This procedure (I don't want to call it a function) is getting called 68 times – each one about twice as expensive as the outer query. And because it's being called in a separate context, there is even more overhead that I haven't considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could've used that kind of costing method, but that's another story that I'm not going to go into here.
    Let's look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. It could've run something like this:
    SELECT e.LoginID, y.year,
        (SELECT COUNT(*)
         FROM Sales.SalesOrderHeader AS o
         LEFT JOIN HumanResources.Employee AS e
             ON e.EmployeeID = o.SalesPersonID
         WHERE o.SalesPersonID = p.SalesPersonID
         AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')
         AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')
        ) AS NumSales
    FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
    CROSS JOIN Sales.SalesPerson AS p
    LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID;
    Don't worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don't push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what's being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly.
    To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that's ok.
    CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
    RETURNS table
    AS
    RETURN (
        SELECT COUNT(*) AS NumSales
        FROM Sales.SalesOrderHeader AS o
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = o.SalesPersonID
        WHERE o.SalesPersonID = @salespersonid
        AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
        AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
    );
    GO
    But we can't use this as a scalar. Instead, we need to use it with the APPLY operator.
    SELECT e.LoginID, y.year, n.NumSales
    FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
    CROSS JOIN Sales.SalesPerson AS p
    LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = p.SalesPersonID
    OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;
    And now, we get the plan that we want for this query. All we've done is tell the function that it's returning a table instead of a single value, and removed the BEGIN and END statements. We've had to name the column being returned, but what we've gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it's a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions).
    So please. If you see BEGIN and END in a function, remember it's not really a function, it's a procedure. And then fix it.
    @rob_farley

    Read the article

  • Realtek RTL8111/8168B wired network doesn't work anymore

    - by Radar4002
    This sounds like it's a common problem upgrading 11.04, but I am having trouble finding a common solution, and one that will work for me. I just applied updates via the update manager and now my wired network connection is down. I know the Ubuntu network settings are the issue, because I have a dual-boot with Win 7 and my network/internet is fine on Win 7. I don't know too much about networking, so what can I do to troubleshoot this issue? I can choose an older GRUB entry, 2.6.38-8 instead of 2.6.38-11, and this does not resolve the issue. Here is my lspci result:
    00:00.0 Host bridge: ATI Technologies Inc RD890 Northbridge only single slot PCI-e GFX Hydra part (rev 02)
    00:02.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port B)
    00:04.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port D)
    00:05.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port E)
    00:06.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port F)
    00:07.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port G)
    00:09.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port H)
    00:0a.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (external gfx1 port A)
    00:11.0 SATA controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] (rev 40)
    00:12.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:12.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:13.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:13.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 41)
    00:14.1 IDE interface: ATI Technologies Inc SB7x0/SB8x0/SB9x0 IDE Controller (rev 40)
    00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) (rev 40)
    00:14.3 ISA bridge: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
    00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40)
    00:14.5 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
    00:15.0 PCI bridge: ATI Technologies Inc Device 43a0
    00:16.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:16.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
    00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
    01:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
    01:00.1 Audio device: ATI Technologies Inc Juniper HDMI Audio [Radeon HD 5700 Series]
    02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
    05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    07:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    07:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    08:0e.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
    09:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 02)
    09:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 02)
    Here is my sudo lshw -class network:
    *-network
        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:05:00.0
        logical name: eth0
        version: 03
        serial: 6c:f0:49:e7:72:e8
        size: 10Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
        resources: irq:40 ioport:9e00(size=256) memory:fceff000-fcefffff memory:fcef8000-fcefbfff memory:fce00000-fce1ffff
    *-network
        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:06:00.0
        logical name: eth1
        version: 03
        serial: 6c:f0:49:e7:72:ea
        size: 10Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
        resources: irq:47 ioport:8e00(size=256) memory:fddff000-fddfffff memory:fddf8000-fddfbfff memory:fdd00000-fdd1ffff
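    Worth noting from the lshw output above: both NICs report link=no and are driven by the in-kernel r8169 module. A couple of hedged diagnostic steps (not a guaranteed fix, and the r8168-dkms package name is an assumption about what the repositories carry for this release):

        # confirm link and driver state after the upgrade
        sudo ethtool eth0
        dmesg | grep -i -e r8169 -e eth0

        # the r8169 driver is known to misbehave with some RTL8168 chips;
        # Realtek's own r8168 driver is packaged separately on many releases
        sudo apt-get install r8168-dkms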

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate the impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users, with a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.
    Performance Landscape
    Systems:
    1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – Web
    3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) – App/Gateway
    1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – DB
    Throughput: 239,748 Txn/hr
    Users: 29,000
    Response times (sec): Call Center 0.165, Order Management 0.925
    Transactions (Call Center + Order Management): 197,128 + 42,620
    Users (Call Center + Order Management): 20,300 + 8,700
    Configuration Summary
    Web Server: 1 x SPARC T4-1 server, 1 x SPARC T4 processor (2.85 GHz), 128 GB memory, Oracle Solaris 10 8/11, iPlanet Web Server 7
    Application Servers: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors (2.85 GHz), 256 GB memory, 3 x 300 GB SAS internal disks, Oracle Solaris 10 8/11, Siebel CRM 8.1.1.5 SIA
    Database Server: 1 x SPARC T4-1 server, 1 x SPARC T4 processor (2.85 GHz), 128 GB memory, Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.2)
    Storage: 1 x Sun Storage F5100 Flash Array, 80 x 24 GB flash modules
    Benchmark Description
    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management.
    Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these three scenarios were 30%, 40%, and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively.
    Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation, and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these two scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively.
    Key Points and Best Practices
    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.
    See Also
    Siebel White Papers; SPARC T4-1 Server (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Siebel CRM (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Question about POP3 message termination octet

    - by user361633
    This is from the POP3 RFC:
    "Responses to certain commands are multi-line. In these cases, which are clearly indicated below, after sending the first line of the response and a CRLF, any additional lines are sent, each terminated by a CRLF pair. When all lines of the response have been sent, a final line is sent, consisting of a termination octet (decimal code 046, ".") and a CRLF pair. If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF". When examining a multi-line response, the client checks to see if the line begins with the termination octet. If so and if octets other than CRLF follow, the first octet of the line (the termination octet) is stripped away. If so and if CRLF immediately follows the termination character, then the response from the POP server is ended and the line containing ".CRLF" is not considered part of the multi-line response."
    Well, I have a problem with this. For example, Gmail sometimes sends the termination octet and then sends the CRLF pair in the NEXT line. For example:
    "+OK blah blah"
    "blah blah."
    "\r\n"
    That's very rare, but it happens sometimes, so obviously I'm unable to determine the end of the message in such a case, because I'm expecting a line that consists of ".\r\n". Seriously, is Gmail violating the POP3 protocol, or am I doing something wrong?
    I also have a second question. English is not my first language, so I cannot understand this part completely:
    "If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF"."
    When exactly is CRLF.CRLF used? Can someone give me a simple example? The RFC says it is used when any line of the response begins with the termination octet, but I don't see any lines that start with '.' in the messages that are terminated with CRLF.CRLF; I checked that. Maybe I don't understand something, and that's why I'm asking.
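    For the second question, a small worked example may help (constructed for illustration; S: marks lines sent by the server). Suppose the stored message body contains a line that really begins with a period, ".hidden line". The server byte-stuffs it by prepending another period, and then ends the response with the lone termination octet:

        S: +OK message follows
        S: first line
        S: ..hidden line      (client strips one leading dot, recovering ".hidden line")
        S: last line
        S: .

    The CRLF that ends "last line", plus the "." and its own CRLF, form the five octets CRLF.CRLF. So body lines beginning with "." only ever appear on the wire in stuffed form (".."), and an unstuffed ".CRLF" directly after a CRLF can only mean end of response.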

    Read the article

  • How do I set the encoding statement in the XML declaration when performing an XSL transformation using MSXML?

    - by aspiehler
    I wrote a simple package installer in WinBatch that needs to update an XML file with information about the package contents. My first stab at it involved loading the file with Msxml2.DOMDocument, adding nodes and data as required, then saving the data back to disk. This worked well enough, except that it would not create tab and CR/LF whitespace in the new data. The solution I came up with was writing an XSL stylesheet that would recreate the XML file with whitespace added back in. I'm doing this by:
    1. loading the XSL file into an Msxml2.FreeThreadedDOMDocument object
    2. setting that object as the stylesheet property of an Msxml2.XSLTemplate object
    3. creating an XSL processor via Msxml2.XSLTemplate.createProcessor()
    4. setting my original Msxml2.DOMDocument as the input property of the XSL processor
    5. calling the transform() method of the XSL processor, and saving the output to a file
    This works as far as reformatting the XML file with tabs and carriage returns goes, but my XML declaration comes out either as <?xml version="1.0"?> or <?xml version="1.0" encoding="UTF-16"?>, depending on whether I used Msxml2.*.6.0 or Msxml2.* objects (a fallback if the system doesn't have 6.0). If the encoding is set to UTF-16, Msxml2.DOMDocument complains about trying to convert UTF-16 to 1-byte encoding the next time I run my package installer. I've tried creating and adding an XML declaration using createProcessingInstruction() on both the XML and XSL DOM objects, but neither one seems to affect the output of the XSLTemplate processor. I've also set encoding to UTF-8 in the <xsl:output/> tag in my XSL file. Here is the relevant code in my WinBatch script:
    xmlDoc = ObjectCreate("Msxml2.DOMDocument.6.0")
    if !xmlDoc then xmlDoc = ObjectCreate("Msxml2.DOMDocument")
    xmlDoc.async = @FALSE
    xmlDoc.validateOnParse = @TRUE
    xmlDoc.resolveExternals = @TRUE
    xmlDoc.preserveWhiteSpace = @TRUE
    xmlDoc.setProperty("SelectionLanguge", "XPath")
    xmlDoc.setProperty("SelectionNamespaces", "xmlns:fns='http://www.abc.com/f_namespace'")
    xmlDoc.load(xml_file_path)
    xslStyleSheet = ObjectCreate("Msxml2.FreeThreadedDOMDocument.6.0")
    if !xslStyleSheet then xslStyleSheet = ObjectCreate("Msxml2.FreeThreadedDOMDocument")
    xslStyleSheet.async = @FALSE
    xslStyleSheet.validateOnParse = @TRUE
    xslStyleSheet.load(xsl_style_sheet_path)
    xslTemplate = ObjectCreate("Msxml2.XSLTemplate.6.0")
    if !xslTemplate then xslTemplate = ObjectCreate("Msxml2.XSLTemplate")
    xslTemplate.stylesheet = xslStyleSheet
    processor = xslTemplate.createProcessor()
    processor.input = xmlDoc
    processor.transform()
    ; create a new file and write the XML processor output to it
    fh = FileOpen(output_file_path, "WRITE", @FALSE)
    FileWrite(fh, processor.output)
    FileClose(fh)
    The style sheet, with some slight changes to protect the innocent:
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.1">
      <xsl:output method="xml" indent="yes" encoding="UTF-8"/>
      <xsl:template match="/">
        <fns:test_station xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:fns="http://www.abc.com/f_namespace">
          <xsl:for-each select="/fns:test_station/identification">
            <xsl:text>&#x0A; </xsl:text>
            <identification>
              <xsl:for-each select="./*">
                <xsl:text>&#x0A; </xsl:text>
                <xsl:copy-of select="."/>
              </xsl:for-each>
              <xsl:text>&#x0A; </xsl:text>
            </identification>
          </xsl:for-each>
          <xsl:for-each select="/fns:test_station/software">
            <xsl:text>&#x0A; </xsl:text>
            <software>
              <xsl:for-each select="./package">
                <xsl:text>&#x0A; </xsl:text>
                <package>
                  <xsl:for-each select="./*">
                    <xsl:text>&#x0A; </xsl:text>
                    <xsl:copy-of select="."/>
                  </xsl:for-each>
                  <xsl:text>&#x0A; </xsl:text>
                </package>
              </xsl:for-each>
              <xsl:text>&#x0A; </xsl:text>
            </software>
          </xsl:for-each>
          <xsl:for-each select="/fns:test_station/calibration">
            <xsl:text>&#x0A; </xsl:text>
            <calibration>
              <xsl:for-each select="./item">
                <xsl:text>&#x0A; </xsl:text>
                <item>
                  <xsl:for-each select="./*">
                    <xsl:text>&#x0A; </xsl:text>
                    <xsl:copy-of select="."/>
                  </xsl:for-each>
                  <xsl:text>&#x0A; </xsl:text>
                </item>
              </xsl:for-each>
              <xsl:text>&#x0A; </xsl:text>
            </calibration>
          </xsl:for-each>
        </fns:test_station>
      </xsl:template>
    </xsl:stylesheet>
    And this is a sample output file:
    <?xml version="1.0" encoding="UTF-16"?>
    <fns:test_station xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:fns="http://www.abc.com/f_namespace">
        <software>
            <package>
                <part_number>123456789</part_number>
                <version>00</version>
                <test_category>1</test_category>
                <description>name of software package</description>
                <execution_path>c:\program files\test\test.exe</execution_path>
                <execution_arguments>arguments</execution_arguments>
                <crc_path>c:\ste_config\crc\123456789.lst</crc_path>
                <uninstall_path>c:\ste_config\uninstall\uninst_123456789.bat</uninstall_path>
                <install_timestamp>2009-11-09T14:00:44</install_timestamp>
            </package>
        </software>
    </fns:test_station>

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to set up a local development environment for my Perl applications for a few days now with no luck. I'm getting nowhere fast, so I hope someone else who has done this successfully can help. I started by installing MAMP, which I thought would take care of everything for me, but unfortunately it doesn't take care of some important Perl modules. I used CPAN to install all our required modules, except that DBD::mysql doesn't seem to install correctly through CPAN. After reading a lot online (lots of people reported problems with this), many recommended using MacPorts to install the module, which I have tried with no luck using the following command:
    sudo port install p5-dbd-mysql
    After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts:
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains:
    /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level
    /Library/Perl/Updates/5.10.0
    /System/Library/Perl/5.10.0/darwin-thread-multi-2level
    /System/Library/Perl/5.10.0
    /Library/Perl/5.10.0/darwin-thread-multi-2level
    /Library/Perl/5.10.0
    /Network/Library/Perl/5.10.0/darwin-thread-multi-2level
    /Network/Library/Perl/5.10.0
    /Network/Library/Perl
    /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level
    /System/Library/Perl/Extras/5.10.0
    .) at (eval 1835) line 3.
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed,
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right.
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge.
    I'm not sure where to go from here, but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system set up successfully. Thanks in advance!
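    One hedged observation, based on the @INC list in that error: MacPorts installs Perl modules under /opt/local for its own Perl, and none of the paths Apache's Perl is searching point there. A sketch of how to verify which Perl actually has the module, and how to build it against the system Perl instead (the MAMP mysql_config path is an assumption):

        # which perl can actually load DBD::mysql?
        /usr/bin/perl -MDBD::mysql -e 'print $DBD::mysql::VERSION, "\n"'
        /opt/local/bin/perl -MDBD::mysql -e 'print $DBD::mysql::VERSION, "\n"'

        # build DBD::mysql with the system perl, pointed at MAMP's MySQL
        /usr/bin/perl Makefile.PL --mysql_config=/Applications/MAMP/Library/bin/mysql_config
        make && make test && sudo make install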

    Read the article

  • Most efficient way of creating tree from adjacency list

    - by Jeff Meatball Yang
    I have an adjacency list of objects (rows loaded from a SQL database with each key and its parent key) that I need to use to build an unordered tree. It's guaranteed to have no cycles. This is taking way too long (it processed only ~3K out of 870K nodes in about 5 minutes), running on my workstation, a Core 2 Duo with plenty of RAM. Any ideas on how to make this faster?
    public class StampHierarchy
    {
        private StampNode _root;
        private SortedList<int, StampNode> _keyNodeIndex;

        // takes a list of nodes and builds a tree starting at _root
        private void BuildHierarchy(List<StampNode> nodes)
        {
            Stack<StampNode> processor = new Stack<StampNode>();
            _keyNodeIndex = new SortedList<int, StampNode>(nodes.Count);

            // find the root
            _root = nodes.Find(n => n.Parent == 0);

            // find children...
            processor.Push(_root);
            while (processor.Count != 0)
            {
                StampNode current = processor.Pop();

                // keep a direct link to the node via the key
                _keyNodeIndex.Add(current.Key, current);

                // add children
                current.Children.AddRange(nodes.Where(n => n.Parent == current.Key));

                // queue the children
                foreach (StampNode child in current.Children)
                {
                    processor.Push(child);
                    nodes.Remove(child); // thought this might help the Where above
                }
            }
        }
    }

    public class StampNode
    {
        // properties: int Key, int Parent, string Name, List<StampNode> Children
    }

    Read the article

  • Using Xalan in Eclipse plugin

    - by Leslie Norman
    I am facing problems using Xalan in an Eclipse plugin. When I try to create the factory instance with:
    TransformerFactory tFactory = TransformerFactory.newInstance("org.apache.xalan.processor.TransformerFactoryImpl", null);
    I get the error:
    javax.xml.transform.TransformerFactoryConfigurationError: Provider org.apache.xalan.processor.TransformerFactoryImpl not found ...
    I have the following lib jars on the plugin classpath: xml-apis.jar, xercesImpl.jar, serializer.jar, xalan.jar. I can't even create a class instance directly:
    c = Class.forName("org.apache.xalan.processor.TransformerFactoryImpl");
    Object o = c.newInstance();
    It returns the error:
    java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl
    But if I run the same code outside the Eclipse plugin with the same libs on the classpath, it works fine. Could somebody give me an idea whether I am making a mistake, or how to resolve this issue?
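    One hedged thing to check, since OSGi bundles ignore the ordinary Java classpath: the jars must be declared on the bundle's own classpath in META-INF/MANIFEST.MF (a sketch assuming they live in a lib/ folder inside the plugin and are also listed in build.properties):

        Bundle-ClassPath: .,
         lib/xalan.jar,
         lib/serializer.jar,
         lib/xml-apis.jar,
         lib/xercesImpl.jar

    Without a Bundle-ClassPath entry, the plugin's classloader never sees the Xalan classes at run time even though the code compiles in the workspace, which would match a ClassNotFoundException that appears only inside Eclipse.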

    Read the article

  • Understanding the output of ldd

    - by nebukadnezzar
    I'm having a hard time understanding the output of ldd, especially the processor identifiers. The string in question is this one:
    Shortest.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, from ']', not stripped
    I have several questions about it:
    What does "ELF" mean? I know that's what Linux binaries are called (Windows binaries are called PE binaries, for "Portable Executable"), but isn't ELF an abbreviation for something?
    What does LSB mean? I can't even guess.
    I see the string "Intel" there, and now I seriously wonder about the portability of Linux binaries, as ldd seems to expect every binary to be compiled on an Intel processor. But what if it wasn't compiled on an Intel processor? Or what happens when I attempt to run the binary on a computer that doesn't run on top of an Intel processor?
    Why the ']'? My guess is it should be some sort of linker identifier, but ']' doesn't look much like an identifier...
    Thanks in advance
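    As an aside, that summary line is the kind printed by file(1); the same fields can be read straight out of the ELF header with binutils (shown as a hedged illustration):

        file Shortest.so
        readelf -h Shortest.so
        # Class:   ELF32                          -> "ELF 32-bit"
        # Data:    2's complement, little endian  -> "LSB" (least significant byte first)
        # Machine: Intel 80386                    -> the architecture recorded in the header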

    Read the article

  • About x86 architecture assembly and others

    - by caramel1991
    I have the desire to learn assembly language, so I searched the internet for information about it and came across some pages explaining that assembly is a low-level native language that varies from one processor to another. So I wonder: I'm currently running an Intel-based processor, though I've no idea whether it is x86 or what. Is it possible for me to learn another processor architecture's assembly on my PC? Besides that, are there any good books that could guide me through learning Intel-architecture assembly?
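    As a small aside (a sketch for a Linux shell; the asker's exact OS is unknown), finding out what the current machine actually is:

        uname -m                              # e.g. i686 or x86_64, both of which are x86
        grep -m1 'model name' /proc/cpuinfo   # the exact CPU model string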

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns:
    Reading ./localconfig...
    Checking for DBD-mysql (v4.00) ok: found v4.005
    Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
    Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
    Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
    Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
    There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.
    MySQL version:
    [root@bugzilla-core TMP]# mysql --version
    mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1
    And mysql_config:
    [root@bugzilla-core TMP]# mysql_config
    Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
    Options:
    --cflags         [-I/data01/mysql-5.0.60/include -g]
    --include        [-I/data01/mysql-5.0.60/include]
    --libs           [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
    --libs_r         [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
    --socket         [/tmp/mysql.sock]
    --port           [0]
    --version        [5.0.60sp1]
    --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]
    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped, and I'm not sure where to go from here; scouring the web hasn't returned anything fruitful. Any ideas?
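    The "Undefined subroutine &DBD::mysql::db::_login" symptom often indicates a DBD::mysql whose compiled XS portion does not match the DBI or libmysqlclient it is being loaded with. A hedged sketch of rebuilding it against the exact MySQL shown above, using the mysql_config path taken from that output:

        # from an unpacked DBD-mysql source directory
        perl Makefile.PL --mysql_config=/data01/mysql-5.0.60/bin/mysql_config
        make && make test && make install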

    Read the article

  • Restoring web session in struts2

    - by bozo
    Hi, I have a classic scenario of website and payment gateway integration, where the payment request is sent to the payment processor, and the payment processor calls back my application once it's done, with some parameters I passed to it in the original request. Among the parameters, we pass the jsessionid, and we expect that when the remote server makes a request to our server (via a customer browser redirect to our server), the session will be the same as the session used to send the initial payment request. This does not happen; we get two different sessions, although the payment processor includes our original jsessionid in the request to us (https://blabla/?jsessionid=something). How should we go about recreating the session in Struts 2, when the only thing that connects the 'OLD' and 'NEW' sessions is the jsessionid in the request URL? Any ideas? Is this possible at all, or is the 'OLD' session data deleted when the user moves away from our server onto a completely different domain, that of the payment processor, with their data-entry form? That would explain our inability to recreate the session. Thanks a lot for your replies.

    Read the article

  • Multiple plugin instance loading with MEF

    - by Dave
    In my last application, using MEF to load plugins went just fine, but now I'm running into a new issue. I have a solution for it that I explain at the end of this question, but I'm looking for other ways to do it. Let's say I have an interface called ApplianceInterface. I also have two plugins that inherit from ApplianceInterface; let's call them Blender and Processor. Now, I would like to have multiple Blenders and Processors in my application, but I am not sure how to instantiate them properly. Before, I would simply use the ImportMany attribute, and upon calling ComposeParts my application would load Blender and Processor. For example:
    [ImportMany(typeof(ApplianceInterface))]
    private IEnumerable<ApplianceInterface> Appliances { get; set; }
    and my Blender and Processor plugins would be attributed like this:
    [PartCreationPolicy(CreationPolicy.NonShared)]
    [Export(typeof(ApplianceInterface))]
    public class Blender : ApplianceInterface { ... }
    but what this ends up doing for me is populating Appliances with one Blender and one Processor. I need to be able to create an arbitrary number of Blender and Processor objects. Now, from the documentation I understand that [PartCreationPolicy(CreationPolicy.NonShared)] is what allows MEF to create a new instance each time, but is there a similar "magical" way to create a specific number of instances of something using MEF? Up until now I've relied on [Import] and [ImportMany] to resolve the assemblies. Is my only option to use a global container and then resolve the export manually using GetExportedValue<T>()? I have tried GetExportedValue<T>() and that implementation does work fine for me, but I was just curious whether there is a better, more accepted way to do it.

    Read the article
