Search Results

Search found 4646 results on 186 pages for 'multi'.


  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, European WCM vendor, today announced the next version of its WCM solution, AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reduce the ongoing costs of managing and distributing content to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net Axinom's WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage the delivery of rich media assets on the Web and on mobile devices. Axinom solutions are widely used by major European online brands in the IT, telco, retail, media and entertainment industries, such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post. Brand New User Interface Built with Silverlight 4: Using Silverlight 4, Axinom's team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, an integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom's team worked with the Silverlight Rough Cut Editor for video management and the Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. "Axinom's release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4," said Brian Goldfarb, director of the developer platform group at Microsoft Corp. "Microsoft is excited about the opportunity this creates for Web developers to streamline the creating, managing and distributing of online corporate content using AxCMS.net 10 and Silverlight." Rapid Web Development with Visual Studio 2010: AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that helps kick-start projects and teaches best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles out of the box by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence. Extended Multi-Site Management: AxCMS.net has long supported the management of an unlimited number of Web sites. The new version 10 of AxCMS.net will further improve multi-site management and provide features to editors and developers that simplify and accelerate multi-site and multi-language management. The extended publication workflow will take into account additional dependencies of dynamic objects, pages, and documents. "Customer requests have evolved from static HTML pages to dynamic Web application content, with the emergence of rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the .NET Framework 4 and Silverlight 4, we're on the fast track to making the three-screen strategy a reality for our customers," said Damir Tomicic, CEO of Axinom Group. "Our customers enjoy substantial competitive advantages from using the latest Microsoft technologies. We have a long-standing relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future."

    Read the article

  • How do I install Citrix receiver?

    - by krondor
    Has anyone managed to get the Citrix receiver client working in 64-bit Natty (11.04)? It seems libmotif4 won't install multi-arch (32-bit and 64-bit libraries). I also see crazy dependency errors despite the libraries being present. Here is what I received initially when trying to install icaclient.deb from Citrix: sudo dpkg -i Downloads/icaclient_11.100_i386.patched.deb dpkg: error processing Downloads/icaclient_11.100_i386.patched.deb (--install): package architecture (i386) does not match system (amd64) Errors were encountered while processing: Downloads/icaclient_11.100_i386.patched.deb I then installed the 32-bit libraries: sudo apt-get install ia32-libs ia32-libs-gtk Then I noticed libmotif4 (a dependency of the Citrix client) wasn't present, so I installed the 64-bit version: sudo apt-get install libmotif4 I then tried to force the 32-bit version: sudo dpkg --force-architecture -i Downloads/libmotif4_2.3.3-5_i386.deb dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64) Selecting previously deselected package libmotif4:i386. dpkg: error processing Downloads/libmotif4_2.3.3-5_i386.deb (--install): libmotif4:i386 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:amd64 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed So I uninstalled the 64-bit version and installed the 32-bit version instead. This worked, but when I attempt to install Citrix I enter dependency hell. sudo dpkg --force-architecture -i Downloads/icaclient_11.100_i386.patched.deb dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64) Selecting previously deselected package icaclient:i386. (Reading database ... 183036 files and directories currently installed.) Unpacking icaclient:i386 (from .../icaclient_11.100_i386.patched.deb) ... dpkg: dependency problems prevent configuration of icaclient:i386: icaclient:i386 depends on libc6 (>= 2.3). icaclient:i386 depends on libice6 (>= 1:1.0.0). icaclient:i386 depends on libsm6. icaclient:i386 depends on libx11-6. icaclient:i386 depends on libxaw7. icaclient:i386 depends on libxext6. icaclient:i386 depends on libxmu6. icaclient:i386 depends on libxp6. icaclient:i386 depends on libxpm4. icaclient:i386 depends on libxt6. dpkg: error processing icaclient:i386 (--install): dependency problems - leaving unconfigured Errors were encountered while processing: icaclient:i386 So in the state it's in, there are no icaclient binaries installed, only the docs. It is complaining about libraries that are indeed present (libc6, both 64-bit and 32-bit). libmotif4 is installed only as 32-bit and won't install alongside the 64-bit libmotif4. This is the libmotif4 error when you try to install it alongside the 32-bit instance: libmotif4:amd64 2.3.3-5 (Multi-Arch: no) is not co-installable with libmotif4:i386 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed Any tips?

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach, basically based on signals (or simply "async" as it is called by many papers and languages, like the new C# 5.0 for example), with a "companion thread" that manages the policy of your pipeline; and a concurrent, multi-threading approach. I will just say that I'm thinking about the hardware and the worst-case scenario here, and that I have tested these 2 paradigms myself. The async paradigm is such a clear winner that I don't understand why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU - the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it is in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to get their work done. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated or that newer CPUs have more than 2 cores is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice: with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to joining multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one between twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. 
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • Question about POP3 message termination octet

    - by user361633
    This is from the POP3 RFC. "Responses to certain commands are multi-line. In these cases, which are clearly indicated below, after sending the first line of the response and a CRLF, any additional lines are sent, each terminated by a CRLF pair. When all lines of the response have been sent, a final line is sent, consisting of a termination octet (decimal code 046, ".") and a CRLF pair. If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF". When examining a multi-line response, the client checks to see if the line begins with the termination octet. If so and if octets other than CRLF follow, the first octet of the line (the termination octet) is stripped away. If so and if CRLF immediately follows the termination character, then the response from the POP server is ended and the line containing ".CRLF" is not considered part of the multi-line response." Well, I have a problem with this: for example, Gmail sometimes sends the termination octet and then sends the CRLF pair in the NEXT line. For example: "+OK blah blah" "blah blah." "\r\n" That's very rare, but it happens sometimes, so obviously I'm unable to determine the end of the message in such a case, because I'm expecting a line that consists of '.\r\n'. Seriously, is Gmail violating the POP3 protocol, or am I doing something wrong? Also, I have a second question; English is not my first language, so I cannot understand this completely: "If any line of the multi-line response begins with the termination octet, the line is "byte-stuffed" by pre-pending the termination octet to that line of the response. Hence a multi-line response is terminated with the five octets "CRLF.CRLF"." When exactly is CRLF.CRLF used? Can someone give me a simple example? The RFC says it is used when any line of the response begins with the termination octet, but I don't see any lines that start with '.' in the messages that are terminated with CRLF.CRLF. I checked that. Maybe I don't understand something; that's why I'm asking.
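
    To make the termination and byte-stuffing rules quoted above concrete: CRLF.CRLF ends every multi-line response, while byte-stuffing only applies to content lines that themselves begin with a dot (a body line ".hidden" is transmitted as "..hidden" and unstuffed by the client). A minimal C sketch of the client-side handling the RFC describes might look like the following; read_line() is a hypothetical helper that returns one complete CRLF-terminated line from the socket, which is the assumption the RFC's wording rests on:

        #include <stdio.h>
        #include <string.h>

        /* hypothetical helper: reads one CRLF-terminated line (CRLF included) into buf,
           returns 0 at end of stream */
        int read_line(char *buf, size_t cap);

        void read_multiline_response(FILE *out)
        {
            char line[1024];
            while (read_line(line, sizeof line)) {
                if (strcmp(line, ".\r\n") == 0)   /* a bare ".CRLF" ends the response */
                    break;
                const char *payload = line;
                if (line[0] == '.')               /* ".anything" was byte-stuffed by the server, */
                    payload = line + 1;           /* so strip the extra leading dot */
                fputs(payload, out);              /* use the unstuffed line */
            }
        }

    If the "." and the CRLF merely arrive in separate reads (TCP can split anywhere), a client that buffers until a full CRLF-terminated line is available, as sketched here, still sees them as the single line ".\r\n"; it would only be a real deviation from the grammar above if the final dot were actually glued onto the end of the preceding content line.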

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to setup a local development environment for my perl applications for a few days now with no luck. I'm getting no where fast so I hope someone else who has done this successfully could help. I started by installing MAMP which I thought would take care of everything for me but unfortunately it doesn't take care of some important perl modules. I used CPAN to install all our required modules except that it seems DBD::mysql doesn't install correctly through CPAN. After reading a lot online, lots of people reported problems with this and recommended using MacPorts to install the module which I have tried doing with no luck using the following command: sudo port install p5-dbd-mysql After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts: [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed, [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge. I'm not sure where to go from here but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system setup successfully. Thanks in advance!

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns: Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.005 Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225. MySQL Version: [root@bugzilla-core TMP]# mysql --version mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1 And mysql_config: [root@bugzilla-core TMP]# mysql_config Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS] Options: --cflags [-I/data01/mysql-5.0.60/include -g] --include [-I/data01/mysql-5.0.60/include] --libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc] --libs_r [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc] --socket [/tmp/mysql.sock] --port [0] --version [5.0.60sp1] --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc] Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped. I'm not sure where to go from here. Scouring the 'webs haven't returned anything fruitful. Any ideas?

    Read the article

  • Which rdisk value in boot.ini maps to which disk?

    - by MA1
    Following are the contents of a sample boot.ini: [boot loader] timeout=30 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows 2000 Professional" /fastdetect multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /NOEXECUTE=OPTIN /FASTDETECT The rdisk value gives the physical disk number. So, if I have three hard disks, say /dev/sda, /dev/sdb and /dev/sdc, how do I know which disk (/dev/sda, /dev/sdb or /dev/sdc) is rdisk(0), which is rdisk(1), and so on?

    Read the article

  • Why am I getting errors in this switch statement written in C?

    - by mekasperasky
    I have a character array b which stores different identifiers in different iterations . I have to compare b with various identifiers of the programming language C and print it into a file . When i do it using the following switch statement it gives me errors b[i]='\0'; switch(b[i]) { case "if":fprintf(fp2,"if ----> IDENTIFIER \n"); case "then":fprintf(fp2,"then ----> IDENTIFIER \n"); case "else":fprintf(fp2,"else ----> IDENTIFIER \n"); case "switch":fprintf(fp2,"switch ----> IDENTIFIER \n"); case 'printf':fprintf(fp2,"prtintf ----> IDENTIFIER \n"); case 'scanf':fprintf(fp2,"else ----> IDENTIFIER \n"); case 'NULL':fprintf(fp2,"NULL ----> IDENTIFIER \n"); case 'int':fprintf(fp2,"INT ----> IDENTIFIER \n"); case 'char':fprintf(fp2,"char ----> IDENTIFIER \n"); case 'float':fprintf(fp2,"float ----> IDENTIFIER \n"); case 'long':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'double':fprintf(fp2,"double ----> IDENTIFIER \n"); case 'char':fprintf(fp2,"char ----> IDENTIFIER \n"); case 'const':fprintf(fp2,"const ----> IDENTIFIER \n"); case 'continue':fprintf(fp2,"continue ----> IDENTIFIER \n"); case 'break':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'for':fprintf(fp2,"long ----> IDENTIFIER \n"); case 'size of':fprintf(fp2,"size of ----> IDENTIFIER \n"); case 'register':fprintf(fp2,"register ----> IDENTIFIER \n"); case 'short':fprintf(fp2,"short ----> IDENTIFIER \n"); case 'auto':fprintf(fp2,"auto ----> IDENTIFIER \n"); case 'while':fprintf(fp2,"while ----> IDENTIFIER \n"); case 'do':fprintf(fp2,"do ----> IDENTIFIER \n"); case 'case':fprintf(fp2,"case ----> IDENTIFIER \n"); } the error being lex.c:94:13: warning: character constant too long for its type lex.c:95:13: warning: character constant too long for its type lex.c:96:13: warning: multi-character character constant lex.c:97:13: warning: multi-character character constant lex.c:98:13: warning: multi-character character constant lex.c:99:13: warning: character constant too long for its type lex.c:100:13: warning: multi-character character constant lex.c:101:13: warning: character constant too long for its type lex.c:102:13: warning: multi-character character constant lex.c:103:13: warning: character constant too long for its type lex.c:104:13: warning: character constant too long for its type lex.c:105:13: warning: character constant too long for its type lex.c:106:13: warning: multi-character character constant lex.c:107:13: warning: character constant too long for its type lex.c:108:13: warning: character constant too long for its type lex.c:109:13: warning: character constant too long for its type lex.c:110:12: warning: multi-character character constant lex.c:111:13: warning: character constant too long for its type lex.c:112:13: warning: multi-character character constant lex.c:113:13: warning: multi-character character constant lex.c: In function ‘int main()’: lex.c:90: error: case label does not reduce to an integer constant lex.c:91: error: case label does not reduce to an integer constant lex.c:92: error: case label does not reduce to an integer constant lex.c:93: error: case label does not reduce to an integer constant lex.c:94: warning: overflow in implicit constant conversion lex.c:95: warning: overflow in implicit constant conversion lex.c:95: error: duplicate case value lex.c:94: error: previously used here lex.c:96: warning: overflow in implicit constant conversion lex.c:97: warning: overflow in implicit constant conversion lex.c:98: warning: overflow in implicit constant conversion lex.c:99: warning: overflow in implicit 
constant conversion lex.c:99: error: duplicate case value lex.c:97: error: previously used here lex.c:100: warning: overflow in implicit constant conversion lex.c:101: warning: overflow in implicit constant conversion lex.c:102: warning: overflow in implicit constant conversion lex.c:102: error: duplicate case value lex.c:98: error: previously used here lex.c:103: warning: overflow in implicit constant conversion lex.c:103: error: duplicate case value lex.c:97: error: previously used here lex.c:104: warning: overflow in implicit constant conversion lex.c:104: error: duplicate case value lex.c:101: error: previously used here lex.c:105: warning: overflow in implicit constant conversion lex.c:106: warning: overflow in implicit constant conversion lex.c:106: error: duplicate case value lex.c:98: error: previously used here lex.c:107: warning: overflow in implicit constant conversion lex.c:107: error: duplicate case value lex.c:94: error: previously used here lex.c:108: warning: overflow in implicit constant conversion lex.c:108: error: duplicate case value lex.c:98: error: previously used here lex.c:109: warning: overflow in implicit constant conversion lex.c:109: error: duplicate case value lex.c:97: error: previously used here lex.c:110: warning: overflow in implicit constant conversion lex.c:111: warning: overflow in implicit constant conversion lex.c:111: error: duplicate case value lex.c:101: error: previously used here lex.c:112: warning: overflow in implicit constant conversion lex.c:112: error: duplicate case value lex.c:110: error: previously used here lex.c:113: warning: overflow in implicit constant conversion lex.c:113: error: duplicate case value lex.c:101: error: previously used here
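
    The compiler messages above all come down to one rule: a C switch can only dispatch on an integer constant expression, so string literals like "if" and multi-character constants like 'else' cannot be case labels. A minimal sketch of the conventional alternative - a keyword table scanned with strcmp() - is shown below; the table contents mirror the cases above, but the function and variable names are made up for illustration:

        #include <stdio.h>
        #include <string.h>

        static const char *keywords[] = {
            "if", "then", "else", "switch", "printf", "scanf", "NULL",
            "int", "char", "float", "long", "double", "const", "continue",
            "break", "for", "sizeof", "register", "short", "auto",
            "while", "do", "case"
        };

        /* writes "<keyword> ----> IDENTIFIER" to fp2 when b matches a keyword */
        void classify_token(FILE *fp2, const char *b)
        {
            for (size_t k = 0; k < sizeof keywords / sizeof keywords[0]; k++) {
                if (strcmp(b, keywords[k]) == 0) {
                    fprintf(fp2, "%s ----> IDENTIFIER \n", keywords[k]);
                    return;
                }
            }
            /* not a keyword: handle as an ordinary identifier, number, etc. */
        }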

    Read the article

  • Jquery fade and swap an element when clicked which will also relate to an accordian menu

    - by Nik
    You will notice when you click posture 1 the description drops down and images appear on the right. Now when you click posture 2 or posture 3 the images and description change as they should. What I need to do now is - If posture 1 has been clicked and then posture 2 is clicked the posture 1 menu needs to close so that there is only one posture description visible at one time. If I could also make it so that if the current open posture item is clicked so that it closes and there are no open posture descriptions that there also no images displayed on the right. Finally is there a way to make sure only one set of animation images is running, because just say the user goes through all 26 options and they continue to run in the background it may get sluggish (thanks to Nick Craver for bringing that up). At this stage only posture 1, 2 and 3 are available. Ok finally some code - //Description drop-down boxes $(document).ready(function(){ //Hide (Collapse) the toggle containers on load $(".toggle_container").hide(); //Switch the "Open" and "Close" state per click $("h5.trigger").toggle(function(){ $(this).addClass("active"); }, function () { $(this).removeClass("active"); }); //Slide up and down on click $("h5.trigger").click(function(){ $(this).next(".toggle_container").slideToggle("slow"); }); }); //Images on the right fade in and out thanks to aSeptik $(document).ready(function(){ $('#section_Q_01,#section_Q_02,#section_Q_03').hide(); $(function() { $('h5.trigger a').click( function(e) { e.preventDefault(); var trigger_id = $(this).parent().attr('id'); //get id Q_## $('.current').removeClass('current').hide(); //add a class for easy access & hide $('#section_' + trigger_id).addClass('current').fadeIn(5000); //show clicked one }); }); }); //Fading pics $(document).ready(function(){ $('.pics').cycle({ fx: 'fade', speed: 2500 }); }); Description boxes - <h5 class="trigger" id="Q_01" ><a href="#">Posture 1 : Standing Deep Breathing :</a></h5> <div class="toggle_container" > <div class="block"> <span class="sc">Pranayama Series</span> <p class="bold">Benefits:</p> </div> </div> <h5 class="trigger" id="Q_02" ><a href="#">Posture 2 : Half Moon Pose With Hands To Feet Pose :</a></h5> <div class="toggle_container"> <div class="block"> <span class="sc">Ardha Chandrasana with Pada-Hastasana</span> <p class="bold">Benefits:</p> </div> </div> <h5 class="trigger" id="Q_03" ><a href="#">Posture 3 : Awkward Pose :</a></h5> <div class="toggle_container"> <div class="block"> <span class="sc">Utkatasana</span> <p class="bold">Benefits:</p> </div> </div> and the images on the right - <div id="section_Q_01" class="01"> <div class="pics"> <img src="../images/multi/poses/pose1/Pranayama._01.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._02.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._03.jpg"/> </div> </div> <div id="section_Q_02" class="02"> <div class="pics"> <img src="../images/multi/poses/pose2/Half_Moon_Pose_04.jpg" /> <img src="../images/multi/poses/pose2/Backward_Bending_05.jpg" /> <img src="../images/multi/poses/pose2/Hands_to_Feet_Pose_06.jpg" /> </div> </div> <div id="section_Q_03" class="03"> <div class="pics"> <img src="../images/multi/poses/pose3/Awkward_01.jpg" /> <img src="../images/multi/poses/pose3/Awkward_02.jpg" /> <img src="../images/multi/poses/pose3/Awkward_03.jpg" /> </div> </div> It would be a bonus if images faded out when another element is clicked... but not a big deal. Thanks for having a look

    Read the article

  • .NET multithreading and quad-core processors

    - by w0051977
    I have a single-threaded application that runs on a machine with a quad-core processor. The scheduled tasks that run VB.NET forms are too slow. I am new to multi-threading and parallel computing. If you have a single-threaded application that runs on a server with a multi-core processor, does the application only ever use one of the processors? What happens if you have multiple scheduled tasks and multiple instances are in memory at the same time? I have read this question on Stack Overflow: http://stackoverflow.com/questions/607775/how-to-write-net-applications-that-utilize-multi-core-processors, but I am still not clear.

    Read the article

  • How to calculate the maximum number of requests for a 128 MB VPS?

    - by ifdion
    I am a newbie here, so please let me know if I'm using the wrong webmaster terms. I am currently setting up a VPS for a multi-site WordPress. The VPS uses a Debian 6 LNMP setup and the DNS is being taken care of by another service. Currently the VPS is running a non-multi-site WordPress, using roughly 83 MB of its 128 MB RAM. As far as I know, performance is relative to the number of requests, not the number of sites in the multi-site setup. The question: How do I calculate the maximum number of requests with that setup? If that information is not enough, what other factors do I need to know? Thank you in advance.
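
    A rough back-of-the-envelope way to approach this (the per-worker and per-request figures below are illustrative assumptions, not measurements of this particular VPS): the RAM left over after the OS, web server and MySQL determines how many PHP worker processes can run at once, and the sustainable request rate is roughly that concurrency divided by the average time one request takes. For example:

        RAM available for PHP workers ≈ 128 MB − 83 MB (OS, nginx, MySQL, ...) ≈ 45 MB
        concurrent workers            ≈ 45 MB ÷ ~25 MB per WordPress worker    ≈ 1–2
        sustainable request rate      ≈ workers ÷ avg. response time, e.g. 2 ÷ 0.5 s ≈ 4 requests/s

    This is consistent with the point above: the number of sites only matters insofar as it changes how many requests arrive and how much memory each one needs.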

    Read the article

  • XMI format error loading project on argouml

    - by Tom Brito
    Have anyone experienced this (org.argouml.model.)XmiException opening a project lastest version of argouml? XMI format error : org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace If this file was produced by a tool other than ArgoUML, please check to make sure that the file is in a supported format, including both UML and XMI versions. If you believe that the file is legal UML/XMI and should have loaded or if it was produced by any version of ArgoUML, please report the problem as a bug by going to http://argouml.tigris.org/project_bugs.html. System Info: ArgoUML version : 0.30 Java Version : 1.6.0_15 Java Vendor : Sun Microsystems Inc. Java Vendor URL : http://java.sun.com/ Java Home Directory : /usr/lib/jvm/java-6-sun-1.6.0.15/jre Java Classpath : /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/deploy.jar Operation System : Linux, Version 2.6.31-20-generic Architecture : i386 User Name : wellington User Home Directory : /home/wellington Current Directory : /home/wellington JVM Total Memory : 34271232 JVM Free Memory : 10512336 Error occurred at : Thu Apr 01 11:21:10 BRT 2010 Cause : org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:307) at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:273) at org.argouml.persistence.XmiFilePersister.doLoad(XmiFilePersister.java:261) at org.argouml.ui.ProjectBrowser.loadProject(ProjectBrowser.java:1597) at org.argouml.ui.LoadSwingWorker.construct(LoadSwingWorker.java:89) at org.argouml.ui.SwingWorker.doConstruct(SwingWorker.java:153) at org.argouml.ui.SwingWorker$2.run(SwingWorker.java:281) at java.lang.Thread.run(Thread.java:619) Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:232) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1359) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:136) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:98) at org.netbeans.lib.jmi.xmi.SAXReader.read(SAXReader.java:56) at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:233) ... 
7 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiElement$Instance.setReferenceValues(XmiElement.java:699) at org.netbeans.lib.jmi.xmi.XmiElement$Instance.resolveAttributeValue(XmiElement.java:772) at org.netbeans.lib.jmi.xmi.XmiElement$Instance. (XmiElement.java:496) at org.netbeans.lib.jmi.xmi.XmiContext.resolveInstanceOrReference(XmiContext.java:688) at org.netbeans.lib.jmi.xmi.XmiElement$ObjectValues.startSubElement(XmiElement.java:1460) at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:219) ... 22 more ------- Full exception : org.argouml.persistence.XmiFormatException: org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:298) at org.argouml.persistence.XmiFilePersister.doLoad(XmiFilePersister.java:261) at org.argouml.ui.ProjectBrowser.loadProject(ProjectBrowser.java:1597) at org.argouml.ui.LoadSwingWorker.construct(LoadSwingWorker.java:89) at org.argouml.ui.SwingWorker.doConstruct(SwingWorker.java:153) at org.argouml.ui.SwingWorker$2.run(SwingWorker.java:281) at java.lang.Thread.run(Thread.java:619) Caused by: org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:307) at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:273) ... 6 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:232) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1359) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:136) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:98) at org.netbeans.lib.jmi.xmi.SAXReader.read(SAXReader.java:56) at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:233) ... 
7 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiElement$Instance.setReferenceValues(XmiElement.java:699) at org.netbeans.lib.jmi.xmi.XmiElement$Instance.resolveAttributeValue(XmiElement.java:772) at org.netbeans.lib.jmi.xmi.XmiElement$Instance. (XmiElement.java:496) at org.netbeans.lib.jmi.xmi.XmiContext.resolveInstanceOrReference(XmiContext.java:688) at org.netbeans.lib.jmi.xmi.XmiElement$ObjectValues.startSubElement(XmiElement.java:1460) at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:219) ... 22 more the original project was created on argo v0.28.1, and (as I remember) have only use case diagrams. and yes, I'll report at the specified argo website either.. :) But anyone know anything about this exception?

    Read the article

  • Cross Compiling Boost for use on the Gumstix Overo with GumROS

    - by amelim
    I'm trying to cross-compile boost for use with the ROS framework on a Gumstix Overo. I've been following the posted instructions here (modifying the script when need be), however I've come across an issue where bjam will not compile boost properly. I call bjam as follows: # boost if [ ! -f /opt/gumros/lib/libboost_date_time-gcc41-mt-1_38.so ] ; then if [ ! -f boost_1_38_0.tar.gz ] ; then wget --tries=10 http://heanet.dl.sourceforge.net/sourceforge/boost/boost_1_38_0.tar.gz fi # tar xzf boost_1_38_0.tar.gz cd boost_1_38_0 GPP_PATH=${OVEROTOP}/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++ GPP_VER=`${GPP_PATH} -v 2>&1 | tail -1 | awk '{print $3}'` echo "using gcc : ${GPP_VER} : ${GPP_PATH} ; " > tools/build/v2/user-config.jam sudo apt-get install bjam set +o errexit sudo bjam --toolset=gcc-${GPP_VER} --prefix=/opt/gumros --with-date_time install set -o errexit cd .. else echo "boost appears to be already installed; skipping." fi if [ ! -f /opt/gumros/lib/libboost_date_time-gcc41-mt-1_38.so ] ; then echo "Failed to compile libboost_date_time"; exit; fi I've checked the user-config to make sure everything was kosher as well as making sure the GPP_PATH is correct. However, when I run the scrip I come across compilation errors such as: Reading package lists... Done Building dependency tree Reading state information... Done bjam is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. ...patience... ...found 14370 targets... ...updating 14 targets... gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o g++: error trying to exec 'cc1plus': execvp: No such file or directory "/home/andrew/overo-oe/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++" -ftemplate-depth-128 -O3 -finline-functions -Wno-inline -Wall -pthread -fPIC -DBOOST_ALL_DYN_LINK=1 -DBOOST_ALL_NO_LIB=1 -DDATE_TIME_INLINE -DNDEBUG -I"." -c -o "bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o" "libs/date_time/src/gregorian/greg_month.cpp" ...failed gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o... gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o g++: error trying to exec 'cc1plus': execvp: No such file or directory "/home/andrew/overo-oe/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++" -ftemplate-depth-128 -O3 -finline-functions -Wno-inline -Wall -pthread -fPIC -DBOOST_ALL_DYN_LINK=1 -DBOOST_ALL_NO_LIB=1 -DDATE_TIME_INLINE -DNDEBUG -I"." -c -o "bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o" "libs/date_time/src/gregorian/greg_weekday.cpp" ...failed gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o... gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/date_generators.o g++: error trying to exec 'cc1plus': execvp: No such file or directory Etc... For reference, I'm using this tutorial to help me out. http://www.ros.org/wiki/gumros

    Read the article

  • Reading email address from contacts fails with weird memory issue

    - by CapsicumDreams
    Hi all, I'm stumped. I'm trying to get a list of all the email addresses a person has. I'm using the ABPeoplePickerNavigationController to select the person, which all seems fine. I'm setting my ABRecordRef personDealingWith; from the person argument to - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker shouldContinueAfterSelectingPerson:(ABRecordRef)person property:(ABPropertyID)property identifier:(ABMultiValueIdentifier)identifier { and everything seems fine up till this point. The first time the following code executes, all is well. When subsequently run, I can get issues. First, the code: // following line seems to make the difference (issue 1) // NSLog(@"%d", ABMultiValueGetCount(ABRecordCopyValue(personDealingWith, kABPersonEmailProperty))); // construct array of emails ABMultiValueRef multi = ABRecordCopyValue(personDealingWith, kABPersonEmailProperty); CFIndex emailCount = ABMultiValueGetCount(multi); if (emailCount > 0) { // collect all emails in array for (CFIndex i = 0; i < emailCount; i++) { CFStringRef emailRef = ABMultiValueCopyValueAtIndex(multi, i); [emailArray addObject:(NSString *)emailRef]; CFRelease(emailRef); } } // following line also matters (issue 2) CFRelease(multi); If compiled as written, there are no errors or static analysis problems. This crashes with a *** -[Not A Type retain]: message sent to deallocated instance 0x4e9dc60 error. But wait, there's more! I can fix it in either of two ways. Firstly, I can uncomment the NSLog at the top of the function. I get a leak from the NSLog's ABRecordCopyValue every time through, but the code seems to run fine. Also, I can comment out the CFRelease(multi); at the end, which does exactly the same thing. Static analysis errors, but running code. So without a leak, this function crashes. To prevent a crash, I need to haemorrhage memory. Neither is a great solution. Can anyone point out what's going on?
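
    For what it's worth, a sketch of the ownership pattern the Core Foundation "Copy rule" asks for is below, written against the plain C AddressBook API; all names come from the code above except the hypothetical CopyEmailAddresses() wrapper. It is not a claimed fix for this exact crash (one thing worth checking separately is whether the saved ABRecordRef is still valid the second time around, since records are only guaranteed while the address book they came from is alive), but it shows every *Copy* call balanced by exactly one CFRelease, and it avoids the nested ABRecordCopyValue whose result can never be released:

        #include <AddressBook/AddressBook.h>

        /* Returns a CFArray of CFStringRefs; the caller owns it and must release it.
           `person` is assumed to be the ABRecordRef obtained from the people picker. */
        static CFArrayRef CopyEmailAddresses(ABRecordRef person)
        {
            CFMutableArrayRef result = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
            ABMultiValueRef emails = ABRecordCopyValue(person, kABPersonEmailProperty); /* we own this */
            if (emails != NULL) {
                CFIndex emailCount = ABMultiValueGetCount(emails);
                for (CFIndex i = 0; i < emailCount; i++) {
                    CFStringRef emailRef = ABMultiValueCopyValueAtIndex(emails, i);     /* we own this too */
                    if (emailRef != NULL) {
                        CFArrayAppendValue(result, emailRef); /* the array retains it */
                        CFRelease(emailRef);                  /* balance the Copy */
                    }
                }
                CFRelease(emails);                            /* balance the Copy - exactly once */
            }
            return result;
        }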

    Read the article

  • JMF RTPManager transmitting side

    - by TacB0sS
    I was wondering, please: the RTP manager in JMF can perform as unicast, multicast, or uni-multicast. If the session is multicast, then you add the local address to the target list - why is that? What is the logic and effect behind this? Thanks for your help, Adam.

    Read the article

  • Need an Overview of Possibilities for multicolumn programming

    - by Sam
    Hi folks, from source1 and source2 I gather that IE9 will NOT support CSS3 multi-column!! Since it is still the most popular browser (another thing I cannot understand), I am left with no other choice than to use programming power to make multi-columns work. Right now I use three divs that float left and are filled with text manually. Please don't laugh, I know it's stupid! But I would like not to have to worry about the columns at all: just one piece of uninterrupted text that all goes into a single div, and a program smart enough to split it up into X equally wide columns. Question: before I start reinventing the wheel, what programming-power methods do you know that tackle this elegantly? Please suggest your best working multi-column layout sources so I can evaluate which option is the best (I will update the table below). Exploring all possibilities for 2011 and beyond, to enable a multi-column text user experience:
    Language   | Author        | Source/Code/Usage                                 | Works on all major browsers?
    -----------+---------------+---------------------------------------------------+-----------------------------
    html       | manual labour | put text manually in separate left-floating divs  | "Y" // Upside: control! Downside: a few changes necessitate reflowing 3 divs manually!
    CSS3       | w3c           | css3.info/preview/multi-column-layout/            | "N" // { -moz-column-count: 3; -webkit-column-count: 3; } That's all!
    javascript | a list apart  | will add url soon                                 | ?
    php        | ?             | ?                                                 | ?

    Read the article

  • ThickBox - update inside without redirect

    - by Alex Maslakov
    ASP.NET MVC and jQuery ThickBox. I show some content in a ThickBox; it includes a file upload form. The view multi-media.aspx (it's strange, this editor doesn't allow starting a line with "<" in the code): form action="/upload/multi-media" method="post" enctype="multipart/form-data" label for="file"File name: input type="file" name="file" id="file" / input type="submit" value="Upload" / <% if (Model.Count > 0) { foreach (FileInfo mediaFile in Model) { <img width="100px" height="100px" src="<%: ResolveUrl("~/audio.png") %>" border="0" alt="<%: mediaFile.Name %>" / //................ After the file upload I redirect to the /upload/multi-media action, and the content shows in a normal window, not in the ThickBox. [ActionName("multi-media"), HttpPost] public ActionResult MultiMedia(HttpPostedFileBase file) { if (file.ContentLength > 0) { file.SaveAs(GenerateNewFileName(fullFileName)); return View("multi-media", model); } } How can I stay inside the ThickBox after the file upload? I need to show the content in the ThickBox all the time, even after the file upload.

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
    I was not just irritated, I was downright shocked - and for a brief moment speechless (which rarely happens). Someone had just asked me "When will Oracle offer something comparable to AlwaysOn - if at all?" Had I landed in the wrong movie? I could not help voicing my displeasure and explaining that the question normally runs the other way around. Granted, there may be debatable points in a comparison between Oracle and SQL Server - points where Oracle does not necessarily always come out ahead - but clustering for high availability (HA), disaster recovery (DR) and scalability is certainly not one of them. Afterwards I filed that experience away as a one-off that would never happen again, until, shortly afterwards, I was taught otherwise and heard exactly the same question a second time - this time even in an Exadata context with an Oracle stretch cluster. Once is never, but twice is once too often... True to that old motto, it was clear to me that this could not be left standing. I have no idea how the Microsoft marketing department managed to make people suspect an innovative technology behind the AlwaysOn branding - but it has apparently done its job well. Good marketing aside, the question of course remains what is really behind it and how it all compares to Oracle - if it compares at all. Which brings us back to the original question. So much for the background of this blog post - the rest of the post is my answer. "Windows was the God ..." To understand the real difference between Oracle and Microsoft, you first have to know the most important Microsoft dogma. It can be stated quite simply: "Everything must be based on Windows." The heading of this paragraph is not a phrase I invented but a quote. It comes from a longer article by Kurt Eichenwald in Vanity Fair from August 2012, entitled Microsoft's Lost Decade, which I recommend to anyone who wants to better understand the "Microsoft machinery" under Steve Ballmer and some of its curiosities. "YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer during his keynote at the 2012 International Consumer Electronics Show in Las Vegas on January 9. Some things in that article may seem exaggerated - they are not. Much of it I already knew from my own experience and can only confirm; other parts only really fell into place for me there. The following passages in particular brought the aha moment: "Windows was the god—everything had to work with Windows," said Stone... "Every little thing you want to write has to build off of Windows (or other existing products)," one software engineer said. "It can be very confusing, …" I have always pointed out that in a SQL Server failover cluster the Microsoft database contributes nothing noteworthy of its own, but relies entirely on the Windows operating system. That is also why the Windows Server Enterprise Edition must be installed if a failover cluster is to be set up for SQL Server - because that is where the cluster services come from, not from SQL Server. 
SQL Server is merely another server product for which Windows can be used in failure scenarios - just like Microsoft Exchange, for example, or Microsoft SharePoint, or any other server product hosted on Windows. Oracle can be used this way as well; the keyword here is Oracle Fail Safe. But why would you, when a superior technology such as Oracle Real Application Clusters (RAC) is available at the same time, which does not require a Windows Enterprise Edition because Oracle ships its own clusterware, and which moreover delivers shorter failover times because this cluster technology is database-integrated and does not rely on "third parties"? So if a SQL Server failover cluster buys you no technical advantages, and you additionally take on hidden license costs by having to license the Windows Server Enterprise Edition, why has Microsoft not likewise worked on a new and innovative solution since SQL Server 2000 that can keep up with Oracle RAC? Microsoft has enough developers, and it can hardly be a question of money. Just read the two quotes above once more and you will understand the reason. There is really no other way to explain why AlwaysOn consists of two different technologies, both of which are in turn based on Windows Server Failover Clustering (WSFC). Clear disadvantages follow from this - more on that later. To understand AlwaysOn, it helps to briefly recall what Microsoft has offered so far as HA/DR (High Availability/Disaster Recovery) solutions for SQL Server. Replication: based on logical replication and a publisher/subscriber architecture - transactional replication, merge replication and snapshot replication. Microsoft's replication is comparable to Oracle GoldenGate, though Oracle GoldenGate is the more comprehensive technology and offers high performance. Log Shipping: Microsoft's log shipping is a simple technology, comparable to Oracle managed recovery in Oracle version 7. It has the following characteristics: transaction log backups are shipped from the primary to the secondary or secondaries; they are applied (i.e. restored) individually on each secondary; an optional third server instance (monitor server) provides monitoring and alerting; the log restore can be paused to allow a read-only mode on the secondary; automatic failover is not supported. Database Mirroring: Microsoft's database mirroring became available with SQL Server 2005 and looked like Oracle Data Guard in Oracle 9i, but was functionally not as comprehensive. An HA/DR pair is a 1:1 relationship protecting the production database (principal DB); all insert, update and delete operations are replayed on the standby database (mirrored DB). Modes: synchronous (high-safety mode) and asynchronous (high-performance mode). Automatic failover is supported in high-safety (synchronous) mode and requires a witness server. On the question of continuity: where do these technologies now stand in the context of SQL Server 2012? Introduced with great fanfare in its day, database mirroring was the declared tool of choice. 
I am not a product manager at Microsoft and can only offer my own opinion here, but judging by the SQL AlwaysOn team blog, things do not look good for database mirroring - at least not in the long run. "Does AlwaysOn Availability Group replace Database Mirroring going forward?" "The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases." With that, we have finally arrived at the actual topic. What is a so-called availability group, and what exactly is behind the promising-sounding name AlwaysOn? SQL Server 2012 - AlwaysOn. Two HA features hide behind the "AlwaysOn" branding: AlwaysOn Failover Clustering, aka SQL Server Failover Cluster Instances (FCI), on the one hand, and AlwaysOn Availability Groups on the other. Failover Cluster Instances (FCI) roughly correspond to Oracle's stretch cluster concept, build on Windows Server Failover Clustering (WSFC), and provide HA at the instance level. AlwaysOn Availability Groups are similar in concept to consistency groups as found in storage-level replication software such as EMC SRDF, depend on Windows Server Failover Clustering (WSFC), and provide HA at the database level. Note: do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms mean different things here - not infrequently a reason why Oracle and Microsoft DBAs quickly talk past each other. Think of a SQL Server database as something closer to an Oracle schema; the SQL Server Northwind database, for instance, is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc. Windows Server Failover Clustering (WSFC): as you can see, both AlwaysOn technologies are in turn based on Windows Server Failover Clustering (WSFC) to provide high availability at the instance level on the one hand and at the database level on the other - hence a short description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system to provide HA for server applications such as Microsoft Exchange, SharePoint, SQL Server, etc. Like any other cluster, a WSFC cluster consists of a group of independent servers that work together to increase the availability of an application or service. If a cluster node or service fails, the service previously hosted on that node can be transferred automatically or manually to another available node in the cluster - commonly known as failover. Under SQL Server 2012, both AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances use WSFC as the platform technology to register components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources. 
The WSFC cluster service can then detect that the SQL Server instance needs to be restarted, or trigger an automatic failover to another server node in the WSFC cluster. Failover Cluster Instances (FCI): a SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance operated in a failover cluster made up of several Windows Server Failover Clustering (WSFC) nodes, thereby providing HA (high availability) at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can also be supported; a further option for remote DR is to run a database hosted under FCI in an availability group - more on this later. FCI and WSFC basics: FCI, which is used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). In SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering and used the Windows Server Failover Cluster. In SQL Server 2012 Microsoft has folded this base technology into the AlwaysOn label, but it is still the classic active-passive configuration. In a failover the sequence is as follows: provided no hardware or system error has occurred, all dirty pages in the buffer cache are written to disk; all relevant SQL Server services in the resource group are stopped on the active node; ownership of the resource group is transferred to another node of the FCI; the new owner of the resource group starts its SQL Server services; and client connection requests are automatically redirected to the new active node under the same virtual network name (VNN). Depending on when the last checkpoint ran, the number of dirty pages in the buffer cache still to be written to disk can lead to unpredictably long failover times. To throttle this number, SQL Server 2012 has a new capability called indirect checkpoints, which resembles the Fast-Start MTTR Target feature of the Oracle database, available since Oracle9i. SQL Server Multi-Subnet Clustering: conceptually, a SQL Server multi-subnet failover cluster corresponds to an Oracle RAC stretch cluster - but only at first glance. In contrast to RAC, in a local SQL Server failover cluster only one node is ever active for a given database, and for data replication between geographically distant sites Microsoft relies on third-party storage mirroring solutions. The improvement a SQL Server 2012 implementation brings to this scenario is simply that a VLAN (Virtual Local Area Network) configuration is no longer required, as it previously was. The following diagram shows how this is handled with SQL Server 2012: in Site A and Site B, HA is provided by a local active-passive cluster at each site. Particular attention must be paid to configuration and tuning here, otherwise completely unacceptable failover times will result. The reason is that client-side downtime no longer depends only on the pure failover time, but additionally on the duration of DNS replication between the DNS servers. (Remember that we are talking about multi-subnet clustering here.) 
You also have to consider how quickly clients query the updated DNS information. Special configuration of node heartbeat, HostRecordTTL (host record time-to-live) and intersite replication frequency for Active Directory Sites and Services becomes necessary: the default TTL for Windows Server 2008 R2 is 20 minutes (recommended setting: 1 minute), and the DNS update replication frequency in a Windows environment is 180 minutes (recommended setting: 15 minutes, the minimum value). Looking at these numbers, one has to conclude that even an optimal configuration cannot meet the rigid SLAs (service level agreements) that today's business-critical applications impose for HA and DR, because it implies a total client-side failover time of 16 minutes. An excerpt from the SQL Server 2012 online documentation on this point: "Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule." We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependency on Windows increasingly becomes a problem, because it raises the complexity of a Microsoft-based solution instead of reducing it - and complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, such long failover times are not a hard "must" but a "can". Yet even a "can" can have unforeseeable and costly consequences at the wrong moment. The unpredictability in turn stems from the many components involved in the implementation and their dependencies, for example three cluster solutions (two from Microsoft, one third-party). However you twist and turn it, there is no getting around this fact - quite independently of the length of any downtime or failover time. In contrast to AlwaysOn and the stretch cluster variant presented here, a corresponding Oracle implementation avoids this kind of complexity caused by multiple dependencies. The difference comes from database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA (Maximum Availability Architecture) configurations, inter-site failover times in the range of seconds are not uncommon. If you follow the link to the Oracle MAA you will also find a series of customer case studies - another important point of differentiation from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments. Availability Groups: the so-called availability groups are, alongside FCI, the other building block of AlwaysOn. Note: before we look at them more closely, recall once more that a SQL Server database does not mean the same thing as an Oracle database, but rather corresponds to an Oracle schema; the SQL Server Northwind database is comparable to the Oracle Scott schema. An availability group is composed of a set of several user databases that are treated together as a group in the event of a failover. 
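
As an illustration (not part of the original article), a minimal T-SQL sketch of how such a group is defined in SQL Server 2012 - server names, database name and endpoint URLs are placeholders, and the availability and failover modes used here are discussed further below:

    -- create an availability group with one synchronous and one asynchronous secondary (illustrative names)
    CREATE AVAILABILITY GROUP AG_Sales
    FOR DATABASE SalesDB
    REPLICA ON
      N'NODE1' WITH (ENDPOINT_URL = N'TCP://node1.corp.local:5022',
                     AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                     FAILOVER_MODE = AUTOMATIC),
      N'NODE2' WITH (ENDPOINT_URL = N'TCP://node2.corp.local:5022',
                     AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                     FAILOVER_MODE = MANUAL,
                     SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));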
An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas). Not every SQL Server database can be placed in an AlwaysOn availability group, however. SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article: availability groups must be created with user databases - system databases cannot be used; the databases must be in read-write mode - read-only databases are not supported; the databases in an availability group must be multiuser databases; they must not use the AUTO_CLOSE feature; they must use the full recovery model and a full backup must exist; a given database can belong to only one availability group and must not be configured for database mirroring; Microsoft also recommends that the directory path of a database be identical on the primary and secondary servers. As you can see, availability groups are not suited to covering HA and DR completely; the distinction between the instance level (FCI) and the database level (availability groups) matters a great deal. Recently I was told that availability groups let you do without shared storage and thereby save costs. So far so good ... You can of course set up an installation purely with availability groups and without FCI - but you should then be aware of everything that is left unprotected, and of what that in turn means for disaster recovery (DR) and SLAs (service level agreements). In short, in practice you will hardly get around the combination of both AlwaysOn products and the complexity that comes with it. Availability Groups and WSFC: AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC. The availability mode is a property of each availability replica, so synchronous and asynchronous can be mixed. Availability modes: asynchronous commit mode (the primary replica completes transactions without waiting for the secondary) and synchronous commit mode (the primary replica waits for the commit of the secondary replica). Failover types: automatic, manual and forced (with possible data loss). Synchronous commit mode allows a planned, manual failover without data loss as well as automatic failover without data loss; asynchronous commit mode allows only a forced, manual failover with possible data loss. SQL Server has no separate switchover concept as in Oracle Data Guard; for SQL Server, all role transitions are called failover. In fact SQL Server supports no switchover for asynchronous connections - there is only the forced failover with possible data loss - so a capability similar to the switchover under Oracle Data Guard simply is not there. SQL Server FCI with Availability Groups: in addition to availability groups, a second failover level can be established by implementing SQL Server FCI (on shared storage) with WSFC. 
An availability replica can then be hosted either on a standalone instance or on an FCI instance. To be clear: the availability groups themselves require no shared storage. This combination can be used for local HA at the instance level and DR at the database level through availability groups; the following diagram shows this scenario. Caution! This is not a counterpart to Oracle RAC plus Data Guard, even if the picture might give that impression - all secondary nodes in the FCI are purely passive. There is also a further, serious limitation: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups, so every availability replica hosted under FCI can only be configured for manual failover. Readable secondary replicas: one or more availability replicas in an availability group can be configured for read access while running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions. All queries against the secondary database are automatically mapped to the snapshot isolation level, which is based on row versioning. With this, Microsoft tried to emulate Oracle MVRC (Multi Version Read Consistency); in fact, SQL Server snapshot isolation is better compared with Oracle Flashback. The snapshot isolation level is a retrofitted feature rather than an inherent part of the database kernel, as it is with Oracle. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.) In practice, the mapping to the snapshot isolation level creates serious restrictions that you should be aware of before going into production: if an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database will not immediately be fully available. An active transaction on the primary replica must first complete (commit or rollback) and that transaction record must be processed on the secondary replica. Until then, the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft says: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) Fundamentally this means that an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica. Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example by a long-running query on the secondary replica. This cleanup is also blocked if the connection to the secondary replica is lost or the data exchange is interrupted. 
Log truncation is likewise prevented in this state. If the situation persists for an extended period, Microsoft recommends removing the secondary replica from the availability group - which is a serious downtime problem. The read-only workload on the secondary replicas can also block incoming DDL changes: although the read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), so DDL changes applied by redo operations can be blocked. If DDL is blocked by a competing read workload and the threshold for 'recovery interval' (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, whereupon Microsoft recommends killing the blocking readers - with no regard whatsoever for the availability of the application. None of these restrictions exists with Oracle Active Data Guard. Backups on secondary replicas: on the secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be taken as copy-only backups of a full database, files or filegroups. Incremental backups are not supported, which is a serious shortfall compared with the backup support for physical standbys under Oracle Data Guard (note: a possible workaround via snapshots remains a workaround). A further limitation of this feature compared with Oracle Data Guard is that a backup on a secondary replica cannot run if the replica cannot communicate with the primary replica; in addition, the secondary replica must be synchronized, or in the process of synchronizing, for the backup to be taken on it. Comparing Microsoft AlwaysOn with the Oracle MAA: I come back to the question mentioned at the start and put to me several times - "When, if ever, will Oracle offer something comparable to AlwaysOn?" - and my associated (brief) irritation. If you have read this post up to this point, you now know my answer. Not every point applies to everyone, and that was never the deeper purpose of my answer; if, for example, no multi-subnet setup is involved, all the criticisms relating to it are moot for the time being - which does not mean they could not become relevant again tomorrow (never say "never"). Other pitfalls are typically encountered not in a test environment but only in live operation, all the more so if you are unaware of potential problems and run no dedicated tests for them. And anyone who wants to position AlwaysOn successfully will have little interest in drawing attention to possible weaknesses and the proverbial devil in the detail. That is not an accusation - it is only human. It is also understandable to concentrate primarily on "what works" and "what runs well" rather than on "what can cause problems" or "what does not work"; who wants to be the killjoy? Speaking for myself, I would rather know about every possible limitation in advance than experience it painfully first-hand after a short spell in an apparently perfect world. 
I am convinced you feel the same. Below, therefore, is a summary of all the points I consider essential to mention in comparison with the Oracle MAA (Maximum Availability Architecture) if an evaluation of Microsoft AlwaysOn is being considered. 1. AlwaysOn is a complex technology. The SQL Server AlwaysOn stack is composed of three different technologies: Windows Server Failover Clustering (WSFC), SQL Server Failover Cluster Instances (FCI) and SQL Server Availability Groups. Such a solution cannot be called seamless, as the many restrictions documented by Microsoft themselves also show. While earlier SQL Server versions evolved towards their own HA/DR technologies (such as database mirroring), Microsoft now recommends migrating away from them. Why this about-turn? It does not lead to a consistent and robust portfolio of HA/DR technology for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still applies and that Microsoft prefers not to see the downsides of this overly tight coupling with Windows? Decide for yourself ... 2. Failover Cluster Instances - no RAC counterpart. The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and wastes system resources. With only two nodes in view, the full added value of an active-active cluster such as Oracle Real Application Clusters, developed by Oracle ten years ago, is not immediately obvious. But once you know the advantages of scaling by simply adding further cluster nodes, which then all work together as one logical system, you understand what lies behind the motto "pay as you grow". An active-active cluster is also about high availability - and failover is faster than in an active-passive model - but it is not only about that. It is worth noting here that the Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, because Oracle 11g ships its own clusterware; you get high availability and scalability and can use the cheaper Windows Server Standard Edition. 3. SQL Server Multi-Subnet Clustering - dependency on third-party storage mirroring. The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but remains based on the active-passive model. The really problematic part is that protection of the database relies on third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring layer. When a failover occurs at the instance level in the cluster, there is no coordination with a possible failover at the storage array level. 4. Availability Groups - four, or really only two? A primary replica allows up to four secondary replicas within an availability group, but only two of them in synchronous commit mode. 
While that is an advance over the strict 1:1 model of database mirroring, it still leaves SQL Server 2012 far behind Oracle Data Guard with its up to 30 direct standby targets - and many more reachable through cascading targets. Oracle Active Data Guard is therefore also suitable for providing reader-farm scalability for internet-based businesses; this is not possible with AlwaysOn availability groups. 5. Availability Groups - no asynchronous switchover. Availability group technology is also positioned as a suitable vehicle for administrative tasks such as upgrades or maintenance. One must, however, be aware of a serious deficit: in asynchronous availability mode, the only option for a role transition is a forced failover with data loss! To avoid losing data during planned maintenance, you have to configure the synchronous availability mode, which in turn has serious implications for WAN deployments. Following this thought to its conclusion, availability group technology cannot be used effectively for planned maintenance in such an environment. 6. Automatic failover - not always possible. Both SQL Server FCI and availability groups support automatic failover, but if you combine them the result is no automatic failover at all: their interplay in a failover situation leads to race conditions, which is why this configuration no longer allows automatic failover to a replica in an availability group. Here again the underlying problem of AlwaysOn shows itself - a composite of different technologies with a dependency on Windows. 7. Problematic RTO (Recovery Time Objective). Microsoft positions the SQL Server multi-subnet clustering architecture as a viable HA/DR architecture, but given the issues around DNS replication and possible client-side waits of up to 16 minutes, strict RTO requirements cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF). 8. Problematic RPO (Recovery Point Objective). SQL Server allows forced failover, but offers no way to automatically transfer the last bits of data from the old to the new primary replica if the availability mode was asynchronous. Oracle Data Guard provides exactly this through its flush redo capability, ensuring zero data loss and the best possible RPO even in forced failover situations. 9. Readable secondary replicas with restrictions. Because readable secondary replicas rely on the snapshot isolation transaction level, they carry restrictions that affect the primary database: the cleanup of ghost records on the primary database is affected by long-running queries on the readable secondary; the readable secondary cannot be added to the availability group while there are active transactions on the primary; and DDL changes on the primary database can be blocked by queries on the secondary. 
And incremental backups are not supported here either. None of these restrictions exists under Oracle Data Guard.

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark. What's New in MySQL Cluster 7.2 MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes. Multi-Threaded Data Node Extensions The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads. 2) Transaction Coordinator threads (tc) 3) Asynchronous Replication threads (rep) 4) Schema Management threads (main) 5) Network receiver threads (recv) 6) Network send threads (send) 7) IO threads Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. 
The schema management thread has little load, so is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve highest throughput however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 to a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1 The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster. Conclusion MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
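
As a rough illustration of how these thread counts are expressed in practice - this is an assumed, typical layout, not a configuration taken from the benchmark - the data node threads described above map onto the ThreadConfig parameter in the cluster's config.ini:

    [ndbd default]
    # illustrative only: 16 LDM, 8 TC, 8 send and 8 recv threads, single main and rep threads
    ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},rep={count=1}

The older MaxNoOfExecutionThreads parameter remains available for simpler setups where an explicit per-type breakdown is not needed.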

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build Vs Buy benefit Don't have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model BI apps have a very rich dimensional data model built over 10 years that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs. 
As Was hierarchy support Currency Conversion - support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic data translations Code standardization (domains) Historical snapshots Cycle and process lifecycle computations Balance facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals Complex event interpretation of source data, e.g. what constitutes a transfer, deriving supervisors via position hierarchy, deriving primary assignment in PSFT, categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus and overtime payments, and counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations Multi-pass processing of multiple sources, e.g. headcount, salary, promotion and performance, to allow side-by-side comparison Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher-level analytical reporting and data discovery Calculation of complex measures, for example COGS, DSO, DPO, inventory turns, and transfers within a hierarchy or out of / into a hierarchy relative to the viewpoint in the hierarchy Configurability and Extensibility support BI apps offer support for extensibility for various entities, either as automated extensibility or as part of the extension methodology: key flexfield and descriptive flexfield support, extensible attribute support (JDE), conformed domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model Multi-source, multi-technology Orchestration – creates a load plan taking into account task dependencies and the customer's deployment, generating a plan from multiple complex ETL tasks Plan optimization allowing parallel ETL tasks Oracle: bitmap indexes and partition management High availability support Follow-the-sun support. 
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task-based setup flows Data lineage Functional setup performed in a Web UI by a functional person Configuration Test-to-Production support Security BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs Fine-grained object security at report / dashboard and presentation catalog level Data security integration with source systems Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth etc) Common currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role-based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates and the ability for customers to create their own aggregates easily on warehouse facts allow for scalable end user performance Incremental extracts and loads Incremental aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with GoldenGate Micro ETL support Bitmap indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source-Specific Staging and Real Time Schema Support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading external data External data enrichment choices: UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases: Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice: ODI (coming), Informatica Extensible and Customizable Extensible architecture and methodology to add custom and external content Upgradable across releases Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • B2B Commerce Best Practice Round Table

    - by Jeri Kelley
    Are you struggling with delivering customers a consistent B2B multi-channel commerce experience? If yes, then you will want to join us for a panel discussion featuring Oracle customers and B2B commerce experts on Thursday, September 27th to learn how leading B2B companies are succeeding in the new age of commerce. Topics of discussion will include: Moving B2B data and content online Multiple site management Mobile platforms Merchandising and personalization Don’t miss this opportunity to learn more about the latest trends, challenges and successes in B2B multi-channel commerce. Learn more and register!

    Read the article

  • Significance of SEO Submissions

    Search Engine Optimization is an important strategy for making your company's web presence cost-effective and productive. Raising your target audience's interest in your website, drawing visitors toward your online identity, and getting them to browse through your products and services is a key step in making your business successful across the board. This multi-faceted, multi-beneficial task bears fruit when it is applied well.

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >