Search Results

Search found 27621 results on 1105 pages for 'test plan'.

  • Query Execution Plan - When is the Where clause executed?

    - by Alex
    I have a query like this (created by LINQ):

        SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId]
        FROM [dbo].[DataFTS]('test', 100) AS [t0]
        WHERE [t0].[CreatorId] = 1
        ORDER BY [t0].[RANK]

    DataFTS is a full-text search table-valued function. The query execution plan looks like this:

        SELECT (0%)
        - Sort (23%)
        - Nested Loops (Inner Join) (1%)
        - Sort (Top N Sort) (25%)
        - Stream Aggregate (0%)
        - Stream Aggregate (0%)
        - Compute Scalar (0%)
        - Table Valued Function (FullTextMatch) (13%)
        | | - Clustered Index Seek (38%)

    Does this mean that the WHERE clause ([CreatorId] = 1) is executed prior to the TVF (the full-text search) or after the full-text search? Thank you.
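
    One way to check is to capture the actual plan operators alongside the results. The sketch below is only a way to inspect the plan and assumes the same function and column names as in the question: SET STATISTICS PROFILE returns one row per operator, and whichever operator carries the [CreatorId] = 1 predicate (typically the Clustered Index Seek that looks up the full-text keys in the base table) shows whether the filter is applied before or after the full-text match.

        SET STATISTICS PROFILE ON;   -- returns the actual plan operators with the result set
        GO
        SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId]
        FROM [dbo].[DataFTS]('test', 100) AS [t0]
        WHERE [t0].[CreatorId] = 1
        ORDER BY [t0].[RANK];
        GO
        SET STATISTICS PROFILE OFF;
        GO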

    Read the article

  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, and have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for 20 known records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split for the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation? EDIT: The question is not about which is more efficient, but why is the query execution plan showing both queries as having the same cost?
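
    The estimated cost percentages compare the optimizer's cost model for the two batches, not the actual work done, so two schemas can show a 50/50 split even if their runtime behaviour differs. A minimal way to compare actual reads and CPU instead is sketched below; the table names ParentGuid/ChildGuid and ParentInt/ChildInt are hypothetical stand-ins for the two copies described in the question.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- UNIQUEIDENTIFIER version (hypothetical names)
        SELECT c.*
        FROM dbo.ParentGuid AS p
        JOIN dbo.ChildGuid AS c ON c.ParentId = p.Id
        WHERE p.Id IN (SELECT TOP (20) Id FROM dbo.ParentGuid ORDER BY Id);

        -- INT version (hypothetical names)
        SELECT c.*
        FROM dbo.ParentInt AS p
        JOIN dbo.ChildInt AS c ON c.ParentId = p.Id
        WHERE p.Id IN (SELECT TOP (20) Id FROM dbo.ParentInt ORDER BY Id);

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    Comparing the logical reads and CPU time reported in the Messages tab gives a more direct answer than the estimated 50/50 split.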

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning. When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union. In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated at the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products hot-selling items? One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results: MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms; MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms; MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms; MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms; MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms; MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms; MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores. In the above test, the elapsed time of the parallel query was always higher than that of the serial query, even though more CPU cores were used. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected]. Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001. update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go Let's create the temporary table #FireDrill with all possible Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees update statistics #FireDrill with fullscan go Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --First serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same in the serial and parallel plans. The parallel plan has slightly more memory granted due to additional overhead. The Sort properties show that the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler. Intermediate Summary: The reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes. update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let's execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48), @zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show that the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler. Intermediate Summary: When employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts. Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go  Let’s create the temporary table #FireDrill with limited Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip       from Employees where Zip between 1800 and 2001 update statistics #FireDrill with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 989 ms to complete.  The execution plan shows the 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 1799 ms to complete.  The execution plan shows the 79360 KB of memory was granted while the estimated rows were 785594.  Sort Warnings in SQL Server Profiler.    The estimated number of rows between serial and parallel plan are the same. The parallel plan has slightly more memory granted due to additional overhead.  Intermediate Summary: The reason for the higher duration with parallel plan even with limited amount of Zip codes was sort spill. This is due to uneven distribution of employees over Zip codes, especially concentration of 49 out of 50 employees in Zip code 2001.   Now let’s update the Employees table and distribute employees evenly across all Zip codes. update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 250  ms to complete.  The execution plan shows the 9016 KB of memory was granted while the estimated rows were 79973.8.  No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0.  --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 85 ms to complete.  The execution plan shows the 13152 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.    
    Here you see that the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join. Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues. Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk. R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
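
    The article controls the degree of parallelism per query with the OPTION (MAXDOP n) hint. For completeness, the same cap can also be set instance-wide; this is a minimal sketch (the value 4 is only an example, not a recommendation from the article):

        -- Instance-wide default degree of parallelism; a per-query OPTION (MAXDOP n)
        -- hint, as used in the examples above, overrides this setting.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;
        RECONFIGURE;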

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such its clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :) Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nut shell the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values. [BrowserTheory] [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")] public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider) {     //Arrange     _browser = seleniumProvider.GetBrowser();     _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                  //Act     _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");     //Assert     var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);         Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to execute these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up. 
Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately,  the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices: new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com"); This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page. Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this: [Test] public void Purchase1UserLicenceNoSupport() {    //Arrange    ISelenium browser = BrowserHelpers.GetBrowser();    var db = DbHelpers.GetWebsiteDBDataContext();    browser.Start();    browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                 //Act     browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");    var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);    //Assert     Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed,  the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test. 
With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in the AssemblyInfo.cs using the following: [assembly: DegreeOfParallelism(3)] [assembly: Parallelizable(TestScope.All)] The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes, to 55 minutes, that's an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that either inefficiencies in the hub code, my test environment or the test runner code (or some combination of all three most likely) contributes to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RC's with the hub, and up the DegreeOfParallelism. *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.

    Read the article

  • Why does Akonadi on KDE 4.6.0 refuse to start?

    - by Patches
    Akonadi refuses to start on my fresh installation of KDE 4.6.0 from the kubuntu-backports PPA on Ubuntu 10.10 Maverick Meerkat, preventing me from usking KMail. Here is the full error output: patches@pleistocene:~/.local/share$ akonadictl start Starting Akonadi Server... done. patches@pleistocene:~/.local/share$ Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb772e400] 3: [0xb772e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6e9f941] 5: /lib/libc.so.6(abort+0x182) [0xb6ea2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb74d62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb757168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7581425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb758295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6e8bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb77ae400] 3: [0xb77ae416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6f1f941] 5: /lib/libc.so.6(abort+0x182) [0xb6f22e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75562dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75f168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7601425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb760295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6f0bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb778b400] 3: [0xb778b416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6efc941] 5: /lib/libc.so.6(abort+0x182) [0xb6effe42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75332dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75ce68e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb75de425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb75df95d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6ee8ce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb784e400] 3: [0xb784e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6fbf941] 5: /lib/libc.so.6(abort+0x182) [0xb6fc2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75f62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb769168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb76a1425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb76a295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6fabce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) "akonadiserver" crashed too often and will not be restarted! I tried moving the ~/.local/share/akonadi folder and running it fresh, and I also tried starting Akonadi from a brand new user, all to no avail. Requested by @djeikyb: patches@pleistocene:~$ ls -ld ~/.local drwxrwx--- 3 patches patches 4096 2011-02-07 03:15 /home/patches/.local patches@pleistocene:~$ mysql_upgrade Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect FATAL ERROR: Upgrade failed patches@pleistocene:~$ mysql_upgrade -S ~/.local/share/akonadi/socket-pleistocene/ Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--socket=/home/patches/.local/share/akonadi/socket-pleistocene/' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/home/patches/.local/share/akonadi/socket-pleistocene/' (111) when trying to connect FATAL ERROR: Upgrade failed

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
    It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]
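
    The two workarounds listed above can be sketched against the same temporary tables. The first recreates the filtered index with status_code as a key column instead of an included column; the second keeps the original index but forces the wide (per-index) update plan with the undocumented trace flag the post mentions.

        -- Workaround 1: add the filter column to the index key (INCLUDE is not sufficient)
        DROP INDEX uq1 ON #Target;
        CREATE UNIQUE INDEX uq1 ON #Target (ak, status_code) WHERE status_code = 'a';

        -- Workaround 2: keep the original index and force a wide update plan for the MERGE
        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code THEN
            UPDATE SET status_code = c.status_code
        OPTION (QUERYTRACEON 8790);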

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to a telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself I hadn't intended to enter an obfuscated code contest/code golf exercise. After my last telephone interview I was asked to build something. I did. That seemed fair. Go away and work this out is more my in office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples etc". So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases. My questions, then: Should I be put off? Why? Why not? Are these kinds of tests what I should be expecting for junior roles? Should I learn stuff exam style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books? Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up on something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter list. What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API or do you not mind if your programmers have a few books on their desk and google function calls every so often?

    Read the article

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html CS2100 Computer Organisation (please click title) The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems. Recommended Textbooks Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall. ISBN 0-13-324500-4. Computer Organizations and Design (The hardware/software interface) by David A. Patterson and John L. Hennessy. CS2105 Introduction to Computer Networks (please click title) This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing learning will also be given. These assignments provide an early exposure to network application programming, and students should be able to complete them using personal computers and the school's network facilities. Topics included: An overview of computer networks and the Internet Basic data communications Application layer Transport layer Network layer and routing Link layer and local area networks Recommended Textbook James F. Kurose & Keith W. Ross, Computer networking: A top-down approach featuring internet, Addison Wesley, 2001 I am wondering what resources (e.g. MIT OpenCourseWare or other universities' resources) are available to help me prepare for these particular modules. I am thinking: does the networking one look like CCNA? And the computer organisation one: is it like electronics, assembly, etc.? I learnt some electronics in Poly but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) Any help?

    Read the article

  • Why does this function work in a test project but go wrong in my project? (Windows Socket) [closed]

    - by user67449
    This is the function I defined, which is used to send a user-defined structure DataPack:

        int SockSend(DataPack &dataPack, SOCKET &sock, char *sockBuf){
            int bytesLeft=0, bytesSend=0;
            int idx=0;
            bytesLeft=sizeof(dataPack);
            // copy the DataPack into sockBuf
            memcpy(sockBuf, &dataPack, sizeof(dataPack));
            while(bytesLeft>0){
                memset(sockBuf, 0, sizeof(sockBuf));
                bytesSend=send(sock, &sockBuf[idx], bytesLeft, 0);
                cout<<"Called send(), bytesSend: "<<bytesSend<<endl;
                if(bytesSend==SOCKET_ERROR){
                    cout<<"Error at send()."<<endl;
                    cout<<"Error # "<<WSAGetLastError()<<" happened."<<endl;
                    return 1;
                }
                bytesLeft-=bytesSend;
                idx+=bytesSend;
            }
            cout<<"DataPack sent."<<endl;
            return 0;
        }

    My code in the test project is as follows:

        char sendBuf[100000];
        int res=SockSend(dataPack, sockConn, sendBuf);
        if(res==1){
            cout<<"SockSend() failed."<<endl;
        }else{
            cout<<"SockSend() succeeded."<<endl;
        }

    My code in my current project is:

        err=SockSend(dataPackSend, sockConn, sockBuf);
        if(err==1){
            cout<<"SockSend() failed."<<endl;
            exit(0);
        }else{
            cout<<"Sent DataPack number "<<dataPackSend.packNum<<endl;
        }

    Can you tell me where this function goes wrong? I would appreciate your answer.

    Read the article

  • What is a ‘best practice’ backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user-base. I am trying to think of a 'best practice' way to create a backup or mirror website, so that if something happens on domain.com, I can quickly point the site to backup.domain.com via a 301 redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com without knowing the difference. Is my method the ideal method, or have you enacted better methods for creating a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect the user to a backup. Any insight would be greatly appreciated.

    Read the article

  • Is it possible to setup an internal test email server to keep all mail sent to it?

    - by MattGrommes
    We have a need at my work to setup a test email server that will take all mail sent to it for delivery and instead just dump it into an account for later retrieval. I've been out of the email server configuration game long enough that I think that's possible but I don't know for sure. As a more specific example of what we need: We have code that sends emails to outside clients in certain cases. We want to point our code to a test server that will accept those emails, but not let them get to the outside world (yes, it's happened before, oops). We then need to be able to verify that Email X would have gotten sent to Client Y if we had sent to the real server. As a bonus, we have a error email alias on our real server that goes to the programmers that we would like to keep getting email from. So anything sent to that alias on the test server would forward to our real server for delivery. My preference is for postfix but our IT staff seems set on using sendmail (or Exchange) for everything so hints/pointers for either server would be helpful. Thanks a lot.

    Read the article

  • How to store 250TB of data and develop a backup/recovery plan?

    - by luccio
    I'm really new to this topic, so big apology for stupid questions. I have a school project and I want to know how to store 250TB of data with life-cycle for 18 months. It means every record is stored for 18 months and after this period of time can be deleted. There are 2 issues: store data backup data Due to amount of data I will probably need to combine data tapes and hard drives. I'd like to have "fast" access to 3 month old data, so ~42TB on disk. I really don't know what RAID should I use, or is here any better solution than combining disk and data tapes? Thanks for any advice, article, anything. I'm getting lost.

    Read the article

  • Max. Temp. on Intel Burn Test for Stock Dell Precision T3500

    - by HK1
    I'm troubleshooting an issue on a Dell Precision T3500. As part of my troubleshooting I've decided to try running a stress test using Intel Burn Test software. This machine is a stock configuration with 12GB of RAM and a Xeon W3670 processor (nothing overclocked). When I run IBT using the standard mode, SpeedFan reports a processor temperature in excess of 80C. I've seen numbers as high as 90C but even at that temperature the machine does not become unstable or crash. However, it seems way too high. This processor has a TCase of 67.9C according to Intel's website. I'm guessing that means I'm in the danger zone any time I go over that temperature. I've checked the cooling system and everything looks fine. I've even taken out the heat sink and reinstalled it with new thermal compound. This did not appear to make the problem better or worse. Is there a discrepancy somewhere in the way temperatures are measured or displayed? I've also tried using HWMonitor from CPUID and it reports the same temperatures. Should I just let the Standard Test go and disregard the temperature outputs?

    Read the article

  • Power surge PC damage: How can I test all components of my PC without access to a second computer?

    - by Doug T.
    Ever since we had some crazy power surges last week my 64-bit Windows 7 PC has been acting strange. My USB network adapter disconnects from the wireless and can't detect the signal. I have to disable/reenable the adapter to detect it again. Also my wife has reported that the PC has rebooted a few times while I'm not sitting at it. Today I finally caught the reboot while I was using the PC. I got this blue screen of death. Stop Code 0x00000109: "Modification of system code or a critical data structure was detected." I followed the advice at the linked article and ran a memory test. I used memtest86 and it's already found around 300,000 errors out of 8 gigs of RAM. Now I'm worried: what are the odds this is isolated to just my memory and not a system-wide problem? Isn't there a good chance that many other components are fried? More importantly, how can I test those other components? Are there tools similar to memtest I can use to test my motherboard/video card/power supply? If these are vendor-specific, is it typical for vendors to provide testing tools?

    Read the article

  • fftw in Visual Studio?

    - by drhorrible
    I'm trying to link my project with fftw and so far, I've gotten it to compile, but not link. As the site said, I generated all the .lib files (even though I'm only using double precision), and copied them to C:\Program Files\Microsoft Visual Studio 9.0\VC\lib, the .h file to C:\Program Files\Microsoft Visual Studio 9.0\VC\include and the .dll to C:\windows\system32. I've copied the tutorial program, and the exact error I am getting is: 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_free referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_destroy_plan referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_execute referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_plan_dft_1d referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_malloc referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) So, what could be wrong with my project setup? Thanks!
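
    Those unresolved __imp__fftw_* symbols live in the import library, so the most likely cause is that the linker was never told about it; copying the .lib into VC\lib only puts it on the search path. A minimal sketch, assuming the import library generated from libfftw3-3.def (the double-precision DLL) is the one in use; it exercises exactly the calls from the link errors:

        // Either add libfftw3-3.lib under Project Properties -> Linker -> Input ->
        // Additional Dependencies, or embed the dependency from source (MSVC-only pragma):
        #pragma comment(lib, "libfftw3-3.lib")
        #include <fftw3.h>

        int main() {
            // allocate, plan, execute and clean up a tiny 8-point transform
            fftw_complex *buf = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * 8);
            fftw_plan p = fftw_plan_dft_1d(8, buf, buf, FFTW_FORWARD, FFTW_ESTIMATE);
            fftw_execute(p);
            fftw_destroy_plan(p);
            fftw_free(buf);
            return 0;
        }

    The matching libfftw3-3.dll still has to be reachable at run time (same directory as the .exe or on PATH).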

    Read the article

  • Glassfish v3 / JNDI entry cannot be found problems!

    - by REMP
    I've been having problems trying to call an EJB's method from a Java Application Client. Here is the code. EJB Remote Interface package com.test; import javax.ejb.Remote; @Remote public interface HelloBeanRemote { public String sayHello(); } EJB package com.test; import javax.ejb.Stateless; @Stateless (name="HelloBeanExample" , mappedName="ejb/HelloBean") public class HelloBean implements HelloBeanRemote { @Override public String sayHello(){ return "hola"; } } Main class (another project) import com.test.HelloBeanRemote; import javax.naming.Context; import javax.naming.InitialContext; public class Main { public void runTest()throws Exception{ Context ctx = new InitialContext(); HelloBeanRemote bean = (HelloBeanRemote)ctx.lookup("java:global/Test/HelloBeanExample!com.test.HelloBeanRemote"); System.out.println(bean.sayHello()); } public static void main(String[] args)throws Exception { Main main = new Main(); main.runTest(); } } Well, what is my problem? JNDI entry for this EJB cannot be found! java.lang.NullPointerException at com.sun.enterprise.naming.impl.SerialContext.getRemoteProvider(SerialContext.java:297) at com.sun.enterprise.naming.impl.SerialContext.getProvider(SerialContext.java:271) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:430) at javax.naming.InitialContext.lookup(InitialContext.java:392) at testdesktop.Main.runTest(Main.java:22) at testdesktop.Main.main(Main.java:31) Exception in thread "main" javax.naming.NamingException: Lookup failed for 'java:global/Test/HelloBeanExample!com.test.HelloBeanRemote' in SerialContext [Root exception is javax.naming.NamingException: Unable to acquire SerialContextProvider for SerialContext [Root exception is java.lang.NullPointerException]] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at javax.naming.InitialContext.lookup(InitialContext.java:392) at testdesktop.Main.runTest(Main.java:22) at testdesktop.Main.main(Main.java:31) Caused by: javax.naming.NamingException: Unable to acquire SerialContextProvider for SerialContext [Root exception is java.lang.NullPointerException] at com.sun.enterprise.naming.impl.SerialContext.getProvider(SerialContext.java:276) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:430) ... 3 more Caused by: java.lang.NullPointerException at com.sun.enterprise.naming.impl.SerialContext.getRemoteProvider(SerialContext.java:297) at com.sun.enterprise.naming.impl.SerialContext.getProvider(SerialContext.java:271) ... 4 more Java Result: 1 I've trying with different JNDI entries but nothing works (I got this entries from NetBeans console): INFO: Portable JNDI names for EJB HelloBeanExample : [java:global/Test/HelloBeanExample, java:global/Test/HelloBeanExample!com.test.HelloBeanRemote] INFO: Glassfish-specific (Non-portable) JNDI names for EJB HelloBeanExample : [ejb/HelloBean, ejb/HelloBean#com.test.HelloBeanRemote] So I tried with the following entries but I got the same exception : java:global/Test/HelloBeanExample java:global/Test/HelloBeanExample!com.test.HelloBeanRemote ejb/HelloBean ejb/HelloBean#com.test.HelloBeanRemote I'm using Netbeans 6.8 and Glassfish v3!
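
    The NullPointerException inside SerialContext.getProvider usually means the standalone client never reaches the GlassFish naming service at all. A sketch of a first thing to try, assuming GlassFish v3 on the same machine with the default ORB port 3700 and with $GLASSFISH_HOME/modules/gf-client.jar added to the client's classpath (the host, port and jar path are assumptions, not values taken from the question):

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class Main {
            public static void main(String[] args) throws Exception {
                // point the InitialContext explicitly at the GlassFish ORB
                Properties props = new Properties();
                props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost"); // server host
                props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");      // default ORB port
                Context ctx = new InitialContext(props);
                Object ref = ctx.lookup("java:global/Test/HelloBeanExample!com.test.HelloBeanRemote");
                System.out.println(ref);
            }
        }

    If the lookup succeeds with the explicit properties, the remaining work is on the client classpath: gf-client.jar is what pulls in the provider that SerialContext is failing to acquire.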

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan of the same displayed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are found to be present and index fragmentation is also minimal as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: blocking, a bad plan in cache, outdated statistics, or a hardware bottleneck. To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause the transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out of date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
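
    For reference, two small snippets for the checks described above (the procedure name in the second one is made up for illustration):

        -- look for blocking; the blocked column holds the spid of the blocking session
        SELECT spid, blocked, waittype, lastwaittype, cmd
        FROM master..sysprocesses
        WHERE blocked <> 0;

        -- evict a single suspect plan instead of flushing the whole procedure cache
        EXEC sp_recompile N'dbo.usp_SuspectProcedure';

    sp_recompile marks just that object so it gets a fresh plan on its next execution, which is the gentler option described above.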

    Read the article

  • Why is the meaning of “ours” and “theirs” reversed with git-svn

    - by Marc Liyanage
    I use git-svn and I noticed that when I have to fix a merge conflict after performing a git svn rebase, the meaning of the --ours and --theirs options to e.g. git checkout is reversed. That is, if there's a conflict and I want to keep the version that came from the SVN server and throw away the changes I made locally, I have to use ours, when I would expect it to be theirs. Why is that? Example: mkdir test cd test svnadmin create svnrepo svn co file://$PWD/svnrepo svnwc cd svnwc echo foo > test.txt svn add test.txt svn ci -m 'svn commit 1' cd .. git svn clone file://$PWD/svnrepo gitwc cd svnwc echo bar > test.txt svn ci -m 'svn commit 2' cd .. cd gitwc echo baz > test.txt git commit -a -m 'git commit 1' git svn rebase git checkout --ours test.txt cat test.txt # shows "bar" but I expect "baz" git checkout --theirs test.txt cat test.txt # shows "baz" but I expect "bar"

    Read the article

  • JSON: select a specific item for editing with JavaScript

    - by minus4
    I have the following JSON: {"DPI":"66.8213457076566", "width":"563.341067", "editable":"True", "pricecat":"6", "numpages":"2", "height":"400", "page":[{"filename":"999_9_1.jpg", "line":[{"test":"test 1",lineid:22}, {"test":"test 2",lineid:22}, {"test":"test 3",lineid:22}, {"test":"test 4",lineid:22}, {"test":"blank",lineid:22}]}, {"filename":"999_9_2.jpg", "line":[]}]} I can do most things with lines, like measurements.page[0].line[0].lineid; but what I am really stuck with is when I want to edit a specific line and I only have the lineid available, not the line number in the array, for example: measurements.page[0].line[0].test = "new changed value"; however, I do not know the line number (the 0 in line[0]). Is there a way for me to get the index of a line that has a lineid value of, say, 22, all within JavaScript? Thanks.

    Read the article

  • Robotium Unit Testing on an application having multiple processes

    - by warenix
    I have written an application running activities in multiple processes. I tried Robotium by creating a new test project set target package to my application. When I executed it, the test stopped with the following error message: Error in testDisplayBlackBox: java.lang.RuntimeException: Intent in process com.abc.def resolved to different process com.abc.def:mail: Intent { act=android.intent.action.MAIN flg=0x10000000 cmp=com.abc.def/com.abc.def.email.activity.Welcome } at android.app.Instrumentation.startActivitySync(Instrumentation.java:377) at android.test.InstrumentationTestCase.launchActivityWithIntent(InstrumentationTestCase.java:119) at android.test.InstrumentationTestCase.launchActivity(InstrumentationTestCase.java:97) at android.test.ActivityInstrumentationTestCase2.getActivity(ActivityInstrumentationTestCase2.java:104) at com.abc.def.test.TestApk.setUp(TestApk.java:31) at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:190) at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:175) at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:555) at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1584) Test results for InstrumentationTestRunner=.E Time: 0.027 FAILURES!!! Tests run: 1, Failures: 0, Errors: 1 Is it possible to have any workaround provided that I have source code in hand?

    Read the article

  • How do I pass an ArrayCollection to an AdvancedDataGrid using HierarchicalData?

    - by R.Vijayakumar
    Problem with passing arraycollection to Advance datagrid. My Arraycollection structure like `   private var groupList:ArrayCollection = new ArrayCollection([ {Country:'India', children:[ {Country:'Series1', children:[                                {Matches:'India Test series 1',isEnable:false,id:1,isSelected:true},                                {Matches:'India Test series 2',isEnable:false,id:2,isSelected:true},                                {Matches:'India Test series 3',isEnable:false,id:3,isSelected:true}]},              {Country:'Series2', children:[                                {Matches:'Australia Test series 1',isEnable:false,id:25,isSelected:true},                                {Matches:'Australia Test series 2',isEnable:false,id:26,isSelected:true},                                {Matches:'Australia Test series 3',isEnable:false,id:27,isSelected:true}]} ]}, {Country:'Austrila', children:[ {Country:'Series1', children:[                                {Matches:'Australia Test series 1',isEnable:false,id:46,isSelected:true},                                {Matches:'Australia Test series 2',isEnable:false,id:47,isSelected:true},                                {Matches:'Australia Test series 3',isEnable:false,id:48,isSelected:true}]}, {Country:'Series2', children:[                                {Matches:'Australia Test series 1',isEnable:false,id:49,isSelected:true},                                {Matches:'Australia Test series 2',isEnable:false,id:50,isSelected:true},                                {Matches:'Australia Test series 3',isEnable:false,id:51,isSelected:true}]}, {Country:'Series3', children:[                                {Matches:'Australia Test series 1',isEnable:false,id:52,isSelected:true},                                {Matches:'Australia Test series 2',isEnable:false,id:53,isSelected:true},                                {Matches:'Australia Test series 3',isEnable:false,id:54,isSelected:true}]} ]} passing AD in dataProvider="{new HierarchicalData(groupList)}" It's working fine. it's show two menu of tree and childrens based on country .But i tried dynamic xml convert to Arraycollection by below code private function convertXmlToArrayCollection( file:String ):ArrayCollection { var xml:XMLDocument = new XMLDocument( file ); //var decoder:SimpleXMLDecoder = new SimpleXMLDecoder(); var decoder1:SimpleXMLDecoder = new SimpleXMLDecoder(true); var data1:Object = decoder1.decodeXML( xml ); var array1:Array = ArrayUtil.toArray(data1); return new ArrayCollection( array1 ); } my xml structure is <Country Country="India "> <Country Country="Series "> <Matches Matches="BIndependiente-Colon" id="701536" isEnable="false" isSelected="true" startDate="2009-10-29 01:30:00" EndDate="2009-10-29 01:30:00"/> <Matches Matches="Boca Juniors-Chacarita Juniors" id="701633" isEnable="false" isSelected="true" startDate="2009-10-29 19:00:00" EndDate=""/> </Country> </Country> <Country Country="Australia"> <Country Country="series"> <Matches Matches="BIndependiente-Colon" id="701536" isEnable="false" isSelected="true" startDate="2009-10-29 01:30:00" EndDate="2009-10-29 01:30:00"/> <Matches Matches="Boca Juniors-Chacarita Juniors" id="701633" isEnable="false" isSelected="true" startDate="2009-10-29 19:00:00" EndDate=""/> </Country> </Country> So if i tried to convert this format of xml code to arryacollection , it converted the array collection but when will i pass to Advance data grid it not show any result . What did i wrong ? 
groupList1 = convertXmlToArrayCollection(string1); Alert.show(groupList1[0].Country[0].Matches[0].id.toString()); // output is 701536. Where did I make a mistake? Please, can anyone point out what I need to change?

    Read the article

  • How do I select an item out of a JavaScript array when I only know a value for one of the item's properties?

    - by minus4
    I have the following JavaScript object: { "DPI": "66.8213457076566", "width": "563.341067", "editable": "True", "pricecat": "6", "numpages": "2", "height": "400", "page": [{ "filename": "999_9_1.jpg", "line": [{ "test": "test 1", lineid: 22 }, { "test": "test 2", lineid: 22 }, { "test": "test 3", lineid: 22 }, { "test": "test 4", lineid: 22 }, { "test": "blank", lineid: 22 }] }, { "filename": "999_9_2.jpg", "line": [] }] } I can do most things with lines like measurements.page[0].line[0].lineid; But what I am really stuck with is when I want to edit a specific line but I only have the lineid value available (for example 22) and not the line number in the array: measurements.page[0].line[WHAT DO I PUT HERE].test = "new changed value";
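
    A small sketch of the lookup, assuming the object shown is stored in the measurements variable referenced in the question, and that lineid values are unique within a page (in the sample data they are all 22, so this would simply return the first match):

        // find the index of the line whose lineid matches, then edit it in place
        function findLineIndex(lines, lineid) {
            for (var i = 0; i < lines.length; i++) {
                if (lines[i].lineid === lineid) {
                    return i;
                }
            }
            return -1; // not found
        }

        var idx = findLineIndex(measurements.page[0].line, 22);
        if (idx !== -1) {
            measurements.page[0].line[idx].test = "new changed value";
        }

    If you only need the object rather than its position, the same loop can return lines[i] directly.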

    Read the article

  • Strange problem publishing a fresh Ruby on Rails 3 application on localhost (Apache, Passenger and VirtualHosts)

    - by user502052
    I recently created a new Ruby on Rails 3 application locally on a Mac OS, named "test". Since I use apache2, in the private/etc/apache2/httpd.conf I set the VirtualHost for the "test" application: <VirtualHost *:443> ServerName test.pjtmain.localhost:443 DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> <VirtualHost *:80> ServerName test.pjtmain.localhost DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> </VirtualHost> Of course I restart apache2, but trying to access to http://test.pjtmain.localhost/ I have this error message from: FIREFOX Oops! Firefox could not find test.pjtmain.localhost Suggestions: * Search on Google: ... SAFARI Safari can’t find the server. Safari can’t open the page “http://test.pjtmain.localhost/” because Safari can’t find the server “test.pjtmain.localhost”. I have other RoR3 applications setted like that above in the httpd.conf file and all them work. What is the problem (maybe it is not related to apache...)? Notes: 1. Using the 'Network Uility' I did a Ping with the following result: ping: cannot resolve test.pjtmain.localhost: Unknown host and I did a Lookup with the follonwing result: ; <<>> DiG 9.6.0-APPLE-P2 <<>> test.pjtmain.localhost +multiline +nocomments +nocmd +noquestion +nostats +search ;; global options: +cmd <MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. 115 IN SOA dns1.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. dnsmaster.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. ( 2010110500 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) 2. I am using Phusion Passenger 3. Since I not changed nothing to the new "test" application, I expect to see the default RoR index.html page: 4. It seems that in the 'Console Messages' there is any warning or error
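
    Given that both ping and dig fail to resolve test.pjtmain.localhost (the lookup falls through to the ISP's DNS), the hostname most likely just needs a local entry before Apache ever sees the request. A first step to try, assuming the other working RoR3 applications have similar entries:

        # /etc/hosts: map the development hostname to the loopback interface
        127.0.0.1   test.pjtmain.localhost

    Once the name resolves, the VirtualHost configuration above should take over; if the app still misbehaves at that point, that is when to look at Passenger and the vhost itself.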

    Read the article

  • How to get an application to use a specific version of .NET?

    - by HN
    Hi, I am using pnunit to run nunit tests on remote machines. The pnunit agent loads the test and runs it on Windows 2008, but the test fails to load on Windows 2003. The agent error is: INFO PNUnit.Agent.PNUnitAgent - Registering channel on port 9080 INFO PNUnit.Agent.PNUnitAgent - RunTest called for Test MyTest, AssemblyName test.dll, TestToRun test.Program.myDeployTest INFO PNUnit.Agent.PNUnitTestRunner - Spawning a new thread INFO PNUnit.Agent.PNUnitTestRunner - Thread entered for Test MyTest:test.Program.myDeployTest Assembly test.dll Unhandled Exception: System.BadImageFormatException: The format of the file 'test' is invalid. File name: "test" Server stack trace: at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Boolean isStringized, Evidence assemblySecurity, Boolean throwOnFileNotFound, Assembly locationHint, StackCrawlMark& stackMark) On running procmon and monitoring the agent process, I could see that the agent executable was using .NET 1.1 assemblies on Windows 2003 and .NET 2.0 on Windows 2008, which could explain this behavior. How do I get the agent to use .NET 2.0 on Windows 2003?
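
    An executable built against .NET 1.1 will run on the 1.1 runtime when that runtime is installed (as on the Windows 2003 box) unless its configuration file says otherwise, so the usual fix is to drop a .config next to the agent executable telling it to prefer CLR 2.0. A sketch, assuming the agent binary is called agent.exe, so the file would be agent.exe.config; the real pnunit file name may differ:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <startup>
            <!-- prefer the .NET 2.0 runtime when it is installed -->
            <supportedRuntime version="v2.0.50727" />
          </startup>
        </configuration>

    With this in place the agent should load test assemblies built against .NET 2.0 instead of falling back to 1.1, which is what the BadImageFormatException points to.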

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >