Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.


  • Software Center not opening after installing Ice from Peppermint

    - by darkapex
    Software Center has not opened since I installed the "Ice" software (used in Peppermint OS) from ppa:kendalltweaver/peppermint, and I keep getting this error:

      $ software-center
      ERROR:root:DebFileApplication import
      Traceback (most recent call last):
        File "/usr/share/software-center/softwarecenter/db/__init__.py", line 3, in <module>
          from debfile import DebFileApplication
        File "/usr/share/software-center/softwarecenter/db/debfile.py", line 25, in <module>
          from softwarecenter.db.application import Application, AppDetails
        File "/usr/share/software-center/softwarecenter/db/application.py", line 28, in <module>
          from softwarecenter.backend.channel import is_channel_available
        File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
          from softwarecenter.distro import get_distro
        File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 194, in <module>
          distro_instance = _get_distro()
        File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 169, in _get_distro
          module = __import__(distro_id, globals(), locals(), [], -1)
      ImportError: No module named Peppermint
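
    The last traceback frame shows software-center importing a Python module named after the system's reported distribution ID, so a quick check (a hedged diagnostic sketch, not part of the original question) is to see what the system now claims to be:

      lsb_release -is          # software-center turns this ID into a module name
      cat /etc/lsb-release     # DISTRIB_ID should say Ubuntu

    If either prints Peppermint instead of Ubuntu, the PPA has most likely changed the distribution ID, and restoring the Ubuntu values should let the import succeed again.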

    Read the article

  • juju bootstrap fails with a local environment, why?

    - by Braiam
    Each time I try to bootstrap juju using a local environment, it fails starting the juju-db-braiam-local script as follows:

      $ sudo juju --debug --verbose bootstrap
      2013-10-20 02:28:53 INFO juju.provider.local environprovider.go:32 opening environment "local"
      2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:210 found "10.0.3.1" as address for "lxcbr0"
      2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:234 checking 10.0.3.1:8040 to see if machine agent running storage listener
      2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:237 nope, start some
      2013-10-20 02:28:53 DEBUG juju.environs.tools storage.go:87 Uploading tools for [raring precise]
      2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:109 looking for: juju
      2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:150 checking: /usr/bin/jujud
      2013-10-20 02:28:53 INFO juju.environs.tools build.go:156 found existing jujud
      2013-10-20 02:28:53 INFO juju.environs.tools build.go:166 target: /tmp/juju-tools243949228/jujud
      2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:217 forcing version to 1.14.1.1
      2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"FORCE-VERSION", Mode:420, Uid:0, Gid:0, Size:8, ModTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}}
      2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"jujud", Mode:493, Uid:0, Gid:0, Size:19179512, ModTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}}
      2013-10-20 02:28:55 INFO juju.environs.tools storage.go:106 built 1.14.1.1-raring-amd64 (4196kB)
      2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-precise-amd64
      2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-raring-amd64
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
      2013-10-20 02:28:55 INFO juju.environs.boostrap bootstrap.go:57 bootstrapping environment "local"
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
      2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
      2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
      2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:395 create mongo journal dir: /home/braiam/.juju/local/db/journal
      2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:401 generate server cert
      2013-10-20 02:28:55 INFO juju.provider.local environ.go:421 installing service juju-db-braiam-local to /etc/init
      2013-10-20 02:28:56 ERROR juju.provider.local environ.go:423 could not install mongo service: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
      2013-10-20 02:28:56 ERROR juju supercommand.go:282 command failed: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
      error: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)

    What is the reason for this error and how can I solve it?
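
    The failing step is the upstart job that runs the local environment's MongoDB instance. A hedged troubleshooting sketch (the log path and package names are standard for Ubuntu of that era, but are not taken from the post):

      # upstart writes per-job logs here; this usually names the real failure
      sudo cat /var/log/upstart/juju-db-braiam-local.log
      # the local provider needs mongodb; the juju-local metapackage pulls in its dependencies
      sudo apt-get install juju-local mongodb-server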

    Read the article

  • How to remove the Mint stuff from my system after installing MATE desktop using MINT repository

    - by tinuz
    I installed the MATE desktop using this manual: http://www.unixmen.com/linux-tutorials/linux-distributions/linux-distributions4-ubuntu/1975-install-linuxmint12-extensions-mate-a-mgse-in-ubuntu-1110- but now I can't open my Ubuntu Software Center and can't open the settings of the Update Manager. I removed the MATE desktop, but that didn't fix the problem. I also reinstalled the Software Center, software-properties-gtk and software-properties-common using:

      sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this line I get the following error:

      Reading package lists... Done
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
      Need to get 0 B/735 kB of archives.
      After this operation, 0 B of additional disk space will be used.
      (Reading database ... 304824 files and directories currently installed.)
      Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
      Unpacking replacement software-center ...
      Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
      Unpacking replacement software-properties-common ...
      Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
      Unpacking replacement software-properties-gtk ...
      Processing triggers for desktop-file-utils ...
      Processing triggers for gnome-menus ...
      Processing triggers for bamfdaemon ...
      Rebuilding /usr/share/applications/bamf.index...
      Processing triggers for hicolor-icon-theme ...
      Processing triggers for man-db ...
      Processing triggers for shared-mime-info ...
      Unknown media type in type 'all/all'
      Unknown media type in type 'all/allfiles'
      Unknown media type in type 'uri/mms'
      Unknown media type in type 'uri/mmst'
      Unknown media type in type 'uri/mmsu'
      Unknown media type in type 'uri/pnm'
      Unknown media type in type 'uri/rtspt'
      Unknown media type in type 'uri/rtspu'
      Unknown media type in type 'interface/x-winamp-skin'
      Setting up software-center (5.0.2) ...
      Traceback (most recent call last):
        File "/usr/sbin/update-software-center", line 38, in <module>
          from softwarecenter.db.update import rebuild_database
        File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
          from softwarecenter.db.database import parse_axi_values_file
        File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
          from softwarecenter.db.application import Application
        File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
          from softwarecenter.backend.channel import is_channel_available
        File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
          from softwarecenter.distro import get_distro
        File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
          distro_instance=_get_distro()
        File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
          module = __import__(distro_id, globals(), locals(), [], -1)
      ImportError: No module named LinuxMint
      Setting up software-properties-common (0.81.13.1) ...
      Setting up software-properties-gtk (0.81.13.1) ...
      $

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz
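
    As in the Software Center traceback above, the import fails because the distribution ID now resolves to LinuxMint. A hedged sketch, assuming the Mint packages overwrote /etc/lsb-release (these are the stock Ubuntu 11.10 values):

      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=11.10
      DISTRIB_CODENAME=oneiric
      DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Restoring those values in /etc/lsb-release and re-running sudo update-software-center should let softwarecenter.distro import its Ubuntu module again.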

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that data changing usually requires data retrieving:

      private static void UpdateProductUnitPrice(int id, decimal unitPrice)
      {
          using (NorthwindDataContext database = new NorthwindDataContext())
          {
              Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
              product.UnitPrice = unitPrice; // UPDATE...
              database.SubmitChanges();
          }
      }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than direct data updating via ADO.NET:

      private static void UpdateProductUnitPrice(int id, decimal unitPrice)
      {
          using (SqlConnection connection = new SqlConnection(
              "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
          using (SqlCommand command = new SqlCommand(
              @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
              connection))
          {
              command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
              command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
              connection.Open();
              command.Transaction = connection.BeginTransaction();
              command.ExecuteNonQuery(); // UPDATE...
              command.Transaction.Commit();
          }
      }

    The above imperative code specifies the "how to do" details for better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

      private static void UpdateProductUnitPrice(int id, decimal unitPrice)
      {
          using (NorthwindDataContext database = new NorthwindDataContext())
          {
              database.ExecuteCommand(
                  "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                  unitPrice, id);
          }
      }

    Or just create a stored procedure:

      CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
      (
          @ProductID INT,
          @UnitPrice MONEY
      )
      AS
      BEGIN
          BEGIN TRANSACTION
          UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
          COMMIT TRANSACTION
      END

    and map it as a method of NorthwindDataContext (explained in this post):

      private static void UpdateProductUnitPrice(int id, decimal unitPrice)
      {
          using (NorthwindDataContext database = new NorthwindDataContext())
          {
              database.UpdateProductUnitPrice(id, unitPrice);
          }
      }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After talking about the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance of the data retrieving process. The previous post explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline. It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:
      - Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
      - Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
      - Flatten: figure out the hierarchy of the query;
      - Rewrite: for SQL Server 2000, if needed;
      - Reduce: remove the unnecessary information from the tree;
      - Format: generate the SQL statement string;
      - Parameterize: figure out the parameters, for example, a reference to a local variable should be a parameter in SQL;
      - Materialize: execute the reader and convert the result back into typed objects.

    So for each data retrieval, even one which looks simple:

      private static Product[] RetrieveProducts(int productId)
      {
          using (NorthwindDataContext database = new NorthwindDataContext())
          {
              return database.Products.Where(product => product.ProductID == productId)
                  .ToArray();
          }
      }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query one time and execute it multiple times:

      internal static class CompiledQueries
      {
          private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
              CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                  database.Products.Where(product => product.ProductID == productId).ToArray());

          internal static Product[] RetrieveProducts(
              this NorthwindDataContext database, int productId)
          {
              return _retrieveProducts(database, productId);
          }
      }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles:
      - LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
      - LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
      - LINQ to SQL (Part 8: Executing Custom SQL Expressions)

    Data changing overhead

    Looking into the data updating process, it also needs a lot of work:
      - Begins a transaction
      - Processes the changes (ChangeProcessor):
        - walks through the objects to identify the changes,
        - determines the order of the changes,
        - executes the changes; LINQ queries may be needed to execute the changes - as in the first example in this article, an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through
      - If there is user customization, it is executed - for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer

    It is important to keep this overhead in mind.

    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

      private static void DeleteProducts(int categoryId)
      {
          using (NorthwindDataContext database = new NorthwindDataContext())
          {
              database.Products.DeleteAllOnSubmit(
                  database.Products.Where(product => product.CategoryID == categoryId));
              database.SubmitChanges();
          }
      }

    The expected SQL should be something like:

      BEGIN TRANSACTION
      exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
      COMMIT TRANSACTION

    However, as mentioned before, the actual SQL first retrieves the entities, and then deletes them one by one:

      -- Retrieves the entities to be deleted:
      exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
      -- Deletes the retrieved entities one by one:
      BEGIN TRANSACTION
      exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
      exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
      -- ...
      COMMIT TRANSACTION

    And the same goes for bulk updating. This is really not effective and needs attention. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

      exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
      INNER JOIN (
          SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
          FROM [dbo].[Products] AS [t0]
          WHERE [t0].[CategoryID] = @p0) AS [j1]
      ON ([j0].[ProductID] = [j1].[ProductID])', -- The Primary Key
      N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure whether it is a bug): LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

      using (NorthwindDataContext database = new NorthwindDataContext())
      {
          database.Products.Where(product => product.ProductName == "A")
              .Select(product => product.ProductID).ToArray();

          // The same SQL and argument type, different argument length.
          database.Products.Where(product => product.ProductName == "AA")
              .Select(product => product.ProductID).ToArray();
      }

    Pay attention to the argument length in the translated SQL:

      exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
      exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's query plan cache is not reused by the second one:

      SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
      FROM sys.syscacheobjects
      INNER JOIN sys.dm_exec_cached_plans
      ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

      internal static class SqlTypeSystem
      {
          private abstract class ProviderBase : TypeSystemProvider
          {
              protected int? GetLargestDeclarableSize(SqlType declaredType)
              {
                  SqlDbType sqlDbType = declaredType.SqlDbType;
                  if (sqlDbType <= SqlDbType.Image)
                  {
                      switch (sqlDbType)
                      {
                          case SqlDbType.Binary:
                          case SqlDbType.Image:
                              return 8000;
                      }
                      return null;
                  }
                  if (sqlDbType == SqlDbType.NVarChar)
                  {
                      return 4000; // Max length for NVARCHAR.
                  }
                  if (sqlDbType != SqlDbType.VarChar)
                  {
                      return null;
                  }
                  return 8000;
              }
          }
      }

    In the above example, the translated SQL becomes:

      exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
      exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    so that they reuse the same query plan cache: now the [usecounts] column is 2.
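
    For code that must stay on .NET 3.5, the same plan reuse can be achieved by fixing the parameter size by hand in ADO.NET. A minimal hedged sketch (not from the original post; it assumes the same Northwind schema used above):

      using System;
      using System.Data;
      using System.Data.SqlClient;

      internal static class PlanReuseDemo
      {
          internal static void Main()
          {
              QueryByName("A");
              QueryByName("AA"); // hits the same cached plan as the call above
          }

          private static void QueryByName(string productName)
          {
              using (SqlConnection connection = new SqlConnection(
                  "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
              using (SqlCommand command = new SqlCommand(
                  "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
                  connection))
              {
                  // Fixing Size at 4000 (NVARCHAR's largest non-MAX length) makes 'A' and
                  // 'AA' share one plan, mirroring what LINQ to SQL does from .NET 4.0 on.
                  command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = productName;
                  connection.Open();
                  using (SqlDataReader reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          Console.WriteLine(reader.GetInt32(0));
                      }
                  }
              }
          }
      }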

    Read the article

  • SQL SERVER – Find First Non-Numeric Character from String

    - by pinaldave
    It is fun when you have to deal with simple problems for which there is no out-of-the-box solution. I am sure there are many cases when we need the first non-numeric character from a string, but there is no function available to identify it right away. Here is the quick script I wrote using PATINDEX. The function PATINDEX has existed in SQL Server for quite a long time, but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character.

      -- How to find first non-numeric character
      USE tempdb
      GO
      CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
      GO
      INSERT INTO MyTable (ID, Col1)
      SELECT 1, '1one'
      UNION ALL
      SELECT 2, '11eleven'
      UNION ALL
      SELECT 3, '2two'
      UNION ALL
      SELECT 4, '22twentytwo'
      UNION ALL
      SELECT 5, '111oneeleven'
      GO
      -- Use of PATINDEX
      SELECT PATINDEX('%[^0-9]%',Col1) 'Position of NonNumeric Character',
          SUBSTRING(Col1,PATINDEX('%[^0-9]%',Col1),1) 'NonNumeric Character',
          Col1 'Original Character'
      FROM MyTable
      GO
      DROP TABLE MyTable
      GO

    Here is the resultset:

      Position of NonNumeric Character   NonNumeric Character   Original Character
      2                                  o                      1one
      3                                  e                      11eleven
      2                                  t                      2two
      3                                  t                      22twentytwo
      4                                  o                      111oneeleven

    Where do I use this in the real world? Well, there are lots of examples. In one of the future blog posts I will cover that as well. Meanwhile, do you have any better way to achieve the same? Do share it here. I will write a follow-up blog post with due credit to you. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology
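
    The same pattern inverts cleanly: swapping the character class finds the first numeric character instead. A small hedged variation (assuming the MyTable sample above; note that PATINDEX returns 0 when no match exists, so a production query should guard that case):

      -- First numeric character, using the same MyTable sample
      SELECT PATINDEX('%[0-9]%',Col1) 'Position of Numeric Character',
          SUBSTRING(Col1,PATINDEX('%[0-9]%',Col1),1) 'Numeric Character',
          Col1 'Original Character'
      FROM MyTable
      GO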

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the cool command-line utility TfsAdminUtil... alas, it is now gone. Here are the steps to complete the change.

    Edit the web.config, usually located (on a default install) at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config:

      <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />

    The next step is to edit previous solutions/projects:
      1) Open the solution file, i.e. ProductApp.sln
      2) Edit the SccTeamFoundationServer URL under the Global section, i.e. change this to the new name

    If you have the DB server on the same machine, you will need to go in and remove the existing DB user account assigned to the TFS DBs. Remove the old [%machine_name%] value, i.e. the Tuned_Dev_PC_12\Justin user, from the above DBs. Then add the new Justin\Justin user account associated with the new machine name to the TFS and Reporting DBs; dbo or the TFSADMIN & TFSEXEC roles either will do in this case (or add both). Now either re-apply the user or add the new account (removing the old account, i.e. Tuned_Dev_PC_12\Justin). If DB permissions are set up correctly, this completes cleanly; if it pauses or gets stuck, you need to look back at adding the correct DB permissions to the JUSTIN\Justin user account.

    Also, if your project is still complaining about the old TFS name:
      1) Team \ Connect to Team Foundation Server
      2) Add/Remove TFS
      3) Add the new TFS name

    Once you have connected to the new TFS server, reload your project from TFS; this way it removes a lot of the bugs that hang around in the local project\solution. This is similar to a VSS 2005 and older fix. Cheers. (ETA about 60-90 mins, so weigh up the need vs payoff.) Shut down and restart.
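
    For the database side of those steps, the remapping can also be scripted in T-SQL. A hedged sketch only: the account name comes from the post above, while the role name is an assumption (TFS databases typically expose a TFSEXECROLE; check the actual role names in your own DBs before running anything):

      USE Tfs_Configuration;
      GO
      CREATE LOGIN [JUSTIN\Justin] FROM WINDOWS;
      CREATE USER [JUSTIN\Justin] FOR LOGIN [JUSTIN\Justin];
      EXEC sp_addrolemember N'TFSEXECROLE', N'JUSTIN\Justin';  -- hypothetical role name
      GO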

    Read the article

  • Disaster Recovery Plan – Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good disks with a blank disk.

    Steps:
      1. Shut down and power down the server.
      2. Remove the disk from bay 9, which is part of the system shadow set, and put this disk on the shelf.
      3. Label the new disk, then insert the blank/old disk into the empty bay.
      4. Power up the server. During the boot process, the following message appears: "Some configured disks have been removed from your system..."
      5. Press 'C' to load the Configuration utility.
      6. Press 'Y' to confirm loading the foreign configuration.
      7. In this example, the system shadow set is Disk Group 2. (Before proceeding, confirm this is the disk group in your case.) Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9; this is correct. Now we have to include the newly inserted disk in this group. The RAID controller reports bay 9 as empty.
      8. There may be times when the new disk is seen as a foreign disk (reported in bay 9). In this case, clear the foreign configuration:
         - Press CTRL-N (Next Page) to go to Foreign Mgt. All the disk groups will be displayed; typically, the disk group containing the foreign disk will be grey.
         - Highlight the controller, press F2, select Foreign, then select Clear (do NOT import the configuration!).
      9. Now the disk can be brought into the system shadow set disk group as a dedicated hot spare:
         - Highlight Disk Group 2 (VD Management), hit F2, and select 'Manage Ded. HS'.
         - Select the disk in bay 9 (hit the space bar to select), tab to 'OK', and hit the return key.
     10. The rebuild automatically commences.
     11. Restart now, or restart after the rebuild is completed.

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered earlier: the DMV related to wait stats resets when we restart SQL Server services, and it also resets when we manually reset the wait types. However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

      -- Create Table
      CREATE TABLE [MyWaitStatTable](
          [wait_type] [nvarchar](60) NOT NULL,
          [waiting_tasks_count] [bigint] NOT NULL,
          [wait_time_ms] [bigint] NOT NULL,
          [max_wait_time_ms] [bigint] NOT NULL,
          [signal_wait_time_ms] [bigint] NOT NULL,
          [CurrentDateTime] DATETIME NOT NULL,
          [Flag] INT
      )
      GO
      -- Populate Table at Time 1
      INSERT INTO MyWaitStatTable
      ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
      SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
      FROM sys.dm_os_wait_stats
      GO
      ----- Desired Delay (for one hour)
      WAITFOR DELAY '01:00:00'
      -- Populate Table at Time 2
      INSERT INTO MyWaitStatTable
      ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
      SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
      FROM sys.dm_os_wait_stats
      GO
      -- Check the difference between Time 1 and Time 2
      SELECT T1.wait_type,
          T1.wait_time_ms Original_WaitTime,
          T2.wait_time_ms LaterWaitTime,
          (T2.wait_time_ms - T1.wait_time_ms) DiffenceWaitTime
      FROM MyWaitStatTable T1
      INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
      WHERE T2.wait_time_ms > T1.wait_time_ms
          AND T1.Flag = 1 AND T2.Flag = 2
      ORDER BY DiffenceWaitTime DESC
      GO
      -- Clean up
      DROP TABLE MyWaitStatTable
      GO

    If you look at the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select the wait stats related to that time group. Many times, I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
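
    The manual reset mentioned above is a single documented command, shown here for completeness (it clears the cumulative counters in the DMV, so run it only when you intend to start a fresh measurement window):

      -- Reset the cumulative wait statistics in sys.dm_os_wait_stats
      DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
      GO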

    Read the article

  • Monitoring SQL Server Agent job run times

    - by okeofs
    Introduction

    A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there was a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPIs to the end query, the end goal being to enable me to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

      select sj.name, sh.run_date, sh.run_duration,
          case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
               when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
               when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
               when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
               when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
               when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
          end as tt
      from dbo.sysjobs sj with (nolock)
      inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
      where sj.name = 'My Agent Job'
      and sh.[message] like '%The job%'

    run_date and run_duration are obvious fields. The field 'Name' is the name of the job that we wish to follow. The only major challenge was that the format of the run duration was not as 'user friendly' as I would have liked. As an example, a run duration of 1 hour, 10 minutes and 3 seconds would be displayed as 11003, whereas I wanted it displayed in a more user-friendly manner as 01:10:03. In order to achieve this effect, we need to add leading zeros to the run_duration based upon the case logic shown above. At this point, what we need to do is add colons between the hours and minutes and one between the minutes and seconds. To achieve this, I nested the query shown above within a 'super' query. Thus the run time ([run_time]) is constructed by concatenating a series of substrings:

      select run_date,
          substring(convert(varchar(20),tt),1,2) + ':' +
          substring(convert(varchar(20),tt),3,2) + ':' +
          substring(convert(varchar(20),tt),5,2) as [run_time]
      from (
          select sj.name, sh.run_date, sh.run_duration,
              case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
                   when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
                   when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
                   when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
                   when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
                   when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
              end as tt
          from dbo.sysjobs sj with (nolock)
          inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
          where sj.name = 'My Agent Job'
          and sh.[message] like '%The job%'
      ) a

    Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling run-time average. To do this, I decided to do the calculations in base units of seconds. This said, I encapsulated the query shown above within a further 'super' query. The astute reader will note that I used implied casting from string to integer, which is not the best method to use; however, it works. This said, if I were constructing the query again I would definitely do an explicit convert. To recap: I now have a key field of '1', each and every applicable run date, and the total number of SECONDS that the process ran for each run date, all of this data within the #rawdata1 temporary table.

      select 1 as keyy, run_date,
          (substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,
          run_time
      into #rawdata1
      from (
          select run_date,
              substring(convert(varchar(20),tt),1,2) + ':' +
              substring(convert(varchar(20),tt),3,2) + ':' +
              substring(convert(varchar(20),tt),5,2) as [run_time]
          from (
              select sj.name, sh.run_date, sh.run_duration,
                  case when len(sh.run_duration) = 6 then convert(varchar(8),sh.run_duration)
                       when len(sh.run_duration) = 5 then '0' + convert(varchar(8),sh.run_duration)
                       when len(sh.run_duration) = 4 then '00' + convert(varchar(8),sh.run_duration)
                       when len(sh.run_duration) = 3 then '000' + convert(varchar(8),sh.run_duration)
                       when len(sh.run_duration) = 2 then '0000' + convert(varchar(8),sh.run_duration)
                       when len(sh.run_duration) = 1 then '00000' + convert(varchar(8),sh.run_duration)
                  end as tt
              from dbo.sysjobs sj with (nolock)
              inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
              where sj.name = 'My Agent Job'
              and sh.[message] like '%The job%'
          ) a
      ) b

    Calculating the average run time

    We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2. Once again we create a 'key', a hardwired '1'. The purpose of doing so is to make the average time AVG() available to the query immediately, without having to do adverse grouping.

      select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

    Applying KPI logic

    At this point, we shall apply some logic to determine whether processing times are within the norms. We do this by applying colour names. Obviously, this example is a super one for SSRS and traffic-light icons.

      select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
          Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
          case when Convert(decimal(10,1),rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
               when Convert(decimal(10,1),rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
               else 'Red'
          end as [color],

    Calculating the average run time in hours, minutes and seconds, and the end of the query:

          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
          end + ':' +
          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
          end + ':' +
          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
          end as [Average Run Time HH:MM:SS]
      from #rawdata2 rd2
      inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
      group by run_date, rd1.run_time, rd1.run_time_in_Seconds
      order by run_date desc

    The complete code example:

      use msdb
      go
      /*
      drop table #rawdata1
      drop table #rawdata2
      go
      */
      select 1 as keyy, run_date,
          (substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,
          run_time
      into #rawdata1
      from (
          select run_date,
              substring(convert(varchar(20),tt),1,2) + ':' +
              substring(convert(varchar(20),tt),3,2) + ':' +
              substring(convert(varchar(20),tt),5,2) as [run_time]
          from (
              select name, run_date, run_duration,
                  case when len(run_duration) = 6 then convert(varchar(8),run_duration)
                       when len(run_duration) = 5 then '0' + convert(varchar(8),run_duration)
                       when len(run_duration) = 4 then '00' + convert(varchar(8),run_duration)
                       when len(run_duration) = 3 then '000' + convert(varchar(8),run_duration)
                       when len(run_duration) = 2 then '0000' + convert(varchar(8),run_duration)
                       when len(run_duration) = 1 then '00000' + convert(varchar(8),run_duration)
                  end as tt
              from dbo.sysjobs sj with (nolock)
              inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
              where name = 'My Agent Job'
              and [message] like '%The job%'
          ) a
      ) b
      select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1
      select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
          Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
          case when Convert(decimal(10,1),rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
               when Convert(decimal(10,1),rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
               else 'Red'
          end as [color],
          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)/(3600))
          end + ':' +
          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%(3600)/60)
          end + ':' +
          case when len(convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)) = 1
                   then '0' + convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
               else convert(varchar(2),Avg(rd2.run_time_in_Seconds)%60)
          end as [Average Run Time HH:MM:SS]
      from #rawdata2 rd2
      inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
      group by run_date, rd1.run_time, rd1.run_time_in_Seconds
      order by run_date desc

    Read the article

  • Suggestions for html tag info required for jQuery Plugin

    - by Toby Allen
    I have written a tiny bit of jQuery which simply selects all the Select form elements on the page and sets the selected property to the correct value. Previously I had to write code to generate the Select in PHP and specify the Selected attribute for the option that was selected, or do loads of if statements in my PHP page or Smarty template. Obviously the information about which option is selected still needs to be specified somewhere in the page so the jQuery code can select it. I decided to create a new attribute on the Select item:

      <Select name="MySelect" SelectedOption="2"> <!-- Custom Attr SelectedOption -->
        <option value="1">My Option 1 </option>
        <option value="2">My Option 2 </option> <!-- this option will be selected when the jQuery code runs -->
        <option value="3">My Option 3 </option>
        <option value="4">My Option 4 </option>
      </Select>

    Does anyone see a problem with using a custom attribute to do this? Is there a more accepted way, or a better jQuery way?
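
    For reference, the selection logic described above only takes a few lines; a hedged sketch, assuming the custom-attribute markup shown in the question:

      $(function () {
          // for every <select> carrying the custom attribute,
          // make the option with the matching value selected
          $('select[SelectedOption]').each(function () {
              $(this).val($(this).attr('SelectedOption'));
          });
      });

    A more standards-friendly variant of the same idea is an HTML5 data attribute (data-selected-option="2"), which validates cleanly and is readable via jQuery's .data('selectedOption').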

    Read the article

  • PowerDNS 3+ - Recursive queries for subdomains

    - by PDNS Troubles
    We are trying to find functionality in PDNS 3.x that existed in PDNS < 2.9.2.5: if we have a domain in the database backend with records and a query for a subdomain cannot be resolved, the server would then query the recursor set up in the pdns.conf file. We have found that on CentOS 6.x the RPM packages available are the latest version of PDNS, whereas on 5.x the available package was pdns-2.9.22-4.el5. The pdns-2.9.22-4.el5 package works as expected, but when upgrading servers to CentOS 6.x we lose this required functionality. pdns-backend-mysql-2.9.22-4.el5.rpm fails to install on CentOS 6.x due to MySQL libs that aren't available; this is caused by an upgrade in the MySQL version, whereby the PDNS MySQL backend requires older MySQL libs than what is available on CentOS 6.x. Installing from source is also troublesome, with the following errors: http://pastebin.com/B5cUuD08

    Read the article

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for the backup:

      mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
      mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased the size of 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and for the client-side setting I've set it by running this command:

      mysql -u -p --max_allowed_packet=1G

    And I have verified that on the client and server side they are of the same value. This checks the client-side value, according to this forum posting (http://forums.mysql.com/read.php?35,75794,261640):

      mysql> SELECT @@MAX_ALLOWED_PACKET;
      +----------------------+
      | @@MAX_ALLOWED_PACKET |
      +----------------------+
      |           1073741824 |
      +----------------------+
      1 row in set (0.00 sec)

    And this checks the server value setting:

      mysql> SHOW VARIABLES;
      ...
      | max_allowed_packet | 1073741824 |
      ...

    I have run out of ideas and have tried searching within Experts Exchange and googling for solutions, but so far none has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html. Anyone please advise, thank you.
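
    One detail worth checking (a hedged suggestion, not from the original post): raising the limit for the interactive mysql client does not affect mysqldump, which reads its own option group. The limit can be passed to mysqldump directly, or set persistently in its own section of my.cnf:

      # one-off, on the command line:
      mysqldump --max_allowed_packet=1G ...

      # or persistently, in /etc/my.cnf:
      [mysqldump]
      max_allowed_packet = 1G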

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post in which the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first examine the claim that there is no performance difference. Run the following two scripts together:

      USE AdventureWorks
      GO
      -- ColumnName (Recommended)
      SELECT *
      FROM HumanResources.Department
      ORDER BY GroupName, Name
      GO
      -- ColumnNumber (Strongly Not Recommended)
      SELECT *
      FROM HumanResources.Department
      ORDER BY 3,2
      GO

    If you look at the result and the execution plan, you will see that both queries take the same amount of time. However, that was not the point of this blog post, and it is not good enough to stop here. We need to understand the advantages and disadvantages of both methods.

    Case 1: When not using * and columns are re-ordered

      USE AdventureWorks
      GO
      -- ColumnName (Recommended)
      SELECT GroupName, Name, ModifiedDate, DepartmentID
      FROM HumanResources.Department
      ORDER BY GroupName, Name
      GO
      -- ColumnNumber (Strongly Not Recommended)
      SELECT GroupName, Name, ModifiedDate, DepartmentID
      FROM HumanResources.Department
      ORDER BY 3,2
      GO

    Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate the example for this one. If the schema on your development server is different from the production server and you use ColumnNumber, you will get different results on the production server.

    Summary: When you develop the query it may not be an issue, but as time passes by and new columns are added to the SELECT statement, or the original table is re-ordered, if you have used ColumnNumber it is possible that your query will start giving you unexpected results and an incorrect ORDER BY. One should note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dynamically changing DVT Graph at Runtime

    - by mona.rakibe(at)oracle.com
    I recently came across a requirement where the customer wanted to change the graph type at run-time based on some selection. After some internal discussions, we realized this can best be achieved by using af:switcher to toggle between multiple graphs. In this blog I will be sharing the sample that I built to demonstrate this.
    [Note]: In the sample below, every time the user changes the graph type there is a server trip, so please use this approach with performance implications in mind.
    This sample can be downloaded from DynamicGraph.zip
    Set-up: Create a BAM data control using the Employees DO (sample) (refer to this entry).
    Steps - create the view:
      1. Create a new JSF page.
      2. From the component palette, drag and drop "Select One Radio" onto this page. Enter some label and click "Finish".
      3. In the Property Editor, set the "AutoSubmit" property to true.
      4. Now drag and drop "Switcher" from the components onto this page.
      5. In the Structure pane, select the af:switcher, right-click and surround it with "PanelGroupLayout".
      6. In the Property Editor, set the "PartialTriggers" property of the PanelGroupLayout to the id of the af:selectOneRadio.
      7. Again in the Structure pane, select the af:switcher, right-click and select "Insert inside af:switcher -> facet". Enter a facet name (e.g. pie).
      8. Repeat the previous step to add another facet (e.g. bar).
      9. From "Data Controls", drag and drop "Employees -> Query" into the pie facet as "Graph -> Pie" (Pie: Sales_Number and Slices: Salesperson).
     10. From "Data Controls", drag and drop "Employees -> Query" into the bar facet as "Graph -> Bar" (Bars: Sales_Number and X-axis: Salesperson).
     11. Now wire the switcher to the af:selectOneRadio using their "facetName" and "value" properties respectively.
    Now run the page, and notice that the graph renders as per the user's selection.

    Read the article

  • SQL SERVER – Validating Unique Columnname Across Whole Database

    - by pinaldave
    I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples.
    Asker: "Our business requirement is: when we add a new column, we want it unique across the current database."
    Pinal: "Why do you have such a requirement?"
    Asker: "Do you know the solution?"
    Pinal: "Sure, I can come up with the answer, but it will help me come up with an optimal answer if I know the business need."
    Asker: "Thanks - what will be the answer in that case?"
    Pinal: "Honestly, I am just curious about the reason why you need your column name to be unique across the database."
    (Silence)
    Pinal: "Alright - here is the answer. I guess you do not want to tell me the reason."

    Option 1: Check if the column exists in the current database

      IF EXISTS (SELECT * FROM sys.columns WHERE Name = N'NameofColumn')
      BEGIN
          SELECT 'Column Exists'
          -- add other logic
      END
      ELSE
      BEGIN
          SELECT 'Column Does NOT Exists'
          -- add other logic
      END

    Option 2: Check if the column exists in a specific table of the current database

      IF EXISTS (SELECT * FROM sys.columns
          WHERE Name = N'NameofColumn'
          AND OBJECT_ID = OBJECT_ID(N'tableName'))
      BEGIN
          SELECT 'Column Exists'
          -- add other logic
      END
      ELSE
      BEGIN
          SELECT 'Column Does NOT Exists'
          -- add other logic
      END

    I guess the user did not want to share the reason why he had the unusual requirement of having column names unique across the database. Here is my question back to you: have you ever faced a similar situation where you needed unique column names across a database? If not, can you guess what could be the reason for this kind of requirement? Additional Reference: SQL SERVER - Query to Find Column From All Tables of Database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
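
    The additional reference above describes finding a column across all tables of a database; a minimal sketch of that lookup, using the same placeholder column name as the options above:

      SELECT OBJECT_NAME(c.OBJECT_ID) TableName, c.name ColumnName
      FROM sys.columns c
      WHERE c.name = N'NameofColumn'
      ORDER BY TableName
      GO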

    Read the article

  • Linux server apache httpd processes take i/o wait to close to 100% and lock down server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my Linux server has started locking up from time to time. The pattern is always the same, as far as I can tell from the top and iotop commands around the time it starts happening: one or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100%, and in iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through Apache, and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce the scenario, but sometimes it just happens for no detectable reason at all, it seems. So it seems like there is some issue with my Apache that leads its processes to use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue.

    EDIT: OK, now it's happening again. This is top:

      [root@server ~]# top
      top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01
      Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie
      Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers
      Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached

        PID USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
      10390 apache 20 0 2774m 936m 6200 D 2.0  47.4 1:52.27 httpd
       2149 root   20 0 927m  13m  1040 S 0.7  0.7  1:50.46 namecoind
         11 root   20 0 0     0    0    R 0.3  0.0  0:30.10 events/0
         23 root   20 0 0     0    0    S 0.3  0.0  0:17.88 kblockd/1
       2049 root   20 0 382m  4932 2880 D 0.3  0.2  0:03.67 httpd
       2144 root   20 0 1702m 69m  1164 S 0.3  3.5  5:19.68 bitcoind
       6325 root   20 0 15164 1100 656  R 0.3  0.1  0:11.09 top
      10311 apache 20 0 387m  9496 7320 D 0.3  0.5  0:01.89 httpd
      10313 apache 20 0 391m  10m  7364 D 0.3  0.5  0:02.40 httpd
      10466 apache 20 0 399m  12m  7392 D 0.3  0.7  0:02.41 httpd
      10599 apache 20 0 391m  9324 7340 D 0.3  0.5  0:00.15 httpd
      10628 apache 20 0 384m  7620 4052 D 0.3  0.4  0:00.01 httpd
      10633 apache 20 0 384m  7048 3504 D 0.3  0.3  0:00.01 httpd
      10634 apache 20 0 384m  8012 4048 D 0.3  0.4  0:00.02 httpd
      10638 apache 20 0 400m  22m  9.8m D 0.3  1.1  0:01.93 httpd
      10640 apache 20 0 385m  8288 4028 D 0.3  0.4  0:00.03 httpd
      10641 apache 20 0 401m  21m  6376 D 0.3  1.1  0:01.45 httpd
      10759 apache 20 0 385m  8816 3480 D 0.3  0.4  0:01.45 httpd
      10773 apache 20 0 384m  8044 3464 D 0.3  0.4  0:00.02 httpd

    This is an iotop snapshot:

      Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s
        TID  PRIO USER     DISK READ    DISK WRITE SWAPIN  IO>     COMMAND
      10732 be/4 apache    3.76 K/s     0.00 B/s   0.00 %  58.48 % httpd
        876 be/3 root      0.00 B/s     52.68 K/s  0.00 %  52.98 % [jbd2/dm-1-8]
      10906 be/4 root      124.17 K/s   0.00 B/s   0.00 %  23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1
       2156 be/4 root      206.94 K/s   0.00 B/s   0.00 %  21.15 % bitcoind
      10904 be/4 mysql     0.00 B/s     0.00 B/s   0.00 %  18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
      10773 be/4 apache    7.53 K/s     0.00 B/s   0.00 %  14.77 % httpd
      10641 be/4 apache    15.05 K/s    0.00 B/s   0.00 %  11.57 % httpd
      10399 be/4 apache    1057.29 K/s  0.00 B/s   43.16 % 10.56 % httpd
      10682 be/4 sw-cp-se  158.03 K/s   0.00 B/s   0.00 %  7.45 %  sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d auto_prepend_file=auth.php3 -u psaadm
      10774 be/4 apache    3.76 K/s     0.00 B/s   0.00 %  6.53 %  httpd
      10624 be/4 apache    0.00 B/s     0.00 B/s   0.00 %  5.53 %  httpd
      10356 be/4 apache    899.26 K/s   0.00 B/s   35.52 % 4.01 %  httpd
      10795 be/4 apache    0.00 B/s     0.00 B/s   0.00 %  3.93 %  httpd
      10804 be/4 apache    7.53 K/s     0.00 B/s   0.00 %  3.08 %  httpd
       4379 be/4 root      2.89 M/s     0.00 B/s   99.99 % 0.00 %  namecoind
      10619 be/4 apache    462.80 K/s   0.00 B/s   7.80 %  0.00 %  httpd
      10636 be/4 apache    3.76 K/s     0.00 B/s   0.00 %  0.00 %  httpd
      10716 be/4 mysql     105.35 K/s   0.00 B/s   5.92 %  0.00 %  mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
       1988 be/4 root      18.81 K/s    0.00 B/s   0.00 %  0.00 %  spamd_full.sock

    I also ran lsof -p for PID 10390, which was way up top under the top command, and this is the bottom line, where I can sort of see what request this was; it says CLOSE_WAIT:

      httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT)

    I'm still not sure what exactly is causing all of this to happen, though. I killed that process, but %wa and load average remained high; I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd", and, after waiting a while for it to cycle through all those, then doing /etc/init.d/httpd start.
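
    When a worker wedges like this, it helps to see exactly which request each busy process is serving; Apache's mod_status exposes that per-worker. A hedged sketch of the httpd.conf stanza (Apache 2.2 syntax to match a CentOS-era httpd; module availability is assumed):

      # httpd.conf - then browse http://127.0.0.1/server-status from the box itself
      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>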

    Read the article

  • Error when setting Piwik analytics

    - by bertran
    I've uploaded the latest version of Piwik onto my web server, which is hosted by GoDaddy.com on a Linux hosting plan. I'm setting it up (accessing it from my browser as instructed) and I have the "Piwik installation" page open at step 3 (database set-up) of 9. I don't know what to input in the "database server" field... the default is 127.0.0.1.

    When I leave that input as is and click "Next", it gives the error:

      Error when trying to connect to database server: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 111

    And changing that input to "localhost" gives me another error:

      Error when trying to connect to database server: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
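
    Both errors suggest the database does not live on the web server itself, which is typical for shared hosting: MySQL usually runs on a separate host whose name is shown in the hosting control panel's database details. A hedged illustration only - the hostname below is hypothetical and must be replaced with the real one from the panel:

      Database server: yourdbname.db.1234567.hostedresource.com   (hypothetical example)
      Login / Password / Database name: as defined when the database was created in the panel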

    Read the article

  • Two things I learned this week...

    - by noreply(at)blogger.com (Thomas Kyte)
    I often say "I learn something new about Oracle every day". It really is true - there is so much to know about it, it is hard to keep up sometimes.

    Here are the two new things I learned. The first is regarding temporary tablespaces. In the past, when people asked "how can I shrink my temporary tablespace?", I've said "create a new one that is smaller, alter your database/users to use this new one by default, wait a bit, drop the old one". Actually, I usually said first "don't, it'll just grow again", but some people really wanted to make it smaller. Now there is an easier way, using alter tablespace temp shrink space:
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_3002.htm#SQLRF53578

    The second thing is just a little sqlplus quirk that I probably knew at one point but totally forgot. People run into problems with &'s in sqlplus all of the time, as sqlplus tries to substitute in for an &variable. So, if they try to select '&hello world' from dual, they'll get:

      ops$tkyte%ORA11GR2> select '&hello world' from dual;
      Enter value for hello:
      old   1: select '&hello world' from dual
      new   1: select ' world' from dual

      'WORLD
      ------
       world

      ops$tkyte%ORA11GR2>

    One solution is to "set define off" to disable the substitution (or set define to some other character). Another oft-quoted solution is to use chr(38) - select chr(38)||'hello world' from dual. I never liked that one personally. Today, I was shown another way (https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4549764300346084350#4573022300346189787):

      ops$tkyte%ORA11GR2> select '&' || 'hello world' from dual;

      '&'||'HELLOW
      ------------
      &hello world

      ops$tkyte%ORA11GR2>

    Just concatenate '&' to the string - sqlplus doesn't touch that one! I like that better than chr(38) (but a little less than set define off....)

    Read the article

  • Having trouble locating charms - pointing to store.juju.ubuntu.com? DNS error

    - by dt2511
    I can't seem to sort out how to get juju to point to the right source for charms. A base install yields the following result when issued this command:

      juju deploy --repository=examples mysql
      DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -2] Name or service not known.
      2011-10-12 18:38:39,946 ERROR DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -2] Name or service not known.

    When trying to run it with:

      juju deploy --repository=examples local:mysql

    I get this error:

      Charm 'local:oneiric/mysql' not found in repository /root/juju/examples
      2011-10-12 18:53:57,311 ERROR Charm 'local:oneiric/mysql' not found in repository /root/juju/examples

    I've put the charm itself in the directory /root/juju/examples, and am running the command from /root/juju. What is wrong?
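
    Two details are worth checking here (a hedged sketch; the paths are the ones from the question). The first error happens because a bare charm name is resolved against the remote store, so the local: prefix is needed; the second error suggests that a local repository is laid out by series, with each charm in a subdirectory named after the release it targets:

      mkdir -p /root/juju/examples/oneiric
      mv /root/juju/examples/mysql /root/juju/examples/oneiric/
      juju deploy --repository=/root/juju/examples local:oneiric/mysql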

    Read the article

  • CodePlex Daily Summary for Monday, May 28, 2012

    CodePlex Daily Summary for Monday, May 28, 2012

    Popular Releases:
    - ScreenShot: InstallScreenShot: This is the current stable release.
    - .Net Code Samples: Code Samples: Code samples (SLNs).
    - LINQ_Koans: LinqKoans v.02: Cleaned up a bit
    - KeelKit: KeelKit 3.0.7600.638: ?、??MySQL?Model?? Mysql ????? ? http://dev.mysql.com/downloads/connector/net/ ??? mysql-connector-net-6.5.4.msi ???, VS???KeelKit ???????MySQL, ????????? ?? DemoMySQL.rar ???, ???????????MySqL??Model. ?????? C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config ??? ??????。 <system.data> <DbProviderFactories> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlC...
    - TwitterOAuth: TwitterOauth 0.25.16.0116: Beta release
    - testtom05242012git02: d1: d1
    - testdd05242012git001: zxczxc: zxczxczxc
    - CODE Framework: 4.0.20524.0: This release has quite a few enhancements for WPF applications and SOA features. See change logs for more details.
    - CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Final Release: A collection of very reusable code and components in C# 4.0, ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, and Authentication to much more. Fluentscript: CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Application: FluentScript; Version: 0.9.8; Build: 0.9.8.4; Changeset: 75050 (CommonLibrary.NET); Release date: May 24, 2012; Binaries: CommonLibrary.dll; Namespace: ComLib.Lang; Project site: http://fluentscript.codeplex.com...
    - System Center Orchestrator Integration Packs: Active Directory 3.2: An integration pack enabling AD automation. 3.2 updates: LDAP pathing updated to support cross-forest scenarios; Get Object Property Value filtering efficiency enhancements.
    - Bunch of Small Tools: Mélangeur de vocabulaire japonais: Generates randomized vocabulary exercises from Japanese vocabulary lists. 22 lists are supplied with the program.
    - Expression Tree Visualizer for VS 2010: Expression Tree Visualizer Beta: This is a beta release; some expression types are not handled and use a default visualization behavior. The first release will be published soon. Wait for it...
    - Ulfi: Ulfi source: Build with Visual Studio 2010 Express C# or better
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 2: JayData is a unified data access library for JavaScript developers to query and update data from different sources like webSQL, indexedDB, OData, Facebook or YQL. See it in action in this 6-minute video: http://www.youtube.com/watch?v=LlJHgj1y0CU RC1 R2 release highlights: Knockout.js integration. Using the Knockout.js module, your UI can be automatically refreshed when the data model changes, so you can develop the front-end of your data manager app even faster. Querying 1:N relations in W...
    - Christoc's DotNetNuke Module Development Template: 00.00.08 for DNN6: BEFORE USE you need to install the MSBuild Community Tasks available from http://msbuildtasks.tigris.org. For best results you should configure your development environment as described in this blog post, then read this latest blog post about customizing and using these custom templates. Installation is simple: to use this template, place the ZIP (not extracted) file in My Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\Web, or for VB, My Documents\Visual Studio 2010\Te...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.53: Fix issue #18106, where member operators on numeric literals caused the member part to be duplicated when not minifying numeric literals. ADD NEW FEATURE: ability to create source map files! The first map file format to be supported is the Script# format. Use the new -map filename switch to create map files when building your sources.
    - CreditAnalytics: CreditAnalytics Release 1.5: 22 May 2012 (v1.5) (Build 449). Regressor Framework: implementation of the regressor set, tolerance check, curve scenario regressors, regression framework suite, and the eventual regression output. Discount Curve Regression: regressing base curve creation, scenario curve creation, and calculation of spot/effective implied rates and discount factors. Credit Curve Regression: regressing base curve creation, scenario curve creation, and calculation of spot/effective implied hazard rates, reco...
    - BlackJumboDog: Ver5.6.3: 2012.05.22 Ver5.6.3 (1) HTTP????????、ftp://??????????????????????
    - LogicCircuit: LogicCircuit 2.12.5.22: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the running circuit hierarchy. Changes in this version: this release fixes a start-up issue.
    - Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read our release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes

    New Projects:
    - .Net Code Samples: Various .Net code samples.
    - AFSAspnetPusherV1: my new wns project lol
    - AFSAspnetPusherV2: renewed version of
    - AFSAspnetPusherV4: renewed one v4
    - AgileDesign Utilities Library: This library provides common functionality usable for most software projects: Logger - asynchronous logging on top of the new Microsoft logging class TraceSource, with a simplified API; NameOf - avoid using string names, using static reflection; various reflection helpers.
    - Associate Many to Many Relationship Entities Tool for Dynamics CRM 2011: The Associate Many-to-Many Relationship tool is used with Dynamics CRM 2011 to associate or disassociate N:N relationship entities. This tool is a Dynamics CRM 2011 solution consisting of one entity and one plugin. An entity "Many to Many Relationship" record is used by the plugin to associate or disassociate entities from record data. If a many-to-many relationship entity record is created, the plugin associates/disassociates entities from the record data.
    - Boxhead Multiplayer Server: A PHP dedicated server for my multiplayer version of Boxhead.
    - CodedUITraceFiletoCSV: Console application to parse the result file generated by Coded UI test execution (".trx") into a comma-separated file for a more readable and detailed result.
    - FlipExt: FlipExt is an easy-to-use image converter. It converts any image to .png .bmp .jpg .gif .tif .jpeg .tiff .ico. More extensions will be added soon.
    - Foo Values Maker: Foo creates values for your test class variables so that you can write tests faster.
    - FoodFree: Flood monitoring project.
    - HouodeProject: ????IT
    - Land of Dreams, Codename: Waterloo: We want to create a classical live MMORPG you can play on your smartphone (in the first step only "Windows Phone" will be supported) with the basic idea of Ultima Online or similar games in mind. You can create one or more characters, choosing a name, gender, basic attributes (skin and hair color, ...), a race (e.g. 'Human') and a profession (e.g. 'fighter' or 'craftsman'). A character can then freely travel through the whole world, meeting other players, fighting monsters, absolving quests, tra...
    - Makecert UI: Makecert UI is a shell layer application on top of the Microsoft makecert.exe utility. Makecert UI makes it easy for you to create self-signed certificates, even from your own CA.
    - MS CRM Rich Text box: Rich text box plug-in for MS CRM 2011. Hope it will be helpful for many of you. Thank you for using it, and let me know if any further help is needed.
    - Nivo Slider Web Part SharePoint 2010: SharePoint 2010 implementation of Nivo Slider. An easy way to put the Nivo slider on any page!
    - Office365 Weather WebPart: Office 365 WebPart that displays a 5-day weather forecast for a given location. The weather data is retrieved from the Met Office feed hosted on the Windows Azure Data Market. This is a free data feed that provides weather data for the UK only.
    - Private Cloud Solution Design: This project is named "Training Cloud". It provides a solution that technical audiences can use for self-learning, with a hands-on-lab experience using Microsoft technology hosted in a virtualized environment built on System Center 2012. It depends on hardware such as RAM, storage, network, etc. In the end, users have all labs ready, deployed on the private cloud, and labs can easily be maintained and set up with the cloud's functions.
    - pyUpdater: pyupdater provides a platform for updating python-based applications.
    - SharePoint Document Navigator: SP Document Navigator is a front-end solution for navigating a document library using jQuery and jQuery Mobile.
    - Trimetable: Train schedule for WP7.
    - Upload Master Pages & Page Layouts to Master Page Gallery using PowerShell: This document details the steps to upload master pages and page layouts to the Master Page Gallery using the "Upload Master Pages" utility. 1) Download the .zip file. 2) Edit the "UploadMasterPages.bat" file and change the <<site collection url>> in the text with respect to the environment, e.g. http://sitecollectionurl. 3) Save the "UploadMasterPages.bat" file and close it. 4) Put all of your master pages and page layouts into the Doc folder. 5) Run "UploadMasterPages.bat" as administrat...
    - vivo: vivo Vietnamese Voice - Vietnamese voice recognition project. thaihung.bkhn@gmail.com http://eking.vn
    - vnv: VNV Vietnamese Voice - Vietnamese voice recognition project. thaihung.bkhn@gmail.com http://eking.vn
    - Windows Phone 7 User Guide Page: Take your app's users through a guided tour! Make your app's hidden gems shine, and make users understand your app's logic and UX better.
    - WP-FTS: This plugin for WordPress replaces the default search engine, implemented using a simple "LIKE" operator, with the more powerful Full-Text Engine that comes with SQL Server.

    Read the article

  • Software Center not Opening at all Error

    - by Newbie
    When I open Software Center from the menu, it says "cannot open software database. Please reinstall the software-center package." When I run software-center in a terminal, this error comes up:

        2014-05-28 09:11:20,584 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2014-05-28 09:11:20,593 - softwarecenter.ui.gtk3.app - ERROR - xapian open failed
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 302, in __init__
            if self.db.schema_version() != DB_SCHEMA_VERSION:
          File "/usr/share/software-center/softwarecenter/db/database.py", line 289, in schema_version
            return self.xapiandb.get_metadata("db-schema-version")
          File "/usr/share/software-center/softwarecenter/db/database.py", line 177, in xapiandb
            self._db_per_thread[thread_name] = self._get_new_xapiandb()
          File "/usr/share/software-center/softwarecenter/db/database.py", line 190, in _get_new_xapiandb
            xapiandb = xapian.Database(self._db_pathname)
          File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3667, in __init__
            _xapian.Database_swiginit(self,_xapian.new_Database(*args))
        DatabaseCorruptError: /var/cache/software-center/xapian/iamchert: Chert version file should be 28 bytes, actually 0

    And when I run the command sudo apt-get remove software-center:

        dpkg: error: corrupt info database format file '/var/lib/dpkg/info/format'
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    I had Ubuntu before but it kind of got corrupted. Now I have freshly reinstalled it, and even from the start Software Center is not opening and this error comes up. I hope you have a solution. Thanks.
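
    The traceback itself points at the problem: the chert version file in the Software Center cache is empty (0 bytes instead of 28), i.e. a truncated cache rather than a broken install. A small diagnostic sketch using the same python-xapian binding the traceback shows; the usual next step - removing the cache directory and regenerating it (e.g. with update-software-center) - is an assumption based on the path being a cache, not an official fix:

        import os
        import xapian  # python-xapian, the binding the traceback comes from

        cache = "/var/cache/software-center/xapian"
        version_file = os.path.join(cache, "iamchert")

        if os.path.exists(version_file):
            # The error says this file should be 28 bytes; 0 means a truncated cache.
            print("iamchert is %d bytes" % os.path.getsize(version_file))

        try:
            xapian.Database(cache)
            print("xapian cache opens fine")
        except xapian.DatabaseError as exc:  # DatabaseCorruptError subclasses this
            print("corrupt cache: %s" % exc)
            # Deleting this directory (with sudo) and regenerating it should be
            # safe for a cache -- an assumption; back it up first.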

    Read the article

  • Should I reinstall my production server from 32 bit to 64 bit if it has 16GB of RAM?

    - by Alexandru Trandafir Catalin
    I have a production server with 16GB of RAM that came with a 32-bit CentOS installation. The website hosted on this server is increasing its traffic every day, and I am having some MySQL trouble, so I checked the MySQL configuration with mysqltuner.pl, which gave me the following messages:

        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        *** MySQL's maximum memory usage is dangerously high ***
        *** Add RAM before increasing MySQL buffer variables ***

    So my question is: can I survive with the 32-bit OS, or will I have to install the 64-bit one? Thanks.
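
    For context: a 32-bit userland caps every single process at 4GB of address space (in practice 2-3GB usable for mysqld), so even if a PAE kernel lets the OS see all 16GB, no one MySQL process can use it - which is exactly what mysqltuner is flagging. A quick sketch to confirm what the box is actually running:

        import platform
        import sys

        print("kernel reports: %s" % platform.machine())        # 'x86_64' vs 'i686'
        print("64-bit userland: %s" % (sys.maxsize > 2 ** 32))  # word size of this process

        # Address-space ceiling for any single 32-bit process, however much RAM exists:
        print("32-bit process limit: %d GB of address space" % (2 ** 32 // 2 ** 30))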

    Read the article

  • android app unable to connect to the hsqldb server

    - by Chinta
    I am trying to connect my Android app to the HSQLDB server. The server runs on computer-1. I can connect to the DB server from the local machine through Java as well as DbVisualizer. I can connect to the DB server from another computer (computer-2) using DbVisualizer with computer-1's IP address. Now, trying to connect from my app on a Nexus 7 the same way I was connecting from computer-2, I am getting a "No Suitable Driver" error. Below is the log.

        11-02 12:01:41.235: W/System.err(9803): connection string <jdbc:hsqldb:hsql://192.168.2.6:9001/qBank>
        11-02 12:01:41.235: W/System.err(9803): user id string <SA>
        11-02 12:01:41.235: W/System.err(9803): password string <>
        11-02 12:01:41.235: W/System.err(9803): ERROR: failed to get connection.
        11-02 12:01:41.235: W/System.err(9803): java.sql.SQLException: No suitable driver
        11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:186)
        11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:213)
        11-02 12:01:41.235: W/System.err(9803): at com.scan.util.GatherData.getConnection(GatherData.java:135)
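
    It helps to separate network reachability from JDBC driver registration here: "No suitable driver" means DriverManager never found a driver claiming the jdbc:hsqldb: URL, which on Android usually means the HSQLDB jar's driver class was never loaded (an explicit Class.forName("org.hsqldb.jdbc.JDBCDriver") before getConnection is the commonly cited fix for HSQLDB 2.x - check the docs for your version). A quick sketch to rule out the network side, with host and port taken from the log above:

        import socket

        host, port = "192.168.2.6", 9001  # from the JDBC URL in the log

        try:
            sock = socket.create_connection((host, port), timeout=5)
            sock.close()
            print("HSQLDB port reachable - the failure is driver registration, not network")
        except socket.error as exc:
            print("cannot reach %s:%d (%s)" % (host, port, exc))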

    Read the article

  • Real-time stock market application

    - by Sam
    I'm an amateur programmer. I'd like to develop a software application (like TradeStation) to analyse real-time market data. Please tell me whether the following approach is correct, i.e. the procedures, knowledge, or software needed, etc.:

    1. Use a DB to store the real-time feed from the data provider: what is the right DB to use? I know it should be a time-series one. Can I use SQL, MySQL, or others? What kind of database can receive a real-time data feed? Do I need to configure the DB to do this?
    2. If the real-time data is in ASCII form, how can it be converted to a form the DB and my application can read? Do I have to write code, or can I just use some add-ins? What kind of add-ins are needed?
    3. How should I code the program to retrieve the changing data from the DB, so that the analysis software's on-screen data also changes asynchronously (like RTD in Excel)?
    4. Which aspects of programming do I need to learn to develop the above? Are there web resources/books I can refer to for more information?
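
    As one concrete illustration of the ingest-then-poll loop in points 1-3 (not the only design - a message queue or a vendor push API avoids polling entirely), here is a minimal sketch using SQLite as a stand-in for the tick database; the "SYMBOL,PRICE" ASCII feed format is an assumption:

        import sqlite3
        import time

        db = sqlite3.connect("ticks.db")
        db.execute("""create table if not exists ticks
                      (id integer primary key autoincrement,
                       symbol text, price real, ts real)""")

        def ingest(line):
            # Assumed ASCII feed format: "SYMBOL,PRICE" per line.
            symbol, price = line.strip().split(",")
            db.execute("insert into ticks (symbol, price, ts) values (?, ?, ?)",
                       (symbol, float(price), time.time()))
            db.commit()

        def poll(last_id):
            # The display layer asks only for rows it has not rendered yet,
            # which is what makes the screen update "asynchronously".
            return db.execute("select id, symbol, price from ticks where id > ?",
                              (last_id,)).fetchall()

        ingest("ACME,101.25")
        for row_id, symbol, price in poll(0):
            print("%d %s %.2f" % (row_id, symbol, price))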

    Read the article
