Search Results

Search found 88762 results on 3551 pages for 'svn server'.


  • Mirroring svn repository

    - by cardy
I have an svn repository and I'd like to have it duplicated over multiple machines for availability purposes. Right now, when my vps goes down I'm unable to connect to the repository, and this is very annoying. The easiest (and expensive) solution is to set up two identical machines and make them work like clones. I'd like to know if there are any alternatives (still involving 2 machines). Ideally I would have two vps in different datacenters, so if one goes down I can rely on the other. Thanks. I need a mirror both for read and write, not only for read. The svn repos are Berkeley DB based.
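
    A hedged sketch of the read side using svnsync, which ships with Subversion (URLs and paths are placeholders; the mirror repository needs a pre-revprop-change hook that exits 0):

        # on the second vps
        svnadmin create /var/svn/mirror
        printf '#!/bin/sh\nexit 0\n' > /var/svn/mirror/hooks/pre-revprop-change
        chmod +x /var/svn/mirror/hooks/pre-revprop-change
        svnsync initialize file:///var/svn/mirror svn://primary.example.com/repo
        svnsync synchronize file:///var/svn/mirror   # re-run from a post-commit hook or cron

    svnsync only gives a read-only mirror; writes must still go to the primary (when serving over HTTP, mod_dav_svn's write-through proxy via SVNMasterURI is the usual way to accept writes at the mirror). Since the repos are Berkeley DB based, migrating them to FSFS with svnadmin dump/load before replicating is also commonly advised, as BDB recovers less gracefully from crashes.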

    Read the article

  • Forward svn port

    - by ankimal
We have our svn server on a machine not accessible from the internet, but we need to be able to check out code from the internet over ssh. Given that we can do port forwarding on a machine accessible from the internet, what's the best way to set this up? Internet -> a machine on our network -> svn server (port forward here?). If not port forwarding, what's the most secure way of doing this, if there is any?
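
    One hedged setup that keeps everything inside ssh (hostnames are placeholders): developers reach the internal svn host through the public gateway with an ssh ProxyJump (OpenSSH 7.3+; older clients can use ProxyCommand with ssh -W):

        # ~/.ssh/config on the developer's machine
        Host svn-internal
            HostName svn.internal.lan
            ProxyJump user@gateway.example.com

        svn checkout svn+ssh://svn-internal/var/svn/repo

    This way only the gateway's ssh port is exposed, and authentication happens on both hops. Forwarding TCP 3690 straight through to svnserve would also work, but it exposes the unencrypted svn protocol to the internet.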

    Read the article

  • SVN - Server sent unexpected return value (500 Internal Server Error)

    - by person
I'm using RabbitVCS to work with Google Code, and I just recently started having problems when trying to commit. Whenever I try to commit, it says:

        Commit failed
        Server sent unexpected return value (500 Internal Server Error) in response to Checkout request for (some file that is involved in the commit; the file it fails on isn't consistent)

    I have no idea what is wrong; any help is appreciated. Thanks.

    Read the article

  • Cannot access SVN repository from another host within the LAN

    - by akaii
I'm trying to connect to a repository I've set up on our server from another host on the same network, but the connection is failing.

        checkout command: svn checkout svn://192.168.11.192/
        error: Can't connect to host '192.168.11.192': Connection refused

    I tried probing port 3690 with telnet, and I can't seem to connect that way either. I thought the port might be blocked, so I added an entry for port 3690 in sysconfig/iptables, but it doesn't seem to have had any effect at all. I'm sure svnserve is running, because I can check out the repository on the server itself using the same command above. What can I possibly try next?
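
    "Connection refused" (rather than a timeout) usually means nothing is listening on that address and port, so one hedged check is whether svnserve is bound to loopback only (the repository path below is a placeholder):

        # on the server
        ss -tlnp | grep 3690          # or: netstat -tlnp | grep 3690
        # 127.0.0.1:3690 here would explain local checkouts working while remote ones fail;
        # restart svnserve listening on all interfaces:
        svnserve -d -r /var/svn --listen-host 0.0.0.0
        # and confirm the edited iptables rules were actually loaded:
        service iptables restart
        iptables -L -n | grep 3690

    As a rule of thumb, a firewall DROP tends to produce a timeout while a REJECT produces "connection refused", so the existing rules are worth reading closely too.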

    Read the article

  • Browser won't connect to svn server

    - by Tammy Wilson
This has been driving me nuts. For some reason, I can't access my svn repository using a browser on this laptop that I'm using right now (Firefox and IE): the connection just times out. I'm at home right now and the server is in another room. It connects OK there, and it also connects OK from my virtual machine on this same laptop. I'm pretty stumped and can't figure out why this is happening. I've also checked the proxies and I'm 100% sure I'm not using any at all. The virtual machine running on this laptop is XP 32-bit and the host is Win7 64-bit. Thanks

    Read the article

  • SVN very slow over HTTP (seems auth related)

    - by Sydius
I'm using SVN version 1.6.6 (r40053) via the command line in Ubuntu 10.04, connecting to a remote repository over HTTP that is on the local network. For a while it worked fine, but it has recently become very slow for any operation that requires communication with the repository; it does eventually work after several minutes (~3m for svn up). Looking at Wireshark, it appears to be taking a full minute between the HTTP auth denial and the subsequent request containing credentials. The issue is local to my machine, because other coworkers running Ubuntu are not having it, and I've tried using my credentials from another machine and it was very fast. I tried deleting the .subversion folder in my home directory and checking everything out fresh, but it didn't help. Update: I think it's auth related. When I check out SVN repositories off the Internet over HTTP (from Google Code, for example), everything is very fast until I do something that requires a password. Before prompting for the password for the first time, it stalls for at least a minute. Update 2: I set the neon-debug-mask in the SVN settings (in /etc/subversion/servers under [Global]) to 138 and it seems to be spending a lot of time on 'auth: Trying Basic challenge...'
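
    For reference, a sketch of the debug setting described above; the option goes in the servers configuration file (system-wide /etc/subversion/servers or per-user ~/.subversion/servers), and 138 is a bitmask of neon's debug channels:

        [global]
        neon-debug-mask = 138

    A stall before the first credential prompt is often name resolution or an alternative auth scheme (e.g. Negotiate/GSSAPI) timing out before Basic is tried; that is an assumption here, but it is consistent with the minute-long gap in the Wireshark capture.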

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory the result sets of queries and stored procedures while keeping the cache correct and up to date. Cached queries are retrieved from SafePeak RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. Connecting the application is done via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about the SafePeak architecture and how it works, I suggest reading this vendor's webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1
    In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"): cache setup and fine-tuning – a critical part of getting good caching results; database templates; choosing which database to cache; and the monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found.

    5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is now shown for all arriving queries, both read queries (which can be cached) and all types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak cache data: If you have an application with many parts, many servers and many possible locations that can actually update the database, or the SQL Server is accessible to many DBAs or software engineers, each of whom can access some database directly and make changes without going through SafePeak, this can create a potential corruption of the data stored in the SafePeak cache. To make sure the SafePeak cache is correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes changes, for example, then SafePeak will simply not know about it and will not clean the SafePeak cache. In the new version, SafePeak introduces a new feature called "Logon Trigger" to solve this challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that allows you to control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features:
    7. User management: the ability to grant someone permissions without letting them change the configuration, so they use SafePeak only as a performance analysis tool.
    8. Better reports for analysis of performance, using 15-minute-resolution charts.
    9. Caching of client cursors.
    10. Support for IPv6.

    Summary
    SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or third-party, since no code changes are made. SafePeak can significantly improve response times by reducing network roundtrips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
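
    As an aside, the hosts-file reroute mentioned above is just an ordinary hosts entry; a hypothetical sketch (the IP is the SafePeak host and the name is whatever hostname the application's connection string already uses):

        # c:\windows\system32\drivers\etc\hosts on each application server
        192.168.1.50    sqlprod01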

    Read the article

  • SQL SERVER – Solution to Puzzle – Simulate LEAD() and LAG() without Using SQL Server 2012 Analytic Function

    - by pinaldave
Earlier I wrote a series on the SQL Server Analytic Functions of SQL Server 2012. To keep the learning maximum and have fun during the series, we had a few puzzles. One of the puzzles was simulating LEAD() and LAG() without using SQL Server 2012 analytic functions. Please read the puzzle here first before reading the solution: Write T-SQL Self Join Without Using LEAD and LAG. When I originally wrote the puzzle I made a small blunder and the question was a bit confusing, which I corrected later on, but I wrote a follow-up blog post over here where I describe the give-away. Quick recap: generate the following results without using SQL Server 2012 analytic functions. I received so many valid answers. Some answers were similar to others and some were very innovative. Some answers were very adaptive and some did not work when I changed the WHERE condition. After selecting all the valid answers, I put them in a table, ran a RANDOM function on them and selected the winners. Here are the valid answers.

    No Joins and No Analytic Functions
    Excellent solution by Geri Reshef – Winner of SQL Server Interview Questions and Answers (India | USA)

        WITH T1 AS
        (SELECT Row_Number() OVER(ORDER BY SalesOrderDetailID) N,
        s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663))
        SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
        CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2)
        ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY N/2) END LeadVal,
        CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY N/2)
        ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2) END LagVal
        FROM T1
        ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty;
        GO

    No Analytic Function and Early Bird
    Excellent solution by DHall – Winner of Pluralsight 30 days Subscription

        -- a query to emulate LEAD() and LAG()
        ;WITH s AS
        (
        SELECT 1 AS ldOffset, -- equiv to 2nd param of LEAD
        1 AS lgOffset, -- equiv to 2nd param of LAG
        NULL AS ldDefVal, -- equiv to 3rd param of LEAD
        NULL AS lgDefVal, -- equiv to 3rd param of LAG
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
        SalesOrderID, SalesOrderDetailID, OrderQty
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        ISNULL( sLd.SalesOrderDetailID, s.ldDefVal) AS LeadValue,
        ISNULL( sLg.SalesOrderDetailID, s.lgDefVal) AS LagValue
        FROM s
        LEFT OUTER JOIN s AS sLd ON s.row = sLd.row - s.ldOffset
        LEFT OUTER JOIN s AS sLg ON s.row = sLg.row + s.lgOffset
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty

    No Analytic Function and Partition By
    Excellent solution by DHall – Winner of Pluralsight 30 days Subscription

        /* a query to emulate LEAD() and LAG() */
        ;WITH s AS
        (
        SELECT 1 AS LeadOffset, /* equiv to 2nd param of LEAD */
        1 AS LagOffset, /* equiv to 2nd param of LAG */
        NULL AS LeadDefVal, /* equiv to 3rd param of LEAD */
        NULL AS LagDefVal, /* equiv to 3rd param of LAG */
        /* Try changing the values of the 4 integer values above to see their effect on the results */
        /* The values given above of 1, 1, null and null behave the same as the default 2nd and 3rd parameters to LEAD() and LAG() */
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
        SalesOrderID, SalesOrderDetailID, OrderQty
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        ISNULL( sLead.SalesOrderDetailID, s.LeadDefVal) AS LeadValue,
        ISNULL( sLag.SalesOrderDetailID, s.LagDefVal) AS LagValue
        FROM s
        LEFT OUTER JOIN s AS sLead ON s.row = sLead.row - s.LeadOffset
        /* Try commenting out this next line when LeadOffset != 0 */
        AND s.SalesOrderID = sLead.SalesOrderID
        /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LEAD() function */
        LEFT OUTER JOIN s AS sLag ON s.row = sLag.row + s.LagOffset
        /* Try commenting out this next line when LagOffset != 0 */
        AND s.SalesOrderID = sLag.SalesOrderID
        /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LAG() function */
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty

    No Analytic Function and CTE Usage
    Excellent solution by Pravin Patel – Winner of SQL Server Interview Questions and Answers (India | USA)

        --CTE based solution
        ;WITH cteMain AS
        (
        SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS sn
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
        sLead.SalesOrderDetailID AS leadvalue,
        sLeg.SalesOrderDetailID AS leagvalue
        FROM cteMain AS m
        LEFT OUTER JOIN cteMain AS sLead ON sLead.sn = m.sn+1
        LEFT OUTER JOIN cteMain AS sLeg ON sLeg.sn = m.sn-1
        ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty

    No Analytic Function and Correlated Subquery Usage
    Excellent solution by Pravin Patel – Winner of SQL Server Interview Questions and Answers (India | USA)

        -- Correlated subquery
        SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
        (
        SELECT MIN(SalesOrderDetailID)
        FROM Sales.SalesOrderDetail AS l
        WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
        AND l.SalesOrderID >= m.SalesOrderID
        AND l.SalesOrderDetailID > m.SalesOrderDetailID
        ) AS lead,
        (
        SELECT MAX(SalesOrderDetailID)
        FROM Sales.SalesOrderDetail AS l
        WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
        AND l.SalesOrderID <= m.SalesOrderID
        AND l.SalesOrderDetailID < m.SalesOrderDetailID
        ) AS leag
        FROM Sales.SalesOrderDetail AS m
        WHERE m.SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty

    This was one of the most interesting puzzles on this blog. Giveaway winners will get the following giveaways: Geri Reshef and Pravin Patel, SQL Server Interview Questions and Answers (India | USA); DHall, Pluralsight 30 days Subscription. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
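
    For comparison, the SQL Server 2012 analytic functions these solutions simulate express the same result directly (same table and SalesOrderID filter as in the solutions above):

        SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
        LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) AS LeadVal,
        LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) AS LagVal
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty;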

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that they still support the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember
    Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, which is converted into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP
    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases
    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = [InMemoryDB_mod_dir1], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
        (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code, kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
        ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
        ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
        TO FILEGROUP hekaton_mod;
        GO

    Creating Tables
    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)
    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing Memory-Optimized Tables
    A memory-optimized table must always have an index for all tables created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
        [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        [City] VARCHAR(32) NULL,
        [State_Province] VARCHAR(32) NULL,
        [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure it is considered a memory-optimized table and not just a normal table, and we also used the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice that this table has a PRIMARY KEY declared up front, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables.

    I will be using the database I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Creating a disk-based table
        CREATE TABLE dbo.Disktable
        (
        Id INT IDENTITY,
        Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Now insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
        INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
        END
        GO
        -- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
        INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
        END
        GO
        -- Now finally do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
        INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
        END
        GO

    The above 3 inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with durability SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk, which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
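
    The natively compiled stored procedures mentioned earlier deserve a quick illustration. Below is a minimal sketch against the Mem_Table defined above; the procedure name and the inserted values are hypothetical, and note that NATIVE_COMPILATION requires SCHEMABINDING, an EXECUTE AS clause and an ATOMIC block:

        -- compiled to machine code; may only reference memory-optimized tables
        CREATE PROCEDURE dbo.usp_InsertMemTable
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            INSERT INTO dbo.Mem_Table ([Name], [City], [State_Province], [LastModified])
            VALUES ('sachin', 'Mumbai', 'MH', GETDATE());
        END
        GO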

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
The month of June till the middle of July was a fever of sports. First it was Wimbledon tennis, and then soccer fever was all over. There is a huge number of fan followers, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the latter part of July. Following these sports as the events unfold towards the finals, there are a number of ways statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is team performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past: head-to-head stats during a World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team, because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting to the next level of information. Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the batch execution statistics and the individual stats are like object-level statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating.

    Performance – Batch Execution Statistics
    Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below.
    Section 1: a graphical bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each individual batch are available in Section 3 (detailed below).
    Section 2: a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache.
    Section 3: the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows also shows the # Executions and # Plans Generated for each of the queries.

    Performance – Object Execution Statistics
    The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects, with the Avg. CPU, IO and other details, which are self-explanatory.

    At a high level, both reports are based on queries against two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build values based on calculations using columns in them:

        SELECT *
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL
        ORDER BY s1.sql_handle;

    This is one of the simplest forms of report, and in future blogs we will look at more complex ones. I truly hope these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
Let us start with humor! I think with this series on various reports, we have come to a logical point. We have covered all the reports at the server level; the reports we saw were targeted towards activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:
    Patient: Doc, it hurts when I touch my head.
    Doc: Ok, go on. What else have you experienced?
    Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
    Doc: Hmmm...
    Patient: I feel it hurts when I touch anywhere in my body.
    Doc: Ahh... now I get it. You need a plaster on your finger, John.
    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four "disk" reports because they are similar: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table and Disk Usage by Partition.

    Disk Usage
    This report shows multiple pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections.
    Section 1 shows the high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; here it is using sys.data_spaces and DBCC SHOWFILESTATS.
    Sections 2 and 3 are pie charts, one for the data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows the space consumed by data and indexes, and the unallocated space, which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF(LOGSPACE) and shows how much empty space we have in the physical transaction log file.
    Section 4 shows the data from the default trace and looks at event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. Here is an expanded view of that section. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled", as highlighted below.
    Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the file.
    In case you have in-memory objects in the database (from SQL Server 2014), the report shows information about those as well. Here is a screenshot taken for a different database, which has an in-memory table. I have highlighted the new things which are only shown for an in-memory database. The newly highlighted sections use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.
    The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table which has a lot of reserved space that is unused? Which partition of the table holds more data?

    Disk Usage by Top Tables
    This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

    Disk Usage by Table
    This report is the same as the previous one, with a few differences: the first report shows only 1000 rows, and the first report orders by values in the DMV sys.dm_db_partition_stats whereas the second orders by the name of the table. Both reports have an interactive sort facility: we can click on any column header and change the sort order of the data.

    Disk Usage by Partition
    This report shows the distribution of the data in a table based on the partitions in the table. It is similar to the previous output, but with the partition details now. Here is the query taken from Profiler:

        SELECT ROW_NUMBER() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
        (DENSE_RANK() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
        a1.OBJECT_ID,
        a5.name AS [schema],
        a2.name,
        a1.index_id,
        a3.name AS index_name,
        a3.type_desc,
        a1.partition_number,
        a1.used_page_count * 8 AS total_used_pages,
        a1.reserved_page_count * 8 AS total_reserved_pages,
        a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID)
        AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
        WHERE (SELECT MAX(DISTINCT partition_number) FROM sys.dm_db_partition_stats a4 WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
        AND a2.TYPE <> N'S' AND a2.TYPE <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • WSAECONNRESET (10054) error using WebDrive to map to a Subversion/Apache WebDAV share

    - by Dylan Beattie
Hello, I'm using WebDrive to map a drive letter to a WebDAV share running on Subversion with the SVNAutoversioning flag enabled. The Subversion server is running CollabNet Subversion Edge with LDAP authentication. When trying to connect using WebDrive, I get:

        Connecting to site myserver
        Connecting to http://myserver/webdrive/
        Resolving url myserver to an IP address
        Url resolved to IP address 192.168.0.12
        Connecting to 192.168.0.12 on port 80
        Connected successfully to the server on port 80
        Testing directory listing ...
        Connecting to 192.168.0.12 on port 80
        Connected successfully to the server on port 80
        Unable to connect to server, error information below
        Error: Socket receive failure (4507)
        Operation: Connecting to server
        Winsock Error: WSAECONNRESET (10054)

    The httpd.conf file running on the server contains the following section:

        <Location /webdrive/>
          DAV svn
          SVNParentPath "C:\Program Files\Subversion\data\repositories"
          SVNReposName "My Subversion WebDrive"
          AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file"
          SVNListParentPath On
          Allow from all
          AuthType Basic
          AuthName "My Subversion Repository"
          AuthBasicProvider csvn-file-users ldap-users
          Require valid-user
          ModMimeUsePathInfo on
          SVNAutoversioning on
        </Location>

    and in the Apache error_yyyy_mm_dd.log file on the server, I'm seeing this when I try to connect via WebDAV:

        [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
        [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie
        [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' OPTIONS webdrive:/
        [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
        [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie
        [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/
        [Mon Jan 10 14:53:25 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting.
        [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
        [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
        [Mon Jan 10 14:53:25 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations
        [Mon Jan 10 14:53:25 2011] [notice] Server built: Oct 4 2010 19:55:36
        [Mon Jan 10 14:53:25 2011] [notice] Parent: Created child process 4368
        [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child
        [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
        [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
        [Mon Jan 10 14:53:25 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'.
        [Mon Jan 10 14:53:25 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'.
        [Mon Jan 10 14:53:25 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'.
        [Mon Jan 10 14:53:25 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads.
        [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Child process is running
        [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(408): Child 4368: Retrieved our scoreboard from the parent.
        [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 288 and sending it to child process 4368
        [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 276 and sending it to child process 4368
        [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(564): Child 4368: retrieved 2 listeners from parent
        [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Acquired the start mutex.
        [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting 64 worker threads.
        [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 4368
        [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 49159.
        [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 80.
        [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [4368] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
        [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [4368] auth_ldap authenticate: accepting dylan.beattie
        [Mon Jan 10 14:53:25 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/
        [Mon Jan 10 14:53:28 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting.
        [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
        [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
        [Mon Jan 10 14:53:28 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations
        [Mon Jan 10 14:53:28 2011] [notice] Server built: Oct 4 2010 19:55:36
        [Mon Jan 10 14:53:28 2011] [notice] Parent: Created child process 5440
        [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child
        [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com
        [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
        [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
        [Mon Jan 10 14:53:28 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'.
        [Mon Jan 10 14:53:28 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'.
        [Mon Jan 10 14:53:28 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'.
        [Mon Jan 10 14:53:28 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads.
        [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Child process is running
        [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(408): Child 5440: Retrieved our scoreboard from the parent.
        [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 288 and sending it to child process 5440
        [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 276 and sending it to child process 5440
        [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(564): Child 5440: retrieved 2 listeners from parent
        [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Acquired the start mutex.
        [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting 64 worker threads.
        [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 5440
        [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 49159.
        [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 80.

    Browsing http://myserver/webdrive/ from a web browser works fine, and I have a similar set-up working perfectly on a different SVN server that isn't running CollabNet but has had Subversion and Apache installed and configured separately. Any ideas? The Python version error might be a red herring - I've seen it in a couple of places in the log files, and in other scenarios it doesn't appear to be breaking anything...

    Read the article

  • Will this increase my Virtual Private Server failure rate?

    - by Spencer Lim
Will this increase my virtual private server's failure rate if I:
    - install Microsoft Windows Server 2008 Enterprise
    - install SQL Server 2008 Enterprise
    - install IIS 7.5
    - install ASP.NET MVC 2
    - install Microsoft Exchange (should it live inside the Windows Server 2008 VPS, or standalone on its own?)
    - install Team Foundation Server (should it live inside the Windows Server 2008 VPS, or standalone on its own?)
    all on one mini VPS, on a shared DELL PowerEdge R710 plan with this specification:
    - DDR3 ECC RAM, 16 GB total, 1 GB for this VPS
    - DELL PERC 6i RAID controller (this thing alone is about 1.5k-2k) with 15K RPM SAS HDDs (146 GB), 33 GB for this VPS; each HDD is freaking fast, over 300 MB/s read/write is possible with proper tuning
    - the motherboard is a DELL and it has twin redundant PSUs (870 W, 85% efficiency)
    - it runs on two Intel Xeon 5502s (quad core), so about 8 physical processors, fairly shared
    Is there any rule of thumb for what services a single VPS can host? My resources are limited. If it is all installed this way, it seems to defeat the idea of a "VPS"... what will happen? Any guideline on this issue (fully configuring Windows Server 2008 R2)? Thanks for any reply.

    Read the article

  • Unlocking SVN working copy with unversioned resources

    - by Vijay Dev
I have a repository which contains some unversioned directories and files. The server running svn was recently changed, and since the checkout was done using the url svn://OLD-IP, I relocated my svn working copy, this time to the url svn://NEW-DOMAIN-NAME. Since there are some unversioned resources, the switch did not happen properly and the working copy got locked. A cleanup operation did not work either, because of these unversioned resources. I looked it up on the net, found out about svn ignore, and tried that, but to no avail. I am unable to release all the locks. Any ideas on solving the problem? Once I release the locks, I believe I can use svn ignore and carry on with the relocate operation.
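
    A hedged recovery sketch (URLs follow the placeholders above; svn relocate exists only in Subversion 1.7+, and older clients use svn switch --relocate instead):

        # temporarily move the unversioned items out of the working copy, then:
        svn cleanup .
        svn relocate svn://OLD-IP/PATH svn://NEW-DOMAIN-NAME/PATH
        # on clients older than 1.7:
        svn switch --relocate svn://OLD-IP/PATH svn://NEW-DOMAIN-NAME/PATH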

    Read the article

  • "svn: Delta source ended unexpectedly" and proxy server

    - by Vladimir Bezugliy
I cannot check out a working copy if I use a proxy server. SVN starts checking out files, checks out several of them, but then stops and shows the following error:

        svn: Delta source ended unexpectedly

    I can check out the working copy if I work without the proxy server, but I constantly have problems if I work via the proxy. Is this a problem with the proxy server or with svn? UPDATED: I tried to access other svn repositories located on different servers via the proxy, and it seems that I have the problem with only one svn server. The other svn servers are accessible via the proxy server.
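
    For reference, a client-side proxy configuration sketch for Subversion over HTTP; these options live in the servers file (~/.subversion/servers on Unix), and the hostname and port here are hypothetical:

        [global]
        http-proxy-host = proxy.example.com
        http-proxy-port = 3128

    A proxy that truncates or rewrites responses can produce exactly this kind of mid-checkout failure, so comparing behavior against the problem server with a different proxy (or none) is a reasonable isolation test.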

    Read the article

  • How to configure SSL on an instance of SQL Server to allow dedicated users to remotely access it?

    - by The Good Boy
I have configured the instance of SQL Server to allow dedicated users to access it remotely. The connection string Data Source = 192.168.1.2,1433\sqlexpress;etc... has been tested and works. However, I have not configured SSL to secure the communication. How do I configure SSL on an instance of SQL Server to allow dedicated users to remotely access it? edit 1: The dedicated user will administer their database using SQL Server Management Studio. What I want to do is secure the communication while they administer the database using SQL Server Management Studio.
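
    For illustration, a sketch of the usual approach, assuming a certificate whose subject matches the server's name is installed in the machine certificate store: in SQL Server Configuration Manager, open SQL Server Network Configuration -> Protocols for SQLEXPRESS -> Properties, pick the certificate on the Certificate tab, set Force Encryption to Yes on the Flags tab, and restart the service. Clients can also request encryption per connection; Encrypt and TrustServerCertificate are standard connection-string keywords (database name and credentials below are placeholders):

        Data Source=192.168.1.2,1433;Initial Catalog=MyDb;User ID=someuser;Password=...;Encrypt=True;TrustServerCertificate=False

    In Management Studio the equivalent is the "Encrypt connection" checkbox under Options in the Connect dialog.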

    Read the article

  • svn diff including annotate/blame-like information of when changes were made and by whom

    - by Wouter Coekaerts
Can you add annotate/blame-like information to svn diff, so that for every changed line it includes which user and revision changed that line? For example, an annotate-diff comparing revisions 8-10 could output something like:

        9  user1 - some line that user1 deleted in revision 9
        10 user2 + some line that user2 added in revision 10

    The context (unchanged lines around the change) may be included as well or not; it doesn't matter. It's not just a matter of "quickly" writing a shell script combining the output of svn diff and svn annotate: annotate, for example, will never show you who removed a line. It's also not a matter of doing annotate on a revision in the past: we're not interested in who originally added the line that got removed (that's not who "caused" the diff), we want to know who removed it. I suspect the only way to implement something like this is to inspect each and every commit between the two revisions being compared (and somehow map all the changes in the separate diffs to lines in the total diff)... Does a tool exist that does something like that?

    Read the article

  • Convert svn repository to hg - authentication fails

    - by Kim L
I'm trying to convert an existing svn repository to a mercurial repo with the following command:

        hg convert <repository> <folder>

    My problem is that the svn repository's authentication is done with p12 certificates. I'm a bit lost on how to configure the certificate for the hg client so that I can pull the svn repo and convert it. Currently, if I try to run the above command, I get:

        initializing destination hg-client repository
        abort: error: _ssl.c:480: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    In other words, it cannot find the required certificate. The question is, how do I configure my hg client so that it can use my certificate? I'm using the command line hg client on linux.
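
    Since hg convert's svn source goes through the Subversion client libraries, one hedged avenue is Subversion's own client-certificate settings in ~/.subversion/servers, which those libraries read as well (the paths, group name and hostname are hypothetical):

        [groups]
        work = svn.example.com

        [work]
        ssl-client-cert-file = /home/me/certs/mycert.p12
        ssl-client-cert-password = secret    # optional; omit to be prompted

    A quick way to test the certificate setup independently of hg is a plain svn ls https://svn.example.com/repo.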

    Read the article

  • Subversion: svn protocol with HTTP/HTTPS proxy

    - by Neeraj
Hi all, I need to do an svn checkout, say svn checkout svn://XYZ.com/trunk. I am using the svn client from behind a proxy. I have accessed other repositories using the http protocol in the past, but with the svn protocol it fails with "Connection refused"; the reason, I think, is the port not being allowed by the proxy. Moreover, the HTTP protocol is not supported on the server. svn+ssh does get connected, but it prompts for an account on that server, which I don't have. Is there any way out other than requesting an account? Note that I can't affect the settings of the proxy server.
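
    Note that an HTTP proxy only helps svn:// if it permits CONNECT tunnelling to port 3690; Subversion's http-proxy-* settings apply to http:// URLs only. If CONNECT is allowed, one hypothetical workaround is a local tunnel (the proxy hostname and port are placeholders), e.g. with socat:

        socat TCP-LISTEN:3690,fork,reuseaddr PROXY:proxy.example.com:XYZ.com:3690,proxyport=3128
        svn checkout svn://localhost/trunk

    If the proxy refuses CONNECT to 3690, there is no client-side setting that will make the svn protocol traverse it.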

    Read the article

  • Work with a local copy of the "master" SVN repo

    - by Werner
Hi, at work we have an SVN repo, which we access through http, like: svn co http://user@machine/PATH. Even at work, for some mysterious reason, the connections between the local machines and the repo machine are very slow, and the connection between home and work is almost impossible. I wonder if I could do something like:
    1- get a copy of the "master" SVN repo onto my local machine
    2- each time I make modifications etc., use svn co http://user@MYLOCALmachine/PATH instead of svn co http://user@machine/PATH
    3- when I am back at work, "merge" somehow all the modifications in my local repo into the master one.
    Sorry, I am really new to SVN, any hint would be appreciated. Thanks
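
    What steps 1-3 describe is essentially a distributed workflow, which plain SVN does not support for writes; a bridge tool provides the read/write local copy. A hedged sketch using git-svn (the URL is the placeholder from the question, and the directory name is hypothetical; requires git with the git-svn component installed):

        git svn clone http://user@machine/PATH myproject
        cd myproject
        # work and commit locally, even with no connection to the repo machine
        git svn rebase     # fetch new upstream revisions when connectivity is good
        git svn dcommit    # replay local commits back onto the master SVN repo

    A read-only local mirror could instead be kept with svnsync, but commits would still have to go directly to the master repository.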

    Read the article

  • How to make SVN ignore a folder?

    - by pg
I want to make SVN ignore everything that is in my wordpress directory. It brings me nothing but headaches because of auto-updates to plugins etc. When I run:

        svn propedit svn:ignore ./blog

    it tells me:

        svn: None of the environment variables SVN_EDITOR, VISUAL or EDITOR is set, and no 'editor-cmd' run-time configuration option was found

    So I run:

        [phil@sessions www]$ export SVN_EDITOR=emacs
        [phil@sessions www]$ svn propedit svn:ignore ./blog

    But I don't know what to put in here to make it ignore.
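
    For what it's worth, a minimal sketch: a single * on its own line in the property ignores every unversioned item in the directory, and propset avoids the editor entirely (the commit message is a placeholder):

        svn propset svn:ignore '*' ./blog
        svn commit -m "ignore everything in blog" ./blog

    One caveat: svn:ignore only affects unversioned files; anything already under version control keeps showing up in svn status until it is removed from version control.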

    Read the article

  • Getting some perl script errors on execution of svnnotify

    - by user2474633
I installed svnnotify 2.84 and the Perl 5.10 module for Subversion 1.7.11 on Red Hat release 6, and I am using this command in the post-commit hook to get notified:

        svnnotify --repos-path "$1" --revision "$2" --from [email protected] \
        --to-regex-map [email protected]="branches/Test_branch12" \
        --smtp xxxxxx.com --handler HTML::ColorDiff >> /tmp/notify.txt 2>&1

    Once the commit is successful, I can see the errors below in the output file:

        Use of uninitialized value $_[0] in exec at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Can't exec "": No such file or directory at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Use of uninitialized value $_[0] in concatenation (.) or string at /usr/local/share/perl5/SVN/Notify.pm line 2332.
        Cannot exec : No such file or directory
        Child process exited: 512

    Can anyone help with this?

    Read the article

  • svn project with linked common files

    - by Eric
The src directory of my project is composed of three folders: two sub-projects and some common files. I symlinked the files of the common directory into the two sub-projects. I've just imported my project into svn, but ended up with three copies of the content of the common directory. I'm wondering if svn can deal with this, and how; for example, an option which specifies not to follow links. I thought about deleting the linked files from the sub-projects in svn. Thank you, Éric.
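
    The standard SVN answer to a shared common directory is svn:externals rather than symlinks: each sub-project then pulls common from its single versioned location at update time. A hedged sketch (the repository layout and directory names are hypothetical; the ^/ relative-URL syntax needs Subversion 1.5+):

        svn propset svn:externals '^/trunk/common common' subproject1
        svn propset svn:externals '^/trunk/common common' subproject2
        svn commit -m "share common via externals" subproject1 subproject2
        svn update   # materializes common/ inside each sub-project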

    Read the article
