Search Results

Search found 7706 results on 309 pages for 'inner join'.

Page 91/309 | < Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >

  • Data mixing SQL Server

    - by Pythonizo
    I have three tables and a range of two dates (@StartDate, @EndDate): Services, ServicesClients and ServicesClientsDone.

    Services:

        ID | Name
        1  | Supervisor
        2  | Monitor
        3  | Manufacturer

    ServicesClients:

        IDServiceClient | IDClient | IDService
        1               | 1        | 1
        2               | 1        | 2
        3               | 2        | 2
        4               | 2        | 3

    ServicesClientsDone (Period = YYYYMM):

        IDServiceClient | Period
        1               | 201208
        3               | 201210

    I need to insert into ServicesClientsDone the months in the range from @StartDate up to @EndDate. I also have a temporary table (#Periods) with the following list:

        Period
        201208
        201209
        201210

    The query I need should give me back the following list, i.e. the client services combined with the periods of the temporary table, minus those that are already inserted:

        IDServiceClient | Period
        1               | 201209
        1               | 201210
        2               | 201208
        2               | 201209
        2               | 201210
        3               | 201208
        3               | 201209
        4               | 201208
        4               | 201209
        4               | 201210

    This is what I have. Building the periods table:

        DECLARE @i int
        DECLARE @mm int
        DECLARE @yyyy int
        DECLARE @StartDate datetime
        DECLARE @EndDate datetime

        SET @EndDate = GETDATE()
        SET @StartDate = DATEADD(MONTH, -3, GETDATE())

        CREATE TABLE #Periods (Period int)

        SET @i = 0
        WHILE @i <= DATEDIFF(MONTH, @StartDate, @EndDate)
        BEGIN
            SET @mm = DATEPART(MONTH, DATEADD(MONTH, @i, @StartDate))
            SET @yyyy = DATEPART(YEAR, DATEADD(MONTH, @i, @StartDate))
            INSERT INTO #Periods (Period)
            VALUES (CAST(@yyyy as varchar(4)) + RIGHT('00' + CONVERT(varchar(6), @mm), 2))
            SET @i = @i + 1
        END

    Relation between ServicesClients and Services:

        SELECT s.Name, sc.IDClient
        FROM Services AS s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID

    Services already done, and when:

        SELECT s.Name, scd.Period
        FROM Services AS s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID
        JOIN ServicesClientsDone AS scd ON scd.IDServiceClient = sc.IDServiceClient
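    The accepted answer isn't part of this listing, but a minimal sketch of a query that would produce that list, assuming the tables above: cross join every client service against #Periods, then keep only the combinations not already recorded in ServicesClientsDone.

        -- Sketch: all (service client, period) pairs not yet marked as done
        INSERT INTO ServicesClientsDone (IDServiceClient, Period)
        SELECT sc.IDServiceClient, p.Period
        FROM ServicesClients AS sc
        CROSS JOIN #Periods AS p
        WHERE NOT EXISTS (SELECT 1
                          FROM ServicesClientsDone AS scd
                          WHERE scd.IDServiceClient = sc.IDServiceClient
                            AND scd.Period = p.Period)

    Against the sample data this yields exactly the ten rows listed above.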

    Read the article

  • How do I pause main() until all other threads have died?

    - by thechiman
    In my program, I am creating several threads in the main() method. The last line in the main method is a call to System.out.println(), which I don't want to call until all the threads have died. I have tried calling Thread.join() on each thread, however that blocks each thread so that they execute sequentially instead of in parallel. Is there a way to block the main() thread until all other threads have finished executing? Here is the relevant part of my code:

        public static void main(String[] args) {
            //some other initialization code

            //Make array of Thread objects
            Thread[] racecars = new Thread[numberOfRaceCars];

            //Fill array with RaceCar objects
            for(int i=0; i<numberOfRaceCars; i++) {
                racecars[i] = new RaceCar(laps, args[i]);
            }

            //Call start() on each Thread
            for(int i=0; i<numberOfRaceCars; i++) {
                racecars[i].start();
                try {
                    racecars[i].join(); //This is where I tried using join()
                                        //It just blocks all other threads until the current
                                        //thread finishes.
                } catch(InterruptedException e) {
                    e.printStackTrace();
                }
            }

            //This is the line I want to execute after all other Threads have finished
            System.out.println("It's Over!");
        }

    Thanks for the help guys! Eric

    Read the article

  • limiting mysql results by range of a specific key INCLUDING DUPLICATES

    - by aVC
    I have a query:

        SELECT p.*, m.*,
               (SELECT COUNT(*)
                FROM newPhotoonAlert n
                WHERE n.userIDfor='$id' AND n.threadID=p.threadID AND n.seen='0') AS unReadCount
        FROM posts p
        JOIN myMembers m ON m.id = p.user_id
        LEFT JOIN following f ON (p.user_id = f.user_id AND f.follower_id='$id' AND f.request='0' AND f.status='1')
        JOIN myMembers searcher ON searcher.id = '$id'
        WHERE ((f.follower_id = searcher.id) OR m.id='$id')
          AND p.flagged < '5'
        ORDER BY p.threadID DESC, p.positionID

    It brings results as expected, but I want to add another clause to limit them. Say a sample (minimal) set of data looks like this with the above query:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com
        523       745     1           c1.com
        435       689     2           d2.com
        435       688     1           a4.com
        256       345     1           s3.com
        164       316     1           f1.com

    I want to get the rows corresponding to 2 distinct threadIDs, starting from the max, but including duplicates as well. Something like:

        AND p.threadID IN (select just two of all threadIDs currently selected, but include duplicate rows)

    So my result should be:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com
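    The answer isn't included in this excerpt, but one hedged sketch of such a clause in MySQL follows; the derived table works around MySQL's restriction on LIMIT inside an IN subquery, and in practice the inner SELECT should repeat the outer query's filters so the two threadIDs come from the same result set:

        AND p.threadID IN (SELECT t.threadID
                           FROM (SELECT DISTINCT threadID
                                 FROM posts
                                 ORDER BY threadID DESC
                                 LIMIT 2) AS t)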

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably this has been observed when one has started to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are:

    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    - Publishing of test results from builds
    - Publishing of manual test results and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them.

    If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order:

        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type = 0
          and substring(db_name(database_id), 1, 4) = 'Tfs_'
          and DB_NAME(database_id) <> 'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results; it is pretty easy to see on which collection to start the work.

    Find out which tables are possibly too large

    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the version control category, so the only major issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this; replace the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid = tr.testrunid
        inner join tbl_project as p on p.projectid = tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years. As can be seen here, it is pretty obvious that the "Byggtjeneste – Projects" team project is the main one to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid = tr.testrunid
        inner join tbl_project as p on p.projectid = tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    We then got a result from which it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename, len(filename) - CHARINDEX('.', REVERSE(filename)) + 2, 999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename, len(filename) - CHARINDEX('.', REVERSE(filename)) + 2, 999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do (if you have to do something at all), and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether you already have the problem or not, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. Binaries will still be uploaded if:

    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

    - The build servers (the build agents)
    - The machine hosting the Test Controller
    - Local development computers (Visual Studio)
    - Local test computers (MTM)

    It is not required to install it on the TFS server, the test agents or the build controller; it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can be (with some mental effort) deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database',
               OBJECT_NAME(object_id) as 'objectname',
               index_type_desc,
               ghost_record_count,
               version_ghost_record_count,
               record_count,
               avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats(DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. To get it going again, stop all components that depend on the SQL Server, like the TFS server and SPS services (that is, all applications that connect to the SQL Server), then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again.

    The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

    "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

    "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

    This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.

    When you install the TAC there is a very useful readme file in the same directory.

    When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique: it works, but you need to take care when cleaning up.

    Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time cleanup operation and in general scheduled maintenance.

    There are two scenarios we need to consider:

    1. Cleaning up an existing overgrown database
    2. Maintaining a server to avoid an overgrown database, using a scheduled TAC

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    If you want to remove only dll's and pdb's of about that size, add an Extensions section; without that section, all extensions will be deleted:

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs, as a sweeper task taking away all data older than you could care about, and then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use. This second task can be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace" />
              <Include value="dll" />
              <Include value="pdb" />
              <Include value="wmv" />
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension.

    To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough.

    For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database.

    To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to.
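    As a rough illustration of that last step, a minimal T-SQL sketch follows; the collection database name is an example, and the maintenance-plan tasks described above are the more convenient route:

        USE master;
        -- One-off space reclaim after a large TAC cleanup; not for a daily schedule
        DBCC SHRINKDATABASE (Tfs_DefaultCollection);

        -- Shrinking leaves indexes heavily fragmented, so rebuild them afterwards
        USE Tfs_DefaultCollection;
        ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;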

    Read the article

  • MVVM: How to handle interaction between nested ViewModels?

    - by Dan Bryant
    I've been experimenting with the oft-mentioned MVVM pattern and I've been having a hard time defining clear boundaries in some cases. In my application, I have a dialog that allows me to create a Connection to a Controller. There is a ViewModel class for the dialog, which is simple enough. However, the dialog also hosts an additional control (chosen by a ContentTemplateSelector), which varies depending on the particular type of Controller that's being connected. This control has its own ViewModel. The issue I'm encountering is that, when I close the dialog by pressing OK, I need to actually create the requested connection, which requires information captured in the inner Controller-specific ViewModel class. It's tempting to simply have all of the Controller-specific ViewModel classes implement a common interface that constructs the connection, but should the inner ViewModel really be in charge of this construction? My general question is: are there any generally-accepted design patterns for how ViewModels should interact with each other, particularly when a 'parent' VM needs help from a 'child' VM in order to know what to do?

    Read the article

  • Nested SQL Select statement fails on SQL Server 2000, ok on SQL Server 2005

    - by Jay
    Here is the query:

        INSERT INTO @TempTable
        SELECT UserID,
               Name,
               Address1 = (SELECT TOP 1 [Address]
                           FROM (SELECT TOP 1 [Address]
                                 FROM [UserAddress] ua
                                 INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                                 WHERE ua.UserID = u.UserID
                                 ORDER BY uo.AddressOrder ASC) q
                           ORDER BY AddressOrder DESC),
               Address2 = (SELECT TOP 1 [Address]
                           FROM (SELECT TOP 2 [Address]
                                 FROM [UserAddress] ua
                                 INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                                 WHERE ua.UserID = u.UserID
                                 ORDER BY uo.AddressOrder ASC) q
                           ORDER BY AddressOrder DESC)
        FROM User u

    In this scenario, users have multiple address definitions, with an integer field specifying the preferred order. "Address2" (the second preferred address) attempts to take the top two preferred addresses, order them descending, and then take the top one from the result. You might say: just use a subquery which does a SELECT for the record with "2" in the Order field, but the Order values are not contiguous. How can this be rewritten to conform to SQL 2000's limitations? Very much appreciated.
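    Not from the thread, but a hedged sketch of one SQL 2000-friendly rewrite, assuming the schema above: drop the nested derived table (its correlated reference to u.UserID is what SQL Server 2000 rejects) and define the second address as the lowest-ordered address above the minimum, which also copes with non-contiguous order values:

        SELECT u.UserID,
               u.Name,
               Address1 = (SELECT TOP 1 ua.[Address]
                           FROM [UserAddress] ua
                           INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                           WHERE ua.UserID = u.UserID
                           ORDER BY uo.AddressOrder ASC),
               Address2 = (SELECT TOP 1 ua.[Address]
                           FROM [UserAddress] ua
                           INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                           WHERE ua.UserID = u.UserID
                             AND uo.AddressOrder > (SELECT MIN(uo2.AddressOrder)
                                                    FROM UserAddressOrder uo2
                                                    WHERE uo2.UserID = u.UserID)
                           ORDER BY uo.AddressOrder ASC)
        FROM [User] u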

    Read the article

  • RIA Services Filter descriptor

    - by Mohit
    I have a FilterDescriptor as shown below. The PropertyPath is of type 'char?'. I get the following InvalidOperationException when I filter by entering a value Y:

        InnerException {System.InvalidOperationException: A FilterDescriptor with its PropertyPath equal to 'Valid' cannot be evaluated.
         ---> System.ArgumentException: Operator 'StartsWith' incompatible with operand types 'Char?' and 'Char?'
         ---> System.ArgumentNullException: Value cannot be null.
        Parameter name: method
           at System.Linq.Expressions.Expression.ValidateCallArgs(Expression instance, MethodInfo method, ReadOnlyCollection`1& arguments)
           at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, IEnumerable`1 arguments)
           at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, Expression[] arguments)
           at System.Windows.Controls.LinqHelper.GenerateMethodCall(String methodName, Expression left, Expression right)
           at System.Windows.Controls.LinqHelper.GenerateStartsWith(Expression left, Expression right)
           at System.Windows.Controls.LinqHelper.BuildFilterExpression(Expression propertyExpression, FilterOperator filterOperator, Expression valueExpression, Boolean isCaseSensitive, Expression& filterExpression)
           --- End of inner exception stack trace ---
           --- End of inner exception stack trace ---}
        System.Exception {System.InvalidOperationException}

    Read the article

  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .NET 3.5 web app that consumes third party web services. The proxy was created by adding a web reference to their WSDL. This proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions:

        An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught

    If I follow the URL to the page that generated the exception, I can't recreate it.

    Edit: here is most of the exception, and where it bubbled up from:

        Message : Internal Error
        Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
        Source : System.Web.Services
        Help link :
        Actor :
        Code : http://schemas.xmlsoap.org/soap/envelope/:Client
        Detail :
        Lang :
        Node :
        Role :
        SubCode :
        Data : System.Collections.ListDictionaryInternal
        TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean)
        Stack Trace :
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73

    Edit 2: I sometimes get the following inner exceptions logged:

        Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
        Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Source : System
        Help link :
        Data : System.Collections.ListDictionaryInternal
        TargetSite : Int32 Read(Byte[], Int32, Int32)
        Stack Trace :
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)

    And/or:

        Message : An existing connection was forcibly closed by the remote host
        Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Source : System
        Help link :
        ErrorCode : 10054
        SocketErrorCode : ConnectionReset
        NativeErrorCode : 10054
        Data : System.Collections.ListDictionaryInternal
        TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags)
        Stack Trace :
           at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)

    Update: we're still working on it. Originally there was a route issue, which was resolved, but we're still getting the inner exception with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and they may be answering the SYN from one IP and the SYN/ACK from a different one. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.

    Read the article

  • Asterisk - Trying to use call files to create a conference call between two dynamic numbers

    - by Hank
    I'm trying to set up an Asterisk system that will allow me to create a conference call between two dynamic numbers. It seems I can use 'call files' to make Asterisk initiate the call without needing an incoming call - http://www.voip-info.org/tiki-index.php?page=Asterisk+auto-dial+out. This example seems to be what I'd need:

        Channel: SIP/mytrunk/12345678
        MaxRetries: 2
        RetryTime: 60
        WaitTime: 30
        Context: callme
        Extension: 800
        Priority: 2

    I can generate this file with some scripting language and then place it into the Asterisk call file folder. The problem I'm having is: how do I call out to two numbers and join them in a conference call? The MeetMe plugin/extension seems to be what I need in terms of conference calling, I'm just unsure as to how I'd use the two together and join them. Also, is it possible to have multiple 2-person conference calls at the same time? Is setting this up as simple as setting aside X amount of 'channels' in meetme.conf?

    Read the article

  • onload script does not work in subview page in JSF

    - by jackrobert
    Hi, here I have two JSP pages, outerPage.jsp and innerPage.jsp. The outer page includes innerPage.jsp. The inner page has one text field and one button. I need focus on the text field while the page (innerPage.jsp) is loading. I wrote a JavaScript function for it, but it does not work. The code is:

    outerPage.jsp:

        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
        <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j" %>
        <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%>
        <f:view>
          <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>Outer Viewer</title>
            <meta name="description" content="Outer Viewer" />
          </head>
          <body id="outerMainBody">
            <rich:page id="richPage">
              <rich:layout>
                <rich:layoutPanel position="center" width="100*">
                  <a4j:outputPanel>
                    <f:verbatim><table style="padding: 5px;"><tbody><tr>
                      <td>
                        <jsp:include page="innerPage.jsp" flush="true"/>
                      </td>
                    </tr></tbody></table></f:verbatim>
                  </a4j:outputPanel>
                </rich:layoutPanel>
              </rich:layout>
            </rich:page>
          </body>
        </f:view>

    innerPage.jsp:

        <%@page contentType="text/html" pageEncoding="UTF-8"%>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
        <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j" %>
        <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%>
        <f:subview id="innerViewerSubviewId">
          <f:verbatim><head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <title>Inner Viewer</title>
            <script type="text/javascript">
              //This script does not get called during the page loading (onload)
              function cursorFocus() {
                alert("Cursor Focuse Method called...");
                document.getElementById("innerViewerForm:innerNameField").focus();
                alert("Cursor Focuse method end!!!");
              }
            </script>
          </head>
          <body onload="cursorFocus();"></f:verbatim>
          <h:form id="innerViewerForm">
            <rich:panel id="innerViewerRichPanel">
              <f:facet name="header">
                <h:outputText value="Inner Viewer" />
              </f:facet>
              <a4j:outputPanel id="innerViewerOutputPanel">
                <h:panelGrid id="innerViewerSearchGrid" columns="2" cellpadding="3" cellspacing="3">
                  <%-- Row 1 For Text Field --%>
                  <h:outputText value="inner Name : " />
                  <h:inputText id="innerNameField" value=""/>
                  <%-- Row 2 For Test Button --%>
                  <h:outputText value="" />
                  <h:commandButton value="TestButton" action="test" />
                </h:panelGrid>
              </a4j:outputPanel>
            </rich:panel>
          </h:form>
          <f:verbatim></body></f:verbatim>
        </f:subview>
        <f:verbatim></html></f:verbatim>

    The cursorFocus script is never called. I need the cursor focused in the text field after the page is displayed. Thanks in advance.

    Read the article

  • Cucumber can't find installed gems

    - by artemave
    environment/cucumber.rb:

        ...
        # gem dependencies
        config.gem 'cucumber-rails', :lib => false, :version => '>=0.3.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'database_cleaner', :lib => false, :version => '>=0.5.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'webrat', :lib => false, :version => '>=0.7.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'spork', :lib => false, :version => '>=0.7.5' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'factory_girl', :source => 'http://gemcutter.org'
        config.gem 'selenium-client', :lib => false
        config.gem 'Selenium', :lib => false
        config.gem 'rspec', :lib => 'spec'
        config.gem 'rspec-rails', :lib => 'spec/rails'
        config.gem 'test-unit', :lib => false

    Running cucumber gives a missing gems error:

        artem:~/projects/food4feed (master)$ cucumber
        ...
        no such file to load -- test-unit
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `block in load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:169:in `process'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /home/artem/projects/food4feed/config/environment.rb:9:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/projects/food4feed/features/support/env.rb:12:in `block in <top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/spork-0.8.1/lib/spork.rb:23:in `prefork'
        /home/artem/projects/food4feed/features/support/env.rb:10:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:85:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:48:in `execute!'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:20:in `execute'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/bin/cucumber:8:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `<main>'

        Missing these required gems:
          selenium-client
          Selenium
          rspec-rails
          test-unit

        You're running:
          ruby 1.9.1.378 at /home/artem/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
          rubygems 1.3.5 at /home/artem/.rvm/gems/ruby-1.9.1-p378, /home/artem/.rvm/gems/ruby-1.9.1-p378%global

    All gems are obviously there:

        artem:~/projects/food4feed (master)$ gem list | egrep "elenium|rspec|test-unit"
        rspec (1.3.0)
        rspec-rails (1.3.2)
        Selenium (1.1.14)
        selenium-client (1.2.18)
        test-unit (2.0.7)

    The even more confusing part is that it only complains about certain gems. factory_girl and rspec don't cause problems. Any idea what is going on?

    My environment: Rails 2.3.5, cucumber (0.6.3), cucumber-rails (0.3.0)

    Read the article

  • What is syncobj in SQL Server

    - by hgulyan
    Hi. I run this script to search for a particular text in sys.columns, and I get a lot of results like "dbo.syncobj_0x3934443438443332":

        SELECT c.name, s.name + '.' + o.name
        FROM sys.columns c
        INNER JOIN sys.objects o ON c.object_id = o.object_id
        INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
        WHERE c.name LIKE '%text%'

    If I get it right, they are replication objects. Is it so? Can I just throw them out of my query, like o.name NOT LIKE '%syncobj%', or is there another way? Thank you.
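    The answer isn't shown in this listing, but for what it's worth, a hedged alternative to filtering by name: replication is supposed to mark its syncobj views as system objects, so (assuming that marking is in place on your server) the flag can be used instead:

        SELECT c.name, s.name + '.' + o.name
        FROM sys.columns c
        INNER JOIN sys.objects o ON c.object_id = o.object_id
        INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
        WHERE c.name LIKE '%text%'
          AND o.is_ms_shipped = 0  -- assumption: excludes replication-generated syncobj views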

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly how our network looks: a single server with a network router. Everything is set up, but I cannot connect our Macs to this server under Login Options - Join.... Our server's name is Toolbox, and I have tried Toolbox.local and Toolbox.private, and prepended the afp:// protocol to the name, but nothing; our Macs just don't want to connect this way. Our router has DHCP and hands out all the IP addresses naturally. Would I have to add Toolbox.local to the DNS on the router and link it via a static internal IP to the server? Our Macs keep giving the following error while trying to join the Network Account Server:

        Unable to add server
        Could not resolve the address (2200)

    What am I doing wrong?

    Read the article

  • Error in joining 4 AVI files.

    - by goldenmean
    Hello, I have 4 AVI files. Each file plays properly in VLC and Windows Media Player. The video/audio codec in each of these AVI files is XviD/MPGA. I used the AVI joiners below: Quick AVI Joiner, AVI Join, and BoilSoft Video Joiner (trial version), but all of them gave an error when I selected the first part's file, saying: "format of the part1.avi file cannot be recognized". What could be the problem? How do I join these AVI files? Any pointers will help. Thanks. -AD.

    Read the article

  • Solving Slow Query

    - by Chris
    We are installing a new forum (YAF) for our site. One of the stored procedures is extremely slow; in fact it always times out in the browser. If I run it in SSMS it takes nearly 10 minutes to complete. Is there a way to find out what part of this query is taking so long?

    The query:

        DECLARE @BoardID int
        DECLARE @UserID int
        DECLARE @CategoryID int = null
        DECLARE @ParentID int = null

        SET @BoardID = 1
        SET @UserID = 2

        select a.CategoryID,
               Category = a.Name,
               ForumID = b.ForumID,
               Forum = b.Name,
               Description,
               Topics = [dbo].[yaf_forum_topics](b.ForumID),
               Posts = [dbo].[yaf_forum_posts](b.ForumID),
               Subforums = [dbo].[yaf_forum_subforums](b.ForumID, @UserID),
               LastPosted = t.LastPosted,
               LastMessageID = t.LastMessageID,
               LastUserID = t.LastUserID,
               LastUser = IsNull(t.LastUserName, (select Name from [dbo].[yaf_User] x where x.UserID = t.LastUserID)),
               LastTopicID = t.TopicID,
               LastTopicName = t.Topic,
               b.Flags,
               Viewing = (select count(1) from [dbo].[yaf_Active] x
                          JOIN [dbo].[yaf_User] usr ON x.UserID = usr.UserID
                          where x.ForumID = b.ForumID AND usr.IsActiveExcluded = 0),
               b.RemoteURL,
               x.ReadAccess
        from [dbo].[yaf_Category] a
        join [dbo].[yaf_Forum] b on b.CategoryID = a.CategoryID
        join [dbo].[yaf_vaccess] x on x.ForumID = b.ForumID
        left outer join [dbo].[yaf_Topic] t ON t.TopicID = [dbo].[yaf_forum_lasttopic](b.ForumID, @UserID, b.LastTopicID, b.LastPosted)
        where a.BoardID = @BoardID
          and ((b.Flags & 2) = 0 or x.ReadAccess <> 0)
          and (@CategoryID is null or a.CategoryID = @CategoryID)
          and ((@ParentID is null and b.ParentID is null) or b.ParentID = @ParentID)
        order by a.SortOrder, b.SortOrder

    IO statistics:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Active'. Scan count 14, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_User'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Topic'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Category'. Scan count 0, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Forum'. Scan count 0, logical reads 488, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_UserGroup'. Scan count 231, logical reads 693, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_ForumAccess'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_AccessMask'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_UserForum'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Client statistics (trial / average):

        Client Execution Time                                   11:54:01

        Query Profile Statistics
        Number of INSERT, DELETE and UPDATE statements          0        0.0000
        Rows affected by INSERT, DELETE, or UPDATE statements   0        0.0000
        Number of SELECT statements                             8        8.0000
        Rows returned by SELECT statements                      19       19.0000
        Number of transactions                                  0        0.0000

        Network Statistics
        Number of server roundtrips                             3        3.0000
        TDS packets sent from client                            3        3.0000
        TDS packets received from server                        34       34.0000
        Bytes sent from client                                  3166     3166.0000
        Bytes received from server                              128802   128802.0000

        Time Statistics
        Client processing time                                  156478   156478.0000
        Total execution time                                    572009   572009.0000
        Wait time on server replies                             415531   415531.0000

    Execution plan
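    The replies aren't shown here, but a common first step (a generic sketch, not YAF-specific) is to run the statement in SSMS with per-statement and per-operator statistics switched on, then look for the operators with the largest actual row counts and reads; scalar UDFs such as dbo.yaf_forum_topics are invoked once per output row and are frequent culprits in queries shaped like this.

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;
        SET STATISTICS PROFILE ON;  -- per-operator actual row counts and executions

        -- paste the slow SELECT from above here and run it

        SET STATISTICS PROFILE OFF;
        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;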

    Read the article

  • Co-Authors Wordpress Plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus plugin for Wordpress has a very annoying bug. The custom function coauthors_wp_list_authors lists authors the same way the Wordpress function wp_list_authors does, but it does not include authors in the list who don't have a post of their own: if they only have entries in which they are listed as co-author but not as author, they will not be included in the list. That is of course missing a very important point.

    I've looked at the faulty SQL statement, but unfortunately my knowledge of advanced SQL, especially when it comes to JOINs, as well as my knowledge of the WP database structure, is too limited and I remain clueless. There is a topic in the WP support forum, but unfortunately the information there is very outdated and the fix is not applicable anymore. I couldn't find any other, more current solutions on the internet. I'd be glad if someone here could help fix the SQL statement so it also lists co-authors who don't have posts where they're the sole author, as well as display the correct post count for all authors. Here's the entire function for reference purposes, with a comment marking the SQL statement:

        function coauthors_wp_list_authors($args = '') {
            global $wpdb, $coauthors_plus;
            $defaults = array(
                'optioncount' => false, 'exclude_admin' => true,
                'show_fullname' => false, 'hide_empty' => true,
                'feed' => '', 'feed_image' => '', 'feed_type' => '', 'echo' => true,
                'style' => 'list', 'html' => true
            );
            $r = wp_parse_args( $args, $defaults );
            extract($r, EXTR_SKIP);
            $return = '';

            $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " . ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name");

            $author_count = array();

            # this is the SQL statement which doesn't work correctly:
            $query  = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count";
            $query .= " FROM $wpdb->posts";
            $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)";
            $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
            $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)";
            $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)";
            $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql( 'post' );
            $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'";
            $query .= " GROUP BY post_author";

            foreach ((array) $wpdb->get_results($query) as $row) {
                $author_count[$row->post_author] = $row->count;
            }

            foreach ( (array) $authors as $author ) {
                $link = '';
                $author = get_userdata( $author->ID );
                $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0;
                $name = $author->display_name;
                if ( $show_fullname && ($author->first_name != '' && $author->last_name != '') )
                    $name = "$author->first_name $author->last_name";
                if( !$html ) {
                    if ( $posts == 0 ) {
                        if ( ! $hide_empty )
                            $return .= $name . ', ';
                    } else
                        $return .= $name . ', ';
                    continue;
                }
                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= '<li>';
                if ( $posts == 0 ) {
                    if ( ! $hide_empty )
                        $link = $name;
                } else {
                    $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) . '" title="' . esc_attr( sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name) ) . '">' . $name . '</a>';
                    if ( (! empty($feed_image)) || (! empty($feed)) ) {
                        $link .= ' ';
                        if (empty($feed_image))
                            $link .= '(';
                        $link .= '<a href="' . get_author_feed_link($author->ID) . '"';
                        if ( !empty($feed) ) {
                            $title = ' title="' . esc_attr($feed) . '"';
                            $alt = ' alt="' . esc_attr($feed) . '"';
                            $name = $feed;
                            $link .= $title;
                        }
                        $link .= '>';
                        if ( !empty($feed_image) )
                            $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />';
                        else
                            $link .= $name;
                        $link .= '</a>';
                        if ( empty($feed_image) )
                            $link .= ')';
                    }
                    if ( $optioncount )
                        $link .= ' ('. $posts . ')';
                }
                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= $link . '</li>';
                else if ( ! $hide_empty )
                    $return .= $link . ', ';
            }
            $return = trim($return, ', ');
            if ( ! $echo )
                return $return;
            echo $return;
        }
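    The fix isn't part of this excerpt, but a hedged sketch of the direction the counting query could take: drive it from the author-taxonomy terms instead of from the posts table, so users who only ever appear as co-authors still get a count row (term_taxonomy.count already tracks how many posts carry each term). The taxonomy name is whatever $coauthors_plus->coauthor_taxonomy holds, assumed here to be 'author', and the wp_ prefix stands in for $wpdb's table names:

        SELECT u.ID AS post_author,
               t.name AS user_name,
               tt.count AS count
        FROM wp_users u
        INNER JOIN wp_terms t ON t.name = u.user_login
        INNER JOIN wp_term_taxonomy tt ON tt.term_id = t.term_id
        WHERE tt.taxonomy = 'author'  -- assumption: the co-author taxonomy name

    Note this drops the original's get_private_posts_cap_sql() visibility check; restoring it would mean joining back to wp_posts, but with LEFT JOINs so zero-post co-authors are kept.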

    Read the article

  • FTP OVER SSL - Invalid Token Error

    - by crazsmith
    I am trying to implement FTP over SSL to upload encrypted files. I've created an SSL certificate and sent it to the vendor, but I couldn't make an FTPS connection to the server. When connecting via FTPS, I'm authenticating using my private key file. I have tried .NET FtpWebRequest, SmartFTP and CuteFTP Pro. I am getting the following error:

        A call to SSPI failed. See inner exception.

    The inner exception is "The token supplied to the function is invalid".

        FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create("ftp://RemoteHost.Com");
        request.Credentials = new NetworkCredential("UserName", "Password");
        request.KeepAlive = false;
        request.EnableSsl = true;
        X509Certificate2 cert2 = new X509Certificate2("PrivateKeyFile.pfx", "password");
        request.ClientCertificates.Add(cert2);
        FtpWebResponse response = (FtpWebResponse)request.GetResponse();

    Any help appreciated. Thanks.

    Read the article

  • How to only round selected corners in a fancytitle box with Tikz

    - by Christian Jonassen
    If you take a look at http://www.texample.net/tikz/examples/boxes-with-text-and-math/ the boxes there have rounded corners. In the examples, both the box itself and the title are boxes. I want the title box to not have the bottom corners rounded. On page 120 in the manual there is a description of how to draw with and without rounded corners. However, I want to use this in a fancytitle. It looks a bit silly to have the fancytitle as a box where all corners are rounded when it is as wide as the box itself.

        \begin{tikzpicture}[baseline=-2cm]
        \node [mybox] (box){
            \begin{minipage}[t!]{0.50\textwidth}
                Help, I'm a box
            \end{minipage}
        };
        \node[fancytitle, text width=0.5423\textwidth, text centered, rounded corners] at (box.north) {Help, I'm a title};
        \end{tikzpicture}

    The style I use is this:

        \tikzstyle{mybox} = [draw=red, fill=blue!20, very thick, rectangle, rounded corners, inner sep=10pt, inner ysep=20pt]
        \tikzstyle{fancytitle} = [fill=red, text=white]

    Read the article

  • Free MP3 merge for Mac OS X

    - by Lilly
    Hi, I need to merge several MP3 tracks into one; I use Mac OS X 10.5. I want to convert all my Harry Potter CDs for my iPod, but not with a new track every minute (as it is on the CDs), rather chapterwise. Where can I get free software for this? Help, please!

    (I've already tried: Jfuse, but after I had merged a few chapters it said I had to buy it; emicsoft VOB Converter for Mac; File Stitcher; but since they all were shareware, for free they would only let me merge 2 files at once (that would take me days) or half of each file, which is useless of course. I also tried iTunes' advanced settings ("join CD tracks") when importing the CDs, but it would only let me join the complete CD, not chapters...)

    (Sorry for my English, hope you could understand what I wanted to say.)

    Read the article

  • Win Server 2008 force kerberos setting

    - by ftiaronsem
    I am currently facing the problem that a Linux machine running Ubuntu 10.04 LTS, with Samba and winbindd installed, is unable to join a domain that is managed by a Windows 2008 DC. The Linux config is probably alright, since I have successfully used it at multiple sites running 2008 as well as 2003 DCs. The error I get ("libads/kerberos.c: Join to domain is not valid. Client credentials have been revoked") indicates that there is a Kerberos problem. Normally the Linux box is supposed to authenticate via NTLM and is configured that way. The only reason I can imagine why it tries Kerberos is that the DC is forcing it. Do you know whether there is any setting in the security policies of a Windows 2008 server that would completely block NTLM, forcing Kerberos? If so, where can I find this setting?

    Read the article

  • Constructing dynamic columns from parameters in Sybase

    - by Chapax
    Hi, I'm trying to write a stored proc (SP) in Sybase. The SP takes 5 varchar parameters. Based on the parameters passed, I want to construct the column names to be selected from a particular table. The below works:

        DECLARE @TEST VARCHAR(50)
        SELECT @TEST = "country"
        --print @TEST
        execute("SELECT DISTINCT id_country AS id_level, Country AS nm_level
                 FROM tempdb..tbl_books
                 INNER JOIN (tbl_ch2_bespoke_report
                             INNER JOIN tbl_ch2_bespoke_rpt_mapping
                                ON tbl_ch2_bespoke_report.id_report = tbl_ch2_bespoke_rpt_mapping.id_report)
                    ON id_" + @TEST + " = tbl_ch2_bespoke_rpt_mapping.id_pnl_level
                 WHERE tbl_ch2_bespoke_report.id_report = 14")

    but it gives me multiple results:

        1 row(s) affected.

           id_level  nm_level
        1  4376      XYZ
        2  4340      ABC

    I would however like to obtain only the 2nd result. Do I necessarily need to use dynamic SQL to achieve this? Many thanks for your help. --Chapax
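    A hedged guess, not from the thread: the stray first result may just be the row-count chatter from the variable assignment, which Sybase can usually be told to suppress so that only the dynamic SELECT's result set comes back:

        SET NOCOUNT ON  -- assumption: suppresses the "1 row(s) affected." message

        DECLARE @TEST VARCHAR(50)
        SELECT @TEST = "country"
        -- the execute("SELECT DISTINCT id_country ...") call from above, unchanged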

    Read the article

  • Scala: How to combine parser combinators from different objects

    - by eed3si9n
    Given a family of objects that implement parser combinators, how do I combine the parsers? Since Parsers.Parser is an inner class, and in Scala inner classes are bound to the outer object, the story becomes slightly complicated. Here's an example that attempts to combine two parsers from different objects:

        import scala.util.parsing.combinator._

        class BinaryParser extends JavaTokenParsers {
          def anyrep: Parser[Any] = rep(any)
          def any: Parser[Any] = zero | one
          def zero: Parser[Any] = "0"
          def one: Parser[Any] = "1"
        }

        object LongChainParser extends BinaryParser {
          def parser1: Parser[Any] = zero~zero~one~one
        }

        object ShortChainParser extends BinaryParser {
          def parser2: Parser[Any] = zero~zero
        }

        object ExampleParser extends BinaryParser {
          def parser: Parser[Any] = (LongChainParser.parser1 ||| ShortChainParser.parser2) ~ anyrep
          def main(args: Array[String]) {
            println(parseAll(parser, args(0)))
          }
        }

    This results in the following error:

        <console>:11: error: type mismatch;
         found   : ShortChainParser.Parser[Any]
         required: LongChainParser.Parser[?]
               def parser: Parser[Any] = (LongChainParser.parser1 ||| ShortChainParser.parser2) ~ anyrep

    I've found the solution to this problem already, but since it was brought up recently on the scala-user ML (Problem injecting one parser into another), it's probably worth putting it here too.

    Read the article

  • Temporarily disable an AD server

    - by 3molo
    Topology and setup: we have main office A and branch office (abroad) B. Our ISP somehow messed up the MPLS, and offices A and B will not be connected for a few days. At location B we have an AD server (the other two ADs are at location A). Location A also has an Exchange server.

    The problems: a few users at A have problems logging in to their computers running Windows XP; the logon process kind of hangs at "Applying computer policies". Additionally, I can't start the Exchange Management Shell; it fails on get-recipient because the AD abroad (location B) is unreachable.

    Solution? I could delete the AD at B, but I'm pretty sure it would be a hassle to re-join it, and since the office is abroad it's not an option to just go there and reinstall and re-join it. I now wonder how, in location A's primary and secondary ADs, I can temporarily disable the AD at location B.

    Read the article
