Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

  • jQuery Mobile app focus-based navigation stops working after switching between pages

    - by nawar
    As much as I would like to expand on the details here, I am not able to find relevant information about the root cause of this problem. I am having this issue with my BlackBerry web app, which I built using jQuery Mobile. After a few page-to-page navigations, the application becomes unresponsive on the destination page and I am not able to scroll up/down using the touchpad. If someone has had this problem or has some clue to the resolution, that would be helpful. Edit: after doing some research I was able to narrow down the cause of the issue to focus-based navigation: the page elements (buttons, input fields, etc.) lose focus after a few transitions between pages. Edit: I had to switch back to cursor-based navigation, as it is much faster and does not have the issue I faced with focus-based navigation; I did this by removing the entry <rim:navigation mode="focus"/> from the config.xml file. I found this entry on the BlackBerry forums, but it hasn't solved my problem despite the fact that I upgraded my WebWorks SDK from 1.5 to 2.0: http://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/Focus-based-navigation-hangs-device/td-p/455600 Thanks

  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little too trigger-happy with when running INSERT scripts to set it up. The schema looks like this: ID UNIQUEIDENTIFIER, TYPE_INT SMALLINT, SYSTEM_VALUE SMALLINT, NAME VARCHAR, MAPPED_VALUE VARCHAR. It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and ID is not. Now, I could probably make a script to clean this up that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, TRUNCATEs the original table and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
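
    One commonly suggested pattern for exactly this situation on SQL Server (UNIQUEIDENTIFIER suggests that is the platform) is to number the rows in each duplicate group and delete everything after the first. The sketch below is an illustration only: the table name DUPLICATED_LOOKUP is made up, and the ORDER BY just picks an arbitrary survivor per group.

        -- A sketch, assuming SQL Server and a made-up table name.
        ;WITH ranked AS (
            SELECT ID,
                   ROW_NUMBER() OVER (
                       PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                       ORDER BY ID
                   ) AS rn
            FROM DUPLICATED_LOOKUP
        )
        DELETE FROM ranked
        WHERE rn > 1;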

  • IE6 is duplicating characters, can't figure out why. Suggestions?

    - by Paul
    The problem is on http://www.preownedweddingdresses.com/. We have a dress slider at the bottom; selecting a tab shows different dresses. It works fine everywhere else, but for some reason, in IE6, the letters "ls" (from the tab "Best Deals") are duplicated inside the content and cause rendering issues. I've yet to find anything that fixes this, or anything that can be blamed for causing it. I've changed the letters at the end of "Best Deals", and the duplicated letters change as well. Open to any suggestions.

  • Do I have to duplicate this function? - jQuery

    - by Josh
    I'm using this function to create a transparent overlay of information over the current div for a web-based mobile app. Background: I'm using jQTouch, so I have separate divs rather than individual page loads. $(document).ready(function() { $('.infoBtn').click(function() { $('#overlay').toggleFade(400); return false; }); }); Understanding that JS runs sequentially: when I click the button on the first div, the function works fine. When I go to the next div and click the same button, nothing "happens" while that div is displayed, but if I go back to the first div, the overlay has actually been triggered on that page. So I duplicated the function and changed the CSS selector names, and it works for both. But do I have to do this for each use? Is there a way to use the same selectors but load the different content in each variation?
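
    One common way around this is a single handler that resolves the overlay relative to the button that was clicked, so the same selectors work on every page div. This is a sketch, not the poster's code: it assumes each jQTouch page div contains its own .infoBtn button and its own .overlay element, and those class names are made up.

        $(document).ready(function () {
            $('.infoBtn').click(function () {
                // find the overlay inside the same page div as the clicked button;
                // toggleFade is the plugin call from the original snippet, kept as-is
                $(this).closest('div').find('.overlay').toggleFade(400);
                return false;
            });
        });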

  • Find duplicate lines based on some delimited fields on a line

    - by Oliv
    Hello, I have a file whose lines have fields delimited by "|". I have to extract the lines that are identical based on some of the fields (i.e. find lines which contain the same values for fields 1, 2, 3, 12 and 13). The contents of the other fields have no importance for the search, but the extracted lines have to be output complete. Can anyone tell me how I can do that in KSH scripting (for example, a script with some arguments (order dependent) that define the field separator and the fields which have to be compared to find duplicate lines in the input file)? Thanks in advance and kind regards, Oli
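
    A sketch of one common approach, not a finished script: awk builds a key from the chosen fields, counts how often each key occurs, and prints every complete line whose key appears more than once. The field numbers and the separator are hard-coded here rather than taken from script arguments, and the script/file names are made up.

        #!/bin/ksh
        # Usage (hypothetical): ./find_dups.ksh input.txt
        awk -F'|' '
            { key = $1 FS $2 FS $3 FS $12 FS $13
              count[key]++
              lines[key] = lines[key] $0 ORS }
            END { for (k in count) if (count[k] > 1) printf "%s", lines[k] }
        ' "$1"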

  • Is there a better way of replacing duplicates in a list (Python)?

    - by myeu2
    Given a list l1: ['a', 'b', 'c', 'a', 'a', 'b'], the desired output is: ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']. I created the following code to get the output. It's messy: for index in range(len(l1)): counter = 1 list_of_duplicates_for_item = [dup_index for dup_index, item in enumerate(l1) if item == l1[index] and l1.count(l1[index]) > 1] for dup_index in list_of_duplicates_for_item[1:]: l1[dup_index] = l1[dup_index] + '_' + str(counter) counter = counter + 1 Is there a more Pythonic way of doing this? I couldn't find anything on the web.
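
    One more idiomatic possibility (a sketch, not the only answer, and the function name is made up): walk the list once, keep a per-value counter in a dict, and rename each repeat as it is seen. It produces the output shown above for the example list.

        def label_duplicates(items):
            """Return a new list where the 2nd, 3rd, ... occurrence of a value gets a _1, _2, ... suffix."""
            seen = {}
            result = []
            for item in items:
                if item in seen:
                    seen[item] += 1
                    result.append('%s_%d' % (item, seen[item]))
                else:
                    seen[item] = 0
                    result.append(item)
            return result

        print(label_duplicates(['a', 'b', 'c', 'a', 'a', 'b']))
        # ['a', 'b', 'c', 'a_1', 'a_2', 'b_1']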

  • Database normalization and duplicate values

    - by bretddog
    Consider a Parent / Child / GrandChild structure in a database table schema, or an even deeper hierarchy, all within the same aggregate. One table, DAYS, keeps a single row per day and has a "Date" field. This is the root table, or maybe a child of the root. No row can ever be deleted from this table. In this case, however complex my table schema looks and however far away in the hierarchy any other table is, is there any reason why any other table would hold a Date value? Can't it instead just have a FK to the DAYS table? I obviously assume that these date fields are never created before the corresponding date exists in the DAYS table. I'm thinking of just the date part as relevant, not the time part; not sure if all databases can store these separately. That's maybe relevant, but not really the focus of the question.
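
    For concreteness, a minimal sketch of the design being described (every name other than DAYS is made up): child tables reference the day row by key instead of carrying their own date column.

        CREATE TABLE DAYS (
            day_id   INT  NOT NULL PRIMARY KEY,
            day_date DATE NOT NULL UNIQUE   -- alternatively, day_date itself could be the primary key
        );

        CREATE TABLE CHILD_EVENT (
            event_id INT NOT NULL PRIMARY KEY,
            day_id   INT NOT NULL REFERENCES DAYS (day_id)  -- FK to DAYS instead of a duplicated date value
        );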

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there is a series of .tmp files. What are these files, and what feature of Hudson do they support? Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, the task scanner, etc., the job appears to hang for a long period of time. While monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that the files in this path stopped changing. Some key configuration points of the job: it is tied to a specific slave node; it builds in a 'custom workspace'; it triggers a downstream job that builds in the same custom workspace on the same slave node.

  • Query to find duplicate items across two columns of a table

    - by Rico
    I have this table (columns Antecedent and Consequent, each holding comma-separated items):

        Antecedent   Consequent
        I1           I2
        I1           I1,I2,I3
        I1           I4,I1,I3,I4
        I1,I2        I1
        I1,I2        I1,I4
        I1,I2        I1,I3
        I1,I4        I3,I2
        I1,I2,I3     I1,I4
        I1,I3,I4     I4

    As you can see, it's pretty messed up. Is there any way I can remove a row if an item in its Consequent also exists in its Antecedent? For the input above, that would delete I1 / I1,I2,I3; I1 / I4,I1,I3,I4; I1,I2 / I1; I1,I2 / I1,I4; I1,I2 / I1,I3; I1,I2,I3 / I1,I4; and I1,I3,I4 / I4 (in each case an item from the Antecedent appears in the Consequent), leaving:

        Antecedent   Consequent
        I1           I2
        I1,I4        I3,I2

    Is there any way I can do that with a query?

  • SQLAlchemy Duplicated Commit

    - by PythonWolf
    Good morning, I'm currently facing a problem in my CherryPy application, in my own custom session module: when performing session.add(), the exact same object gets updated twice.

        cherrypy.request.SessionManager.user_data = user
        try:
            db_session.add(cherrypy.request.SessionManager)
            db_session.commit()

    will log:

        2011-06-21 09:16:48,991 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL SELECT ..... FROM "Clients_Users" WHERE "Clients_Users".username = %(username_1)s AND "Clients_Users".password = %(password_1)s LIMIT 1 OFFSET 0
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL {'password_1': '123', 'username_1': u'1'}
        2011-06-21 09:16:49,047 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,067 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,071 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT
        2011-06-21 09:16:49,093 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,108 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT

    Has anyone seen this before? P.S. This doesn't happen in the rest of the modules I have made.

  • eliminating duplicate Enum code

    - by Don
    Hi, I have a large number of Enums that implement this interface: /** * Interface for an enumeration, each element of which can be uniquely identified by its code */ public interface CodableEnum { /** * Get the element with a particular code * @param code * @return */ public CodableEnum getByCode(String code); /** * Get the code that identifies an element of the enum * @return */ public String getCode(); } A typical example is: public enum IMType implements CodableEnum { MSN_MESSENGER("msn_messenger"), GOOGLE_TALK("google_talk"), SKYPE("skype"), YAHOO_MESSENGER("yahoo_messenger"); private final String code; IMType (String code) { this.code = code; } public String getCode() { return code; } public IMType getByCode(String code) { for (IMType e : IMType.values()) { if (e.getCode().equalsIgnoreCase(code)) { return e; } } } } As you can imagine, these methods are virtually identical in all implementations of CodableEnum. I would like to eliminate this duplication, but frankly don't know how. I tried using a class such as the following: public abstract class DefaultCodableEnum implements CodableEnum { private final String code; DefaultCodableEnum(String code) { this.code = code; } public String getCode() { return this.code; } public abstract CodableEnum getByCode(String code); } But this turns out to be fairly useless because: an enum cannot extend a class; elements of an enum (SKYPE, GOOGLE_TALK, etc.) cannot extend a class; and I cannot provide a default implementation of getByCode(), because DefaultCodableEnum is not itself an Enum. I tried changing DefaultCodableEnum to extend java.lang.Enum, but this doesn't appear to be allowed. Any suggestions that do not rely on reflection? Thanks, Don
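
    One way the lookup is often shared without inheritance (a sketch only: the utility class name is made up, and it assumes the CodableEnum interface above) is a static generic helper that works for any enum implementing the interface, so each enum keeps only its constructor and getCode(). Note that it leans on Class.getEnumConstants(), which you may or may not count as reflection.

        public final class CodableEnums {
            private CodableEnums() {}

            /** Find the constant of the given enum type whose code matches, or null if none does. */
            public static <E extends Enum<E> & CodableEnum> E getByCode(Class<E> type, String code) {
                for (E e : type.getEnumConstants()) {
                    if (e.getCode().equalsIgnoreCase(code)) {
                        return e;
                    }
                }
                return null;
            }
        }

        // Hypothetical usage: IMType skype = CodableEnums.getByCode(IMType.class, "skype");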

  • Any ideas for avoiding duplicate code in C# and JavaScript

    - by ooo
    I have an ASP.NET MVC website where I build up most of the page using C#, for example building up HTML tables given a set of data from my view model. I also have a lot of JavaScript that then dynamically modifies these tables (adding a row, for example). The JavaScript code to add a new row looks extremely similar to the "rendering" code I have in C# that builds up the HTML table in the first place. Every time I change the C# code to add a new field, I have to remember to go back to the JavaScript code and do the same. Is there a better way here?

  • T-SQL Getting duplicate rows returned

    - by cBlaine
    The following code section is returning multiple rows for a few records. SELECT a.ClientID,ltrim(rtrim(c.FirstName)) + ' ' + case when c.MiddleName <> '' then ltrim(rtrim(c.MiddleName)) + '. ' else '' end + ltrim(rtrim(c.LastName)) as ClientName, a.MISCode, b.Address, b.City, dbo.ClientGetEnrolledPrograms(CONVERT(int,a.ClientID)) as Abbreviation FROM ClientDetail a JOIN Address b on(a.PersonID = b.PersonID) JOIN Person c on(a.PersonID = c.PersonID) LEFT JOIN ProgramEnrollments d on(d.ClientID = a.ClientID and d.Status = 'Enrolled' and d.HistoricalPKID is null) LEFT JOIN Program e on(d.ProgramID = e.ProgramID and e.HistoricalPKID is null) WHERE a.MichiganWorksData=1 I've isolated the issue to the ProgramEnrollments table. This table holds one-to-many relationships where each ClientID can be enrolled in many programs. So for each program a client is enrolled in, there is a record in the table. The final result set is therefore returning a row for each row in the ProgramEnrollments table based on these joins. I presume my join is the issue but I don't see the problem. Thoughts/Suggestions? Thanks, Chuck
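
    Since nothing from ProgramEnrollments or Program appears in the SELECT list, one way this kind of fan-out is commonly avoided is to drop those LEFT JOINs or, if they are really meant to restrict the clients returned, to replace them with an EXISTS test so the one-to-many enrollments cannot multiply the rows. A sketch only (shortened column list, and note it changes the outer join into a filter, so it is only appropriate if that filter is intended):

        SELECT a.ClientID, c.FirstName, c.MiddleName, c.LastName, a.MISCode, b.Address, b.City,
               dbo.ClientGetEnrolledPrograms(CONVERT(int, a.ClientID)) AS Abbreviation
        FROM ClientDetail a
        JOIN Address b ON a.PersonID = b.PersonID
        JOIN Person  c ON a.PersonID = c.PersonID
        WHERE a.MichiganWorksData = 1
          AND EXISTS (SELECT 1
                      FROM ProgramEnrollments d
                      WHERE d.ClientID = a.ClientID
                        AND d.Status = 'Enrolled'
                        AND d.HistoricalPKID IS NULL);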

  • Refactor the following two C++ methods to move out duplicate code

    - by ossandcad
    I have the following two methods that (as you can see) are similar in most of their statements except for one (see below for details): unsigned int CSWX::getLineParameters(const SURFACE & surface, vector<double> & params) { VARIANT varParams; surface->getPlaneParams(varParams); // this is the line of code that is different SafeDoubleArray sdParams(varParams); for( int i = 0 ; i < sdParams.getSize() ; ++i ) { params.push_back(sdParams[i]); } if( params.size() > 0 ) return 0; return 1; } unsigned int CSWX::getPlaneParameters(const CURVE & curve, vector<double> & params) { VARIANT varParams; curve->get_LineParams(varParams); // this is the line of code that is different SafeDoubleArray sdParams(varParams); for( int i = 0 ; i < sdParams.getSize() ; ++i ) { params.push_back(sdParams[i]); } if( params.size() > 0 ) return 0; return 1; } Is there any technique that I can use to move the common lines of code of the two methods out into a separate method that could be called from the two variations, or possibly combine the two methods into a single method? The restrictions are: the classes SURFACE and CURVE are from 3rd-party libraries and hence unmodifiable (if it helps, they are both derived from IDispatch), and there are even more similar classes (e.g. FACE) that could fit into this "template" (not a C++ template, just the flow of lines of code). I know the following could (possibly?) be implemented as solutions but am really hoping there is a better one: I could add a 3rd parameter to the 2 methods, e.g. an enum, that identifies the 1st parameter (e.g. enum::input_type_surface, enum::input_type_curve), or I could pass in an IDispatch, try a dynamic_cast, test which cast is non-NULL and do an if-else to call the right method (e.g. getPlaneParams() vs. get_LineParams()). The following is not a restriction but would be a requirement because of my teammates' resistance: not implementing a new class that inherits from SURFACE/CURVE etc. (they would much prefer to solve it using the enum solution I stated above).
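
    One possible shape for the shared helper, shown with C++11 lambdas (a small functor object would do the same before C++11). This is a sketch only: it assumes the VARIANT, SafeDoubleArray, SURFACE and CURVE types from the question and the usual headers they come from, and the helper name is made up. The single statement that differs, namely which third-party call fills the VARIANT, is passed in as a callable, so neither an enum parameter nor a dynamic_cast is needed.

        #include <vector>

        template <typename FillVariant>
        unsigned int extractParameters(FillVariant fillVariant, std::vector<double>& params)
        {
            VARIANT varParams;
            fillVariant(varParams);   // the only statement that differs between the two methods

            SafeDoubleArray sdParams(varParams);
            for (int i = 0; i < sdParams.getSize(); ++i) {
                params.push_back(sdParams[i]);
            }
            return params.size() > 0 ? 0 : 1;
        }

        // Hypothetical call sites inside the existing members of CSWX:
        //   return extractParameters([&](VARIANT& v) { surface->getPlaneParams(v); }, params);
        //   return extractParameters([&](VARIANT& v) { curve->get_LineParams(v); }, params);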

  • How to find the duplicate and highest value in an array

    - by Jerry
    Hello guys, I have an array like this: array('a'=>'2', 'b'=>'5', 'c'=>'6', 'd'=>'6', 'e'=>'2'); The array values might be different depending on the $_POST variables. My question is how to find the highest value in my array and return its index keys. In my case, I need to get 'c' and 'd' and the value of 6. Not sure how to do this. Any help would be appreciated. Thanks.
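
    A short sketch of one way to do it (assuming numeric-string values as in the example; $data is a made-up variable name): max() finds the highest value and array_keys() with a search value returns every key that holds it.

        <?php
        $data = array('a' => '2', 'b' => '5', 'c' => '6', 'd' => '6', 'e' => '2');

        $highest = max($data);                  // '6'
        $keys    = array_keys($data, $highest); // array('c', 'd') -- all keys holding the max

        print_r($keys);
        echo $highest;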

  • How to determine on which file system a file was created in Java

    - by rafrafUk
    Hi everyone! I get files in different formats coming from different systems that I need to import into our database. Part of the import process is to check the line length to make sure the format is correct. We seem to be having issues with files coming from UNIX systems, where one character is added. I suspect this is due to the line ending (carriage return / line feed) being encoded differently on the UNIX and Windows platforms. Is there a way to detect on which file system a file was created, other than checking the last character on the line? Or maybe a way of reading the files as text and not binary, which I suspect is the issue? Thanks guys!
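
    The originating file system is not recorded anywhere portable, but the line-ending convention can be sniffed directly, which is usually what matters for the length check. A sketch (class and method names are made up; uses java.nio.file, so Java 7+): look at the first line break and see whether it is preceded by a carriage return.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class LineEndingSniffer {

            /** Return a rough description of the line-ending style of the given file. */
            public static String detect(String path) throws IOException {
                byte[] bytes = Files.readAllBytes(Paths.get(path));
                for (int i = 0; i < bytes.length; i++) {
                    if (bytes[i] == '\n') {
                        return (i > 0 && bytes[i - 1] == '\r') ? "CRLF (Windows-style)" : "LF (Unix-style)";
                    }
                }
                return "no line break found";
            }

            public static void main(String[] args) throws IOException {
                System.out.println(detect(args[0]));
            }
        }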

  • NHibernate LINQ group by duplicate fields

    - by jaspion
    Hello, I have a simple query like: from e in endContratoRepository.GetAll() where e.Contrato.idContrato == contrato.idContrato && e.Volume.idVolume == 1 && group e by new { e.dpto.idDepartamento, e.centroCusto.idCentroCusto } into grp select new Entidade { Contagem = grp.Count(), centroCusto = grp.Key.idCentroCusto, dpto = grp.Key.idDepartamento } The problem: the fields that are in the group by are duplicated in the generated query. The fields centroCusto and dpto appear twice, so when I try to get the field centroCusto, I get dpto. Does anyone know how to solve this? Thanks a lot!

  • Prevent log4net from sending duplicate issues via SMTP

    - by jrEwing
    Hi, we have bridged our log4net with Jira using SMTP. Now we are worried about what could happen to the Jira server, since the site is public, if we get a lot of issues in the production environment. We have already filtered on Critical and Fatal, but we would like either some accumulator service on log4net or a plain filter which identifies repeating issues and prevents them from being sent via email. Preferably without having to change the error-reporting code, so a config-only solution would be best. I guess dumping the log into a DB and then creating a separate listener with some smart code would be a (pricey) alternative.
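
    log4net has no built-in duplicate-suppression filter, but a small custom filter can be plugged into the SmtpAppender from configuration. The sketch below is written from memory and should be checked against the log4net version in use; the class name, the WindowMinutes property and the choice of keying on the rendered message are all assumptions made here, not anything from the original post.

        using System;
        using System.Collections.Generic;
        using log4net.Core;
        using log4net.Filter;

        public class DuplicateSuppressionFilter : FilterSkeleton
        {
            private readonly Dictionary<string, DateTime> lastSeen = new Dictionary<string, DateTime>();

            // How long (in minutes) a repeated message is suppressed; settable from the XML config.
            public int WindowMinutes { get; set; }

            public DuplicateSuppressionFilter()
            {
                WindowMinutes = 60;
            }

            public override FilterDecision Decide(LoggingEvent loggingEvent)
            {
                string key = loggingEvent.RenderedMessage ?? string.Empty;
                DateTime now = DateTime.UtcNow;

                lock (lastSeen)
                {
                    DateTime previous;
                    if (lastSeen.TryGetValue(key, out previous) && (now - previous).TotalMinutes < WindowMinutes)
                    {
                        return FilterDecision.Deny;     // same message seen recently: do not email it again
                    }
                    lastSeen[key] = now;
                }
                return FilterDecision.Neutral;          // let the remaining filters and the appender decide
            }
        }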

  • Sorting a multidimensional array on an inner value in PHP [duplicate]

    - by Silver89
    This question already has an answer here: Reference: all basic ways to sort arrays and data in PHP. Say I have the following array; how can I sort it on sort_by?

        Array (
            [10] => Array ( [Masthead_slide] => Array ( [id] => 1456464564 [sort_by] => 1 ) )
            [6]  => Array ( [Masthead_slide] => Array ( [id] => 645454     [sort_by] => 10 ) )
            [7]  => Array ( [Masthead_slide] => Array ( [id] => 4547       [sort_by] => 5 ) )
        )
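
    A sketch using uasort() (the variable name $slides is made up): the callback compares the inner sort_by values, and uasort() keeps the outer keys (10, 6, 7), which plain usort() would discard.

        <?php
        $slides = array(
            10 => array('Masthead_slide' => array('id' => 1456464564, 'sort_by' => 1)),
            6  => array('Masthead_slide' => array('id' => 645454,     'sort_by' => 10)),
            7  => array('Masthead_slide' => array('id' => 4547,       'sort_by' => 5)),
        );

        uasort($slides, function ($a, $b) {
            return $a['Masthead_slide']['sort_by'] - $b['Masthead_slide']['sort_by'];
        });

        print_r(array_keys($slides)); // 10, 7, 6 -- ordered by sort_by (1, 5, 10)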

  • Restrict duplicate rows in specific columns in MySQL

    - by JPro
    I have a query like this:

        select testset, count(distinct results.TestCase) as runs,
               Sum(case when Verdict = "PASS" then 1 else 0 end) as pass,
               Sum(case when Verdict <> "PASS" then 1 else 0 end) as fails,
               Sum(case when latest_issue <> "NULL" then 1 else 0 end) as issues,
               Sum(case when latest_issue <> "NULL" and issue_type = "TC" then 1 else 0 end) as TC_issues
        from results join testcases on results.TestCase = testcases.TestCase
        where platform = "T1_PLATFORM" AND testcases.CaseType = "M2" and testcases.dummy <> "flag_1"
        group by testset order by results.TestCase

    The result set I get is:

        testset  runs  pass  fails  issues  TC_issues
        T1       66    125   73     38      33
        T2       18    19    16     16      15
        T3       57    58    55     55      29
        T4       52    43    12     0       0
        T5       193   223   265    130     22
        T6       23    12    11     0       0

    My problem is that this results table holds test cases that ran multiple times, so I can restrict the runs using the distinct TestCase count, but for the pass and fail counts, since I am using case expressions, I am unable to eliminate the duplicates. Is there any way to achieve what I want? Any help please? Thanks.
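
    A sketch of one common way to get deduplicated counts (only the runs/pass/fails columns are shown; the issue columns would follow the same pattern): collapse the repeated runs of each test case in a derived table first, then aggregate per testset. Note the assumption built into it: a test case counts as passed if any of its runs passed, which may or may not be the rule you want.

        SELECT testset,
               COUNT(*)       AS runs,
               SUM(passed)    AS pass,
               SUM(1 - passed) AS fails
        FROM (
            SELECT testset,
                   results.TestCase,
                   MAX(CASE WHEN Verdict = 'PASS' THEN 1 ELSE 0 END) AS passed
            FROM results
            JOIN testcases ON results.TestCase = testcases.TestCase
            WHERE platform = 'T1_PLATFORM'
              AND testcases.CaseType = 'M2'
              AND testcases.dummy <> 'flag_1'
            GROUP BY testset, results.TestCase
        ) AS per_case
        GROUP BY testset;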

  • How can I open another tab in the browser from the Code Behind [duplicate]

    - by Daniel Powell
    This question already has an answer here: Response.Redirect to new window. I have an ASP.NET project that involves opening another tab in the browser from the VB.NET code-behind. I have tried to use the WebBrowser control to open the new tab, and I have also tried the ScriptManager, but I have not been able to open the new tab. The button I am using is a custom button in a DevExpress GridView, and I am handling its click with this code: Private Sub ASPxGridView2_CustomButtonCallback(sender As Object, e As DevExpress.Web.ASPxGridView.ASPxGridViewCustomButtonCallbackEventArgs) Handles ASPxGridView2.CustomButtonCallback If e.ButtonID <> "customButton" Then Return End If Dim neededID = ASPxGridView2.GetRowValues(e.VisibleIndex, "target") Dim id = CStr(neededID) Dim url = ("targetPage.aspx" + "?ID=" + id) Dim wb As WebBrowser wb.Navigate(url,True) End Sub I'm not sure why this does not work; any suggestion as to how I should open a new browser tab, with or without the WebBrowser control, works for this project. EDIT: I cannot use JavaScript with this ASPxGridView, so I need an answer that doesn't involve modifying the JavaScript except through the VB.NET code-behind.

  • Copy existing XML, duplicate element and modify

    - by Robert
    Hi, I have a tricky XSL problem at the moment. I need to copy the existing XML, duplicate a certain element (plus its child elements) and modify the values of two child elements. The modifications are: divide the value of the 'value' element by 110, and change the value of the 'type' element from 'normal' to 'discount'. This is currently what I have. Current XML:

        <dataset>
          <data>
            <prices>
              <price>
                <value>50.00</value>
                <type>normal</type>
              </price>
            </prices>
          </data>
        </dataset>

    Expected result:

        <dataset>
          <data>
            <prices>
              <price>
                <value>50.00</value>
                <type>normal</type>
              </price>
              <price>
                <value>45.00</value>
                <type>discount</type>
              </price>
            </prices>
          </data>
        </dataset>

    Any takers? I've gotten as far as copying the desired 'price' element using copy-of, but I'm stuck as to how to modify it next.
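
    A sketch of the usual pattern for this: an identity transform plus a template matching the normal price that re-emits it and appends the modified copy. One caveat: the question says to divide by 110, but the expected output (45.00 from 50.00) looks like 90% of the original, so the arithmetic below uses the 90% figure from the sample; swap in value div 110 if that is really what is wanted.

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>

          <!-- identity template: copy everything unchanged by default -->
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

          <!-- for each normal price, keep it and append a modified duplicate -->
          <xsl:template match="price[type = 'normal']">
            <xsl:copy-of select="."/>
            <price>
              <value><xsl:value-of select="format-number(value * 0.9, '0.00')"/></value>
              <type>discount</type>
            </price>
          </xsl:template>
        </xsl:stylesheet>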

  • Bash - replacing targeted files with a specific file, whitespace in directory names

    - by Dispelwolf
    I have a large directory tree of files, and I am using the following script to list files with a searched-for name and replace them with a specific file. The problem is, I don't know how to write the createList() for-loop correctly to account for whitespace in a directory name. If no directories have spaces, it works fine. The output is a list of files and then a list of "cp" commands, but it splits paths containing spaces into separate entries. aindex=1 files=( null ) [ $# -eq 0 ] && { echo "Usage: $0 filename" ; exit 500; } createList(){ f=$(find . -iname "search.file" -print) for i in $f do files[$aindex]=$(echo "${i}") aindex=$( expr $aindex + 1 ) done } writeList() { for (( i=1; i<$aindex; i++ )) do echo "#$i : ${files[$i]}" done for (( i=1; i<$aindex; i++ )) do echo "/usr/bin/cp /cygdrive/c/testscript/TheCorrectFile.file ${files[$filenumber]}" done } createList writeList
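
    The usual whitespace-safe version of that loop (a sketch, not a drop-in rewrite of the whole script) uses find -print0 with read -d '' so each path lands in exactly one array element, and quotes every expansion:

        #!/bin/bash
        [ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }

        files=()
        while IFS= read -r -d '' path; do
            files+=("$path")                 # one array element per path, spaces preserved
        done < <(find . -iname "search.file" -print0)

        for i in "${!files[@]}"; do
            echo "#$((i + 1)) : ${files[$i]}"
            echo /usr/bin/cp /cygdrive/c/testscript/TheCorrectFile.file "${files[$i]}"
        done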
