Search Results

Search found 7034 results on 282 pages for 'inverse match'.

Page 13/282

  • using javascript replace() to match the last occurrence of a string

    - by Dave
    I'm building an 'add new row' function for product variations, and I'm struggling with the regex required to match the form attribute keys. So, I'm basically cloning rows, then incrementing the keys, like this (CoffeeScript): newrow = oldrow.find('select, input, textarea').each -> this.name = this.name.replace(/\[(\d+)\]/, (str, p1) -> "[" + (parseInt(p1, 10) + 1) + "]" ) this.id = this.id.replace(/\_(\d+)\_/, (str, p1) -> "_" + (parseInt(p1, 10) + 1) + "_" ) .end() This correctly increments a field with a name of product[variations][1][name], turning it into product[variations][2][name]. BUT each variation can have multiple options (e.g. color can be red, blue, green), so I need to be able to turn this product[variations][1][options][2][name] into product[variations][1][options][3][name], leaving the variation key alone. What regex do I need to match only the last occurrence of a key (the options key)?
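
    One common trick is to put a greedy prefix in front of the bracketed number: the greedy .* swallows everything it can, so the [digits] group that follows is forced onto the last numeric key. This is only an illustrative sketch of that approach in Python, not the original CoffeeScript; the equivalent JavaScript pattern would be /(.*)\[(\d+)\]/ with the same incrementing callback.

      import re

      def increment_last_index(name):
          # The greedy ".*" consumes as much as possible, so the "[digits]"
          # that follows is forced to be the last numeric key in the name.
          return re.sub(r'^(.*)\[(\d+)\]',
                        lambda m: m.group(1) + '[' + str(int(m.group(2)) + 1) + ']',
                        name)

      print(increment_last_index('product[variations][1][options][2][name]'))
      # -> product[variations][1][options][3][name]
      print(increment_last_index('product[variations][1][name]'))
      # -> product[variations][2][name]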

    Read the article

  • Setting up Matcher for String phrase match in file

    - by randomCoder
    Having trouble figuring out how to match a phrase string to a phrase in a file stream. The file I'm dealing with contains random words such as: 3 little pigs built houses and 1 little pig went to the market etc., for many lines. Using "little pig" as my pattern and matcher.find() I can locate 2 matches: "little pig" and "little pigs". However, I only want it to match "little pig". What can I do? I thought about using matcher.lookingAt(), but I wouldn't know how to set a proper region when I can't rely on the phrases I'm matching being on separate lines in the file.
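
    The usual fix is a word boundary, so that "pig" cannot be followed by another word character. A minimal sketch of the idea in Python; Java's Pattern supports the same \b metacharacter.

      import re

      text = "3 little pigs built houses and 1 little pig went to the market"
      # \b refuses to match between "pig" and "s", so "little pigs" is skipped.
      pattern = re.compile(r"\blittle pig\b")
      print(pattern.findall(text))   # ['little pig']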

    Read the article

  • How to properly match the following message id format in a case statement

    - by hsatterwhite
    I'm trying to get this regex pattern working in a case statement to match a particular type of ID, which could be passed to the script. I need to match the exact number of alphanumeric characters with the dashes to differentiate this message id from anything else, which may be passed to this bash script. An example of the message id format: c7c3e910-c9d2-71e1-0999-0aec446b0000 #!/bin/bash until [ -z "$1" ] do case "$1" in "") echo "No value passed" ;; [a-z0-9]\{8\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{4\}-[a-z0-9]\{12\}) echo "Found message ID: $1" ;; *) echo "Server $1" ;; esac shift done
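
    One thing worth noting: bash case patterns are shell globs, not regular expressions, which is why the quantified character classes never match; a regex test such as bash's [[ $1 =~ ... ]] is the usual route. The message-ID pattern itself, checked here in Python purely as an illustrative sketch:

      import re

      MSG_ID = re.compile(r'^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-'
                          r'[a-z0-9]{4}-[a-z0-9]{12}$')

      for arg in ('c7c3e910-c9d2-71e1-0999-0aec446b0000', 'server01'):
          if MSG_ID.match(arg):
              print('Found message ID:', arg)
          else:
              print('Server', arg)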

    Read the article

  • MySQL Full Text Search Boolean Mode Partial Match

    - by Rob
    I've found boolean mode of MySQL full text search useful, however there are a couple of things I can't seem to figure out how to achieve. For instance imagine I have a full text column containing the words "Steve's Javascript Tutorial - Part One". I would like to match this for each of the following searches: "tutorials", "javascript tutorials", "java", "java script", "script" Imagine that each of those searches is simply assigned to a variable in whatever language may be being used (I always use PHP). How could I modify this to make sure that Steve's article is returned on each of those searches? MATCH (article_title) AGAINST ('"+$variable+"*' IN BOOLEAN MODE)

    Read the article

  • strstr whole string match

    - by clay
    I'm trying to match the whole string and not just part of it. For instance, if the needle is 2, I would like to match just the string 2 and not 20, 02, or 22 or anything related. I'm using strstr as: #include <stdio.h> #include <string.h> int main(int argc, char *argv[]) { FILE *file; char l[BUFSIZ]; int linenumber = 1; char term[6] = "2"; file = fopen(argv[1], "r"); if(file != NULL) { while(fgets(l, sizeof(l), file)){ if(strstr(l, term) != NULL) { printf("Search Term Found at %d!\n", linenumber); } ++linenumber; } } else { perror(argv[1]); } fclose(file); return 0; }
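
    strstr finds substrings, so the needle "2" also hits "20", "02" and "22". A whole-word match needs the hit to be bounded by whitespace or the ends of the line; in C this can be done by inspecting the characters around the strstr result. The same idea as an illustrative Python sketch (the file name is hypothetical):

      import re

      term = '2'
      # The term must not touch other non-space characters, so a line
      # containing "2" matches but "20", "02" or "22" does not.
      whole = re.compile(r'(?<!\S)' + re.escape(term) + r'(?!\S)')

      with open('numbers.txt') as fh:            # hypothetical input file
          for lineno, line in enumerate(fh, start=1):
              if whole.search(line):
                  print('Search term found at line %d!' % lineno)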

    Read the article

  • .NET Regular Expressions - Shorter match

    - by Xavier
    Hi Guys, I have a question regarding .NET regular expressions and how the engine defines matches. I am writing: var regex = new Regex("<tr><td>1</td><td>(.+)</td><td>(.+)</td>"); if (regex.IsMatch(str)) { var groups = regex.Match(str).Groups; var matches = new List<string>(); for (int i = 1; i < groups.Count; i++) matches.Add(groups[i].Value); return matches; } What I want is to get the content of the two following tags. Instead it returns: [0]: Cell 1</td><td>Cell 2</td>... [1]: Last row of the table Why does the first group capture </td> and the rest of the string instead of stopping at the first </td>?
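
    The (.+) groups are greedy, so the engine stretches them to the last closing tag it can still satisfy. The two standard fixes are a lazy quantifier (.+?) or the stricter ([^<]+). A small Python sketch of the difference; greedy and lazy quantifiers behave the same way in .NET:

      import re

      row = ('<tr><td>1</td><td>Cell 2</td><td>Cell 3</td>'
             '<td>Cell 4</td><td>Last row of the table</td>')

      greedy = re.search(r'<tr><td>1</td><td>(.+)</td><td>(.+)</td>', row)
      print(greedy.groups())
      # ('Cell 2</td><td>Cell 3</td><td>Cell 4', 'Last row of the table')

      lazy = re.search(r'<tr><td>1</td><td>(.+?)</td><td>(.+?)</td>', row)
      print(lazy.groups())
      # ('Cell 2', 'Cell 3') - each group now stops at the first closing tag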

    Read the article

  • Multiple LIKE, OR MySql Queries Match

    - by Codex73
    Search for: 'chemist' Problem: query which will match a string like 'onechemist' but not 'chemist'. SELECT id,name FROM `records` WHERE name LIKE '%". mysql_real_escape_string($q) ."%' This alternate try won't work: SELECT id,name FROM `records` WHERE name LIKE '%". mysql_real_escape_string($q) ."%' OR name LIKE '". mysql_real_escape_string($q) ."%' OR name LIKE '%". mysql_real_escape_string($q) ."' How could I compile the above into one single query that will match any field which has the string or optimize the query into a better expression?

    Read the article

  • How to match data between columns to do the comparison

    - by NCC
    I do not really know how to explain this in a clear manner. Please see the attached image. I have a table with 4 different columns, 2 of which are identical to each other (NAME and QTY). The goal is to compare the differences between the QTY values; however, in order to do it I must: 1. sort the data, 2. match the data item by item. This is not a big deal with a small table, but with 10 thousand rows it takes me a few days to do it. Please help me, I appreciate it. My logic is: 1. Sort the first two columns (NAME and QTY). 2. For each value of the second two columns (NAME and QTY), check if it matches the first two columns. If true, insert the value. 3. For values that are not matched, insert new rows with an offset from the rows that are in the first two columns but not in the second two columns.
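
    Purely as an illustrative sketch of the matching logic (the sample data below is made up, not taken from the question): keying one pair of columns by NAME turns the comparison into a single pass instead of a manual sort-and-align. In a spreadsheet the same role is typically played by a lookup formula.

      # Made-up data standing in for the two (NAME, QTY) column pairs.
      first_pair  = [('apples', 10), ('pears', 4), ('plums', 7)]
      second_pair = [('pears', 5), ('apples', 10), ('cherries', 2)]

      second_by_name = dict(second_pair)
      first_names = {name for name, _ in first_pair}

      # Compare quantities name by name for rows present in the first pair.
      for name, qty in first_pair:
          if name in second_by_name:
              print(name, 'difference:', qty - second_by_name[name])
          else:
              print(name, 'is missing from the second pair of columns')

      # Rows only present in the second pair become new entries.
      for name, qty in second_pair:
          if name not in first_names:
              print(name, 'only in the second pair of columns, qty', qty)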

    Read the article

  • F# match char values

    - by rwallace
    I'm trying to match an integer expression against character literals, and the compiler complains about type mismatch. let rec read file includepath = let ch = ref 0 let token = ref 0 use stream = File.OpenText file let readch() = ch := stream.Read() let lex() = match !ch with | '!' -> readch() | _ -> token := !ch ch has to be an int because that's what stream.Read returns in order to use -1 as end of file marker. If I replace '!' with int '!' it still doesn't work. What's the best way to do this?

    Read the article

  • RegExp to match fraction

    - by user3627265
    I'm trying to perform a regex match on a fraction. The user will input a fraction, e.g. 1/4, 1 1/2, 10/2 and so on. I have tested this regex and it works, but the problem is that when I type in 10, 20, 30, 40 and so on, it does not recognize these values. This is my regex. As you can see, it first matches the integer, then the slash and lastly the integer after the slash. var check_zero_value = str1.match(/[1-9]\/[1-9]/g); if(!check_zero_value) { return false; } Any idea on this?
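
    The posted pattern requires a slash and allows only a single non-zero digit on each side, so plain integers like 10 or 20 can never match. A sketch of a more permissive pattern in Python (the same expression works as a JavaScript RegExp); note it accepts zeros, so tighten the character classes if that matters:

      import re

      # whole number ("10"), mixed number ("1 1/2") or bare fraction ("1/4")
      FRACTION = re.compile(r'^(?:\d+\s+)?\d+/\d+$|^\d+$')

      for value in ('1/4', '1 1/2', '10/2', '10', '20', 'abc'):
          print(value, '->', bool(FRACTION.match(value)))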

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time. When a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spill over tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation / overestimation of memory for the Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a Customer's State table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

      --Example provided by www.sqlworkshops.com
      drop table CustomersState
      go
      create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
      go
      insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
      update CustomersState set State = 'NY' where CustomerID % 100 != 1
      update CustomersState set State = 'WA' where CustomerID % 100 = 1
      go
      update statistics CustomersState with fullscan
      go

    Let's create a stored procedure that joins customers with the CustomersState table with a predicate on State.

      --Example provided by www.sqlworkshops.com
      create proc CustomersByState @State char(2) as
      begin
      declare @CustomerID int
      select @CustomerID = e.CustomerID from Customers e
      inner join CustomersState es on (e.CustomerID = es.CustomerID)
      where es.State = @State
      option (maxdop 1)
      end
      go

    Let's execute the stored procedure first with parameter value 'WA' – which will select 1% of data.

      set statistics time on
      go
      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'WA'
      go

    The stored procedure took 294 ms to complete. The stored procedure was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, and hence the memory estimation should be ok. There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

    Now let's execute the stored procedure with parameter value 'NY' – which will select 99% of data.

      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'NY'
      go

    The stored procedure took 2922 ms to complete. The stored procedure was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way different from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'WA' in our case. This underestimation will lead to spill over tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

    Let's recompile the stored procedure and then let's first execute the stored procedure with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

      exec sp_recompile CustomersByState
      go
      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'NY'
      go

    Now the stored procedure took only 1046 ms instead of 2922 ms. The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000, is similar to the actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'WA'
      go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value 'NY' for estimation (792000 rows) instead of parameter value 'WA' for estimation (8000 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'NY' in this case. This overestimation leads to poor performance of this Hash Match operation; it might also affect the performance of other concurrently executing queries requiring memory, and hence overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows of 8000.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

      --Example provided by www.sqlworkshops.com
      drop proc CustomersByState
      go
      create proc CustomersByState @State char(2) as
      begin
      declare @CustomerID int
      select @CustomerID = e.CustomerID from Customers e
      inner join CustomersState es on (e.CustomerID = es.CustomerID)
      where es.State = @State
      option (maxdop 1, recompile)
      end
      go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'WA'
      go
      exec CustomersByState 'NY'
      go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The stored procedure with parameter value 'WA' has good estimation like before. The estimated number of rows of 8000 is similar to the actual number of rows of 8000.

    The stored procedure with parameter value 'NY' also has good estimation and memory grant like before, because the stored procedure was recompiled with the current set of parameter values. The estimated number of rows of 792000 is similar to the actual number of rows of 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

      --Example provided by www.sqlworkshops.com
      drop proc CustomersByState
      go
      create proc CustomersByState @State char(2) as
      begin
      declare @CustomerID int
      select @CustomerID = e.CustomerID from Customers e
      inner join CustomersState es on (e.CustomerID = es.CustomerID)
      where es.State = @State
      option (maxdop 1, optimize for (@State = 'NY'))
      end
      go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

      --Example provided by www.sqlworkshops.com
      exec CustomersByState 'WA'
      go
      exec CustomersByState 'NY'
      go

    The stored procedure took 353 ms with parameter value 'WA'; this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value 'NY' has optimal execution time like before. The stored procedure with parameter value 'WA' has overestimation of rows because of the optimize for hint value of 'NY'. Unlike before, more memory was estimated for this stored procedure based on the optimize for hint value 'NY'. The stored procedure with parameter value 'NY' has good estimation because of the optimize for hint value of 'NY'. The estimated number of rows of 792000 is similar to the actual number of rows of 792000. The optimal amount of memory was estimated for this stored procedure based on the optimize for hint value 'NY'. There was no Hash Warning in SQL Profiler.

    This article covers underestimation / overestimation of memory for the Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance.

    When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Characteristics, what's the inverse of (x*(x+1))/2? [closed]

    - by Valmond
    In my game you can spend points to upgrade characteristics. Each characteristic has a formula like: A) out = in : for one point spent, one point gained (you spend 1 point on Force so your force goes from 5 to 6) B) out = last level (starting at 1) : so the first point spent earns you 1 point, the next point spent earns you an additional 2 and so on (+3,+4,+5...) C) The inverse of B) : You need to spend 1 point to earn one, then you need to spend 2 to earn another one and so on. I have already found the formula for calculating the actual level of B when points spent = x : charac = (x*(x+1))/2 But I'd like to know what the "reverse" version of B) (usable for C) is, i.e. if I have spent x points, how many have I earned if 1 spent gives 1, 1+2=3 gives 2, 1+2+3=6 gives 3 and so on. I know I can just calculate the numbers but I'd like to have the formula because it's neater and so that I can stick it in an Excel sheet for example... Thanks! P.S. I think I have nailed it down to something like charac = sqrt( x*m +k) but then I'm stuck doing number guessing for k and m and I feel I might be wrong anyway as I get close but never hit the spot.
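
    For what it's worth, the closed form drops out of solving n(n+1)/2 <= x for n; the floor takes care of points that only partially pay for the next level. Sketched in LaTeX:

      % Spending x points under C) buys n whole levels, where n is the
      % largest integer with n(n+1)/2 <= x:
      \[
        \frac{n(n+1)}{2} \le x
        \quad\Longrightarrow\quad
        n = \left\lfloor \frac{\sqrt{8x + 1} - 1}{2} \right\rfloor
      \]
      % Sanity checks: x = 1 -> 1, x = 3 -> 2, x = 6 -> 3, x = 10 -> 4.

    Without the floor, the same expression inverts B) exactly whenever x is a triangular number (1, 3, 6, 10, ...).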

    Read the article

  • Buggy Perl regular expression

    - by Tichomir Mitkov
    Hi there, I'm writing a program that has to get values from a file. In the file each line indicates an entity. Each entity has three values. For example: Value1 Value2 value3 I have a regular expression to match them: m/(.*?) (.*?) (.*?)/m; But it seems that the third value is never matched! The only way to match the third value is to add another value in the file and another "matching brackets" in the expression. But this does not satisfy me. Thanks in advance!
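
    The trailing (.*?) is lazy and sits at the very end of the pattern, so it is perfectly happy matching nothing. Anchoring the end of the pattern (or matching runs of non-whitespace) forces it to take the third value. An illustrative sketch in Python; the same fix applies to the Perl pattern:

      import re

      line = 'Value1 Value2 value3'

      print(re.match(r'(.*?) (.*?) (.*?)', line).groups())
      # ('Value1', 'Value2', '')  - the final lazy group happily matches nothing

      print(re.match(r'(.*?) (.*?) (.*?)$', line).groups())
      # ('Value1', 'Value2', 'value3')  - anchoring forces it to the end

      print(re.match(r'(\S+) (\S+) (\S+)', line).groups())
      # ('Value1', 'Value2', 'value3')  - or just match runs of non-space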

    Read the article

  • Regular Expression issue

    - by Christian Sciberras
    I have the following URL structure which I need to match to and get the particular id from: /group/subgroup/id-name In short, I need to translate a URL like the following: /Blue Products/Dark Blue/5-Blue_Jelly To: /?pagename=Blue Products&model=5 IMPORTANT: I don't need to match group, I already have group. Example code: <?php foreach($cats as $cat) $cmd->rewrite('/\/'.$cat.'\/unused\/(ID)-unused\//','/?pagename='.$cat.'&model=%ID%'); ?> Edit: This is the completed code: if($groups->count()){ $names=array(); foreach($groups->rows as $row) $names[]=preg_quote($row->group); $names=implode('|',$names); $regex='('.$names.')/([^/]+)/([0-9]{1,})-([^/]+)/?$'; CmsHost::cms()->rewrite_url($regex,'index.php?pagename=Products',true); }

    Read the article

  • Searching within an array of strings...

    - by SoLoGHoST
    Ok, I'm feeling retarded here, I have a string like so: $string = 'function module_testing{'; or it could be like this: $string = 'function module_testing'; And than I have an array of strings like so: $string_array = array('module_testing', 'another_function', 'and_another_function'); Now, is there some sort of preg_match that I can do to test if any of the $string_array values are found within the $string string at any given position? So in this situation, there would be a match. Or is there a better way to do this? I can't use in_array since it's not an exact match, and I'd rather not do a foreach loop on it if I can help it, since it's already in a while loop. Thanks :)
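
    Two straightforward options, shown here as a Python sketch: a plain substring test per needle, or one combined alternation with every needle escaped first (the equivalent of imploding preg_quote'd values with '|' in PHP):

      import re

      string = 'function module_testing{'
      string_array = ['module_testing', 'another_function', 'and_another_function']

      # Option 1: plain substring test, no regex needed.
      print(any(needle in string for needle in string_array))          # True

      # Option 2: one combined regex, escaping each needle first.
      pattern = re.compile('|'.join(re.escape(n) for n in string_array))
      match = pattern.search(string)
      print(match.group(0) if match else None)                         # module_testing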

    Read the article

  • XSL match some but not all

    - by Willb
    I have a solution from an earlier post that was kindly provided by Dimitre Novatchev. <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:my="my:my"> <xsl:output method="xml" version="1.0" encoding="iso-8859-1" indent="yes"/> <xsl:key name="kPhysByName" match="KB_XMod_Modules" use="Physician"/> <xsl:template match="/"> <result> <xsl:apply-templates/> </result> </xsl:template> <xsl:template match="/*/*/*[starts-with(name(), 'InfBy')]"> <xsl:variable name="vCur" select="."/> <xsl:for-each select="document('doc2.xml')"> <xsl:variable name="vMod" select="key('kPhysByName', $vCur)"/> <xsl:copy> <items> <item> <label> <xsl:value-of select="$vMod/Physician"/> </label> <value> <xsl:value-of select="$vMod/XModID"/> </value> </item> </items> </xsl:copy> </xsl:for-each> </xsl:template> </xsl:stylesheet> I now need to use additional fields in my source XML and need the existing labels intact but I'm having problems getting this going. <instance> <NewTag>Hello</newTag1> <AnotherNewTag>Everyone</AnotherNewTag> <InfBy1>Dr Phibes</InfBy1> <InfBy2>Dr X</InfBy2> <InfBy3>Dr Chivago</InfBy3> </instance> It drops the additional labels and outputs <result xmlns:my="my:my"> HelloEveryone <items> <item> <label>Dr Phibes</label> <value>60</value> </item> </items> ... I've been experimenting a lot with <xsl:otherwise> <xsl:copy-of select="."> </xsl:copy-of> </xsl:otherwise> but being an xsl newbie I can't seem to get this to work. I've a feeling I'm barking up the wrong tree! Does anyone have any ideas? Thanks, Will

    Read the article

  • php regex to split invoice line item description

    - by user1053700
    I am attempting to split strings like the following: An item (Item A) which may contain 89798 numbers and letters @ $550.00 4 of Item B @ $420.00 476584 of Item C, with a larger quantity and different currency symbol @ £420.00 into: array( 0 => 1 1 => "some item which may contain 89798 numbers and letters" 2 => $550.00 ); Does that make sense? I am looking for a regex pattern which will split the quantity, description, and price (including symbol). The strings will always be: qty x description @ price+symbol so I assume the regex would be something like: `(match a number and only a number) x (get description letters and numbers before the @ symbol) @ (match the currency symbol and price)` How should I approach this?
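
    A sketch of one possible pattern, in Python and based only on the three sample lines (quantity optional, 'of' as the separator, a single non-digit character as the currency symbol); adjust it if the real data uses the 'qty x description' form instead:

      import re

      # Assumed shape: "<qty> of <description> @ <symbol><price>", with the
      # quantity part optional and the currency symbol a single character.
      LINE = re.compile(r'^(?:(\d+)\s+of\s+)?(.+?)\s*@\s*([^\d\s]\d+(?:\.\d+)?)\s*$')

      items = [
          'An item (Item A) which may contain 89798 numbers and letters @ $550.00',
          '4 of Item B @ $420.00',
          '476584 of Item C, with a larger quantity and different currency symbol @ £420.00',
      ]

      for item in items:
          qty, description, price = LINE.match(item).groups()
          print(qty or '1', '|', description, '|', price)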

    Read the article

  • How to get all captures of subgroup matches with preg_match_all()?

    - by hakre
    Update/Note: I think what I'm probably looking for is to get the captures of a group in PHP. Referenced: PCRE regular expressions using named pattern subroutines. (Read carefully:) I have a string that contains a variable number of segments (simplified): $subject = 'AA BB DD '; // could be 'AA BB DD CC EE ' as well I would like now to match the segments and return them via the matches array: $pattern = '/^(([a-z]+) )+$/i'; $result = preg_match_all($pattern, $subject, $matches); This will only return the last match for the capture group 2: DD. Is there a way that I can retrieve all subpattern captures (AA, BB, DD) with one regex execution? Isn't preg_match_all suitable for this? This question is a generalization. Both the $subject and $pattern are simplified. Naturally with such the general list of AA, BB, .. is much more easy to extract with other functions (e.g. explode) or with a variation of the $pattern. But I'm specifically asking how to return all of the subgroup matches with the preg_...-family of functions. For a real life case imagine you have multiple (nested) level of a variant amount of subpattern matches. Example This is an example in pseudo code to describe a bit of the background. Imagine the following: Regular definitions of tokens: CHARS := [a-z]+ PUNCT := [.,!?] WS := [ ] $subject get's tokenized based on these. The tokenization is stored inside an array of tokens (type, offset, ...). That array is then transformed into a string, containing one character per token: CHARS -> "c" PUNCT -> "p" WS -> "s" So that it's now possible to run regular expressions based on tokens (and not character classes etc.) on the token stream string index. E.g. regex: (cs)?cp to express one or more group of chars followed by a punctuation. As I now can express self-defined tokens as regex, the next step was to build the grammar. This is only an example, this is sort of ABNF style: words = word | (word space)+ word word = CHARS+ space = WS punctuation = PUNCT If I now compile the grammar for words into a (token) regex I would like to have naturally all subgroup matches of each word. words = (CHARS+) | ( (CHARS+) WS )+ (CHARS+) # words resolved to tokens words = (c+)|((c+)s)+c+ # words resolved to regex I could code until this point. Then I ran into the problem that the sub-group matches did only contain their last match. So I have the option to either create an automata for the grammar on my own (which I would like to prevent to keep the grammar expressions generic) or to somewhat make preg_match working for me somehow so I can spare that. That's basically all. Probably now it's understandable why I simplified the question. Related: pcrepattern man page Get repeated matches with preg_match_all()
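
    With PCRE (and Python's re) a repeated capturing group only remembers its last capture, so the usual workaround is two passes: validate the overall shape once, then collect the repeated piece with a global match. A minimal Python sketch of that idea; in PHP the second pass would be another preg_match_all over the matched span:

      import re

      subject = 'AA BB DD CC EE '

      # Pass 1: check that the whole string really is "(letters + space)+".
      if re.fullmatch(r'(?:[a-z]+ )+', subject, re.I):
          # Pass 2: a global match on the item pattern yields every segment,
          # not just the last capture of the repeated group.
          print(re.findall(r'[a-z]+', subject, re.I))   # ['AA', 'BB', 'DD', 'CC', 'EE']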

    Read the article

  • Load big XML files to mySQL database (PHP)

    - by Kees
    Hello there, For a new project I need to load big XML files (200MB+) into a MySQL database. There are +- 20 feeds I need to match with that (not all fields are the same). Now when I want to load the XML I get this error: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 171296569 bytes) in E:\UsbWebserver\Root\****\application\libraries\MY_xml.php on line 21 Is there an easy solution for this? It's not possible to get the feed in parts of a few MB's each. Thank you very much! P.S. Does somebody have an idea how to match XML feeds easily?
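
    The usual answer is a streaming parser, so the 200MB document never has to sit in memory in one piece; in PHP that role is played by XMLReader. The idea, sketched in Python with a made-up <product> record element and file name:

      import xml.etree.ElementTree as ET

      def stream_records(path):
          # Yields one record at a time and frees it immediately, so memory
          # use stays flat even for very large feeds.
          for event, elem in ET.iterparse(path, events=('end',)):
              if elem.tag == 'product':              # hypothetical record element
                  yield {child.tag: child.text for child in elem}
                  elem.clear()

      for record in stream_records('feed.xml'):      # hypothetical file name
          pass  # map this feed's fields and insert the row into MySQL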

    Read the article

  • [MySQL] Optimize Query

    - by bordeux
    Hello. I have a problem with optimizing this query: SET @SEARCH = "dokumentalne"; SELECT SQL_NO_CACHE `AA`.`version` AS `Version` , `AA`.`contents` AS `Contents` , `AA`.`idarticle` AS `AdressInSQL` , `AA` .`topic` AS `Topic` , MATCH (`AA`.`topic` , `AA`.`contents`) AGAINST (@SEARCH) AS `Relevance` , `IA`.`url` AS `URL` FROM `xv_article` AS `AA` INNER JOIN `xv_articleindex` AS `IA` ON ( `AA`.`idarticle` = `IA`.`adressinsql` ) INNER JOIN ( SELECT `idarticle` , MAX( `version` ) AS `version` FROM `xv_article` WHERE MATCH (`topic` , `contents`) AGAINST (@SEARCH) GROUP BY `idarticle` ) AS `MG` ON ( `AA`.`idarticle` = `MG`.`idarticle` ) WHERE `IA`.`accepted` = "yes" AND `AA`.`version` = `MG`.`version` ORDER BY `Relevance` DESC LIMIT 0 , 30 Right now this query takes about 20 seconds. How can I optimize it? EXPLAIN gives this: 1 PRIMARY AA ALL NULL NULL NULL NULL 11169 Using temporary; Using filesort 1 PRIMARY ALL NULL NULL NULL NULL 681 Using where 1 PRIMARY IA ALL accepted NULL NULL NULL 11967 Using where 2 DERIVED xv_article fulltext topic topic 0 1 Using where; Using temporary; Using filesort This is an example server with my data: user: bordeux_4prog password: 4prog phpmyadmin: http://phpmyadmin.bordeux.net/ chive: http://chive.bordeux.net/

    Read the article

  • Deleting entire lines in a text file based on a partial string match with Windows PowerShell

    - by Charles
    So I have several large text files I need to sort through, and remove all occurrences of lines which contain a given keyword. So basically, if I have these lines: This is not a test This is a test Maybe a test Definitely not a test And I run the script with 'not', I need to entirely delete lines 1 and 4. I've been trying with: PS C:\Users\Admin (Get-Content "D:\Logs\co2.txt") | Foreach-Object {$_ -replace "3*Program*", ""} | Set-Content "D:\Logs\co2.txt" but it only replaces the 'Program' and not the entire line.
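
    -replace rewrites text inside each line, but the goal here is to drop whole lines, which in PowerShell is normally done by filtering (for example Where-Object with -notmatch) rather than replacing. The same keep-only-non-matching-lines idea as a small Python sketch:

      keyword = 'not'

      # Keep only the lines that do not contain the keyword, then rewrite the file.
      with open(r'D:\Logs\co2.txt') as fh:
          kept = [line for line in fh if keyword not in line]

      with open(r'D:\Logs\co2.txt', 'w') as fh:
          fh.writelines(kept)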

    Read the article

  • Reverse DNS does not match SMTP Banner

    - by Bastien974
    Hi all, I had this warning with mxtoolbox. I know that it's not necessarily a big problem, but since we are having lots of issues with email delivery, I want to check everything. I have an Exchange Server 07 + SonicWall. My FQDN is office.mydomain.ca for send/receive connectors. When I try: telnet office.mydomain.ca 25 -- 220 MYSERVER.mydomain.local Microsoft ESMTP MAIL Service ready at Fri, 7 May 2010 10:34:36 -0400 I can change my SMTP banner in the SonicWall, but I don't know what to write, whether there is a specific syntax, or what the consequences might be if it doesn't work. Thanks for your help.

    Read the article

  • (Apache) RedirectMatch regex to match all directories except those in my list

    - by dotben
    I need to 301 redirect all requests coming in to http//server.com over to http//newserver.com, unless the request is for an arbitrary list of directories we are maintaining on the legacy server (e.g. server.com/foo or server.com/bar). I'm having a hard time working out how best to set this up and the regexes. E.g., I need: http//server.com/page1 redirect to http//newserver.com/page1 http//server.com/dir1/page2 redirect to http//newserver.com/dir1/page2 http//server.com/foo to load as normal http//server.com/bar/baz.html to load as normal ... because 'foo' and 'bar' are in my list of legacy dirs. I'm wondering if the way to do this is to somehow catch the matches in my list and then redirect anything else as a wildcard over to the new server -- but I can't make it work. Can anyone help me with some regex and rewrites for those please? Thanks (apologies for fudging the http:// in the urls, ServerFault thinks I'm posting hyperlinks and won't otherwise let me post this)

    Read the article

  • PostgreSQL timezone does not match system timezone

    - by Martin C.
    I have several PostgreSQL 9.2 installations where the timezone used by PostgreSQL is GMT, despite the entire system being "Europe/Vienna". I double-checked that postgresql.conf does not contain a timezone setting, so according to the documentation it should fall back to the system's timezone. However, # su -s /bin/bash postgres -c "psql mydb" mydb=# show timezone; TimeZone ---------- GMT (1 row) mydb=# select now(); now ------------------------------- 2013-11-12 08:14:21.697622+00 (1 row) Any hints on where the GMT timezone could come from? The system user does not have TZ set, and /etc/timezone and /etc/timeinfo seem to be configured correctly. # cat /etc/timezone Europe/Vienna # date Tue Nov 12 09:15:42 CET 2013 Any hints are appreciated, thanks in advance!

    Read the article
