Search Results

Search found 78796 results on 3152 pages for 'find in files'.


  • beautifulsoup: find the n-th element's sibling

    - by deostroll
    I have a complex HTML DOM tree of the following nature:

        <table> ...
          <tr>
            <td> ... </td>
            <td>
              <table>
                <tr>
                  <td>
                    <!-- inner most table -->
                    <table> ... </table>
                    <h2>This is hell!</h2>
                  </td>
                </tr>
              </table>
            </td>
          </tr>
        </table>

    I have some logic to find the innermost table. But after having found it, I need to get the next sibling element (the h2). Is there any way to do this?
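
    A minimal sketch of one way to do this with BeautifulSoup 4, assuming html_doc holds the markup above and inner stands in for whatever the existing logic returns (both names are illustrative):

        from bs4 import BeautifulSoup

        soup = BeautifulSoup(html_doc, "html.parser")

        # Stand-in for the existing logic: take a <table> that contains no
        # further nested <table>.
        inner = next(t for t in soup.find_all("table") if t.find("table") is None)

        # find_next_sibling() skips the whitespace text nodes between tags
        heading = inner.find_next_sibling("h2")
        print(heading.get_text())   # -> "This is hell!"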

    Read the article

  • C++ find largest BST in a binary tree

    - by fonjibe
    What is your approach to finding the largest BST in a binary tree? I refer to this post, where a very good implementation for checking whether a tree is a BST or not is:

        bool isBinarySearchTree(BinaryTree * n,
                                int min = std::numeric_limits<int>::min(),
                                int max = std::numeric_limits<int>::max())
        {
            return !n || (min < n->value && n->value < max
                          && isBinarySearchTree(n->l, min, n->value)
                          && isBinarySearchTree(n->r, n->value, max));
        }

    It is quite easy to implement a solution to find whether a tree contains a binary search tree. I think the following method does it:

        bool includeSomeBST(BinaryTree* n)
        {
            if (!isBinarySearchTree(n)) {
                if (!isBinarySearchTree(n->left))
                    return isBinarySearchTree(n->right);
                else
                    return true;
            }
            else
                return true;
        }

    But what if I want the largest BST? This is my first idea:

        BinaryTree* largestBST(BinaryTree* n)
        {
            if (isBinarySearchTree(n))
                return n;
            if (!isBinarySearchTree(n->left)) {
                if (!isBinarySearchTree(n->right)) {
                    if (includeSomeBST(n->right))
                        return largestBST(n->right);
                    else if (includeSomeBST(n->left))
                        return largestBST(n->left);
                    else
                        return NULL;
                }
                else
                    return n->right;
            }
            else
                return n->left;
        }

    But it's not actually returning the largest one. I struggle to make the comparison. How should it take place? Thanks
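
    For what it's worth, the usual way to find the largest BST is a single post-order pass that returns, for each node, whether its subtree is a BST together with its size and min/max value, and remembers the best subtree seen so far; this avoids calling isBinarySearchTree repeatedly. A sketch of that idea, written in Python for brevity (the Node class is illustrative, not from the question):

        import math
        from dataclasses import dataclass

        @dataclass
        class Node:
            value: int
            left: "Node" = None
            right: "Node" = None

        def largest_bst(root):
            """Return (root of the largest BST subtree, its size)."""
            best = [None, 0]

            # post-order: returns (is_bst, size, min, max) for the subtree at n
            def visit(n):
                if n is None:
                    return True, 0, math.inf, -math.inf
                left_ok, ls, lmin, lmax = visit(n.left)
                right_ok, rs, rmin, rmax = visit(n.right)
                if left_ok and right_ok and lmax < n.value < rmin:
                    size = ls + rs + 1
                    if size > best[1]:
                        best[0], best[1] = n, size
                    return True, size, min(lmin, n.value), max(rmax, n.value)
                return False, 0, 0, 0

            visit(root)
            return best[0], best[1]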

    Read the article

  • Find and Replace with Notepad++

    - by Levi
    I have a document that was converted from PDF to HTML for use on a company website, to be referenced and indexed for search. I'm attempting to format the converted document to meet my needs, and in doing so I am trying to clean up some of the junk that was pulled over from when it was a PDF, such as page numbers, headers, and footers. Luckily, all of the lines that need to be removed come in blocks of 4 lines; unfortunately, they are not exactly the same, so they cannot be removed with a simple literal replace. The lines contain numbers which are incremental, as they correlate with the pages. How can I remove the following example from my HTML file?

        Title<br>
        10<br>
        <hr>
        <A name=11></a>Footer<br>

    I've tried many different regular expression attempts, but as my skill in that area is limited I can't find the proper syntax. I'm sure I'm missing something fairly easy, as it would seem all I need is a wildcard replace for the two numbers in the code and the rest is literal. Any help is appreciated.
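
    A hedged sketch of the kind of pattern that should work, shown here with Python's re module; the same expression can be pasted into a PCRE-style find-and-replace (recent Notepad++ builds use a compatible engine and can match across line breaks when the newlines are absorbed by \s*), with \d+ standing in for the incrementing numbers and "Title"/"Footer" standing in for whatever literal header and footer text repeats in your file:

        import re

        html = open("document.html", encoding="utf-8").read()   # placeholder file name

        pattern = r'Title<br>\s*\d+<br>\s*<hr>\s*<A name=\d+></a>Footer<br>\s*'
        cleaned = re.sub(pattern, '', html)

        open("document.cleaned.html", "w", encoding="utf-8").write(cleaned)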

    Read the article

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut-and-paste and replace operations on some columns. The data is pretty well organized; the only problem is I cannot work on it in Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to: remove the 4th, 5th, 6th, 7th, 8th and 9th columns; find every _ character from column 10 onwards and replace it with a space ( ) character; replace every ? with zero (0); replace every comma with a tab; remove the first row (the one that has the column names); in the 2nd column, replace every 0 with 1, every 1 with 2 and every ? with 0; in the 3rd column, replace F with 2, M with 1 and ? with 0; so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (Both input and output should read one line per row, with no extra blank rows.) Is there a memory-efficient way of doing that with Java (and I need code to do that), or a usable tool for working with this large data so that I can easily apply Excel-like functionality?
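
    The asker wants Java, but the essential point is to stream the file one line at a time instead of loading it whole; the same shape carries over directly. A minimal sketch of that approach in Python (file names are placeholders):

        sex_map = {"F": "2", "M": "1", "?": "0"}
        affection_map = {"0": "1", "1": "2", "?": "0"}

        with open("input.csv") as src, open("output.txt", "w") as dst:
            next(src)                                  # drop the header row
            for line in src:
                cols = line.rstrip("\n").split(",")
                cols[1] = affection_map.get(cols[1], cols[1])   # recode 2nd column
                cols[2] = sex_map.get(cols[2], cols[2])         # recode 3rd column
                del cols[3:9]                          # drop original columns 4-9
                # genotype columns (originally column 10 onwards): "_" -> space, "?" -> "0"
                cols[3:] = [c.replace("_", " ").replace("?", "0") for c in cols[3:]]
                dst.write("\t".join(cols) + "\n")       # commas become tabs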

    Read the article

  • PHP - Find a string in a file, then show its line number

    - by xZero
    I have an application which needs to open a file, find a string in it, and print the line number where the string is found. For example, the file example.txt contains a few hashes:

        APLF2J51 1a79a4d60de6718e8e5b326e338ae533
        EEQJE2YX 66b375b08fc869632935c9e6a9c7f8da
        O87IGF8R c458fb5edb84c54f4dc42804622aa0c5
        APLF2J51
        B7TSW1ZE 1e9eea56686511e9052e6578b56ae018
        EEQJE2YX affb23b07576b88d1e9fea50719fb3b7

    So, I want PHP to search for "1e9eea56686511e9052e6578b56ae018" and print out its line number, in this case 4. Please note that there will not be duplicate hashes in the file. I found a few pieces of code around the Internet, but none seem to work. I tried this one:

        <?PHP
        $string = "1e9eea56686511e9052e6578b56ae018";
        $data = file_get_contents("example.txt");
        $data = explode("\n", $data);
        for ($line = 0; $line < count($data); $line++) {
            if (strpos($data[$line], $string) >= 0) {
                die("String $string found at line number: $line");
            }
        }
        ?>

    It just says that the string is found at line 0, which is not correct. The final application is much more complex than that: after it finds the line number, it should replace the string with something else, save the changes to the file, and then continue processing. Thanks in advance :)
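
    The likely culprit is the comparison: strpos() returns false when there is no match, and in PHP false >= 0 evaluates to true, so the condition fires on the very first line; the usual fix is to test strpos($data[$line], $string) !== false. The same lookup, sketched in Python for comparison (the file name is taken from the question):

        target = "1e9eea56686511e9052e6578b56ae018"

        with open("example.txt") as fh:
            for lineno, line in enumerate(fh):      # 0-based, like the PHP loop
                if target in line:
                    print(f"String {target} found at line number: {lineno}")
                    break
            else:
                print("String not found")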

    Read the article

  • Find next and previous link in a hierarchy

    - by rebellion
    I have a hierarchy with links nested in list elements like this:

        <ul>
          <li><a href="#">Page 1</a>
            <ul>
              <li><a href="#">Page 1.1</a></li>
              <li><a href="#">Page 1.2</a>
                <ul>
                  <li><a href="#">Page 1.2.1</a></li>
                  <li><a href="#">Page 1.2.2</a></li>
                </ul>
              </li>
              <li><a href="#">Page 1.3</a></li>
            </ul>
          </li>
          <li><a href="#">Page 2</a>
            <ul>
              <li><a href="#">Page 2.1</a></li>
              <li><a href="#">Page 2.2</a></li>
            </ul>
          </li>
          <li><a href="#">Page 3</a>
            <ul>
              <li><a href="#">Page 3.1</a>
                <ul>
                  <li><a href="#">Page 3.1.1</a></li>
                  <li><a href="#">Page 3.1.2</a></li>
                </ul>
              <li><a href="#">Page 3.2</a></li>
              <li><a href="#">Page 3.3</a></li>
                <ul>
                  <li><a href="#">Page 3.1.1</a></li>
                  <li><a href="#">Page 3.1.2</a></li>
                </ul>
              </li>
            </ul>
          </li>
        </ul>

    Basically just a sitemap. But I want to make next and previous links with jQuery, which find the active page you're on (probably by checking for a class) and locate the previous and next anchor elements (taking no regard of the hierarchy). I've tried with next(), previous() and find() but can't seem to get it to work. What is the easiest way to get the anchor elements before and after the current one?

    Read the article

  • C# RegEx - find html tags (div and anchor)

    - by czesio
    Hi, I have to retrieve several div sections (of the specific class name "row ") with their content, and additionally find all anchor tags (link URLs) with class "underline red bold". Briefly speaking: get sections of ... (divs, tags ...) and a collection of URLs:

        string[] urls = {"/searchClickThru?pid=prod56534895&q=&rpos=109181&rpp=10&_dyncharset=UTF-8&sort=&url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p"}

    The entire page looks like this:

        <html> ... a lot of stuff
        <div class="row ">
          <div class="photo">
            <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
              <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f0607827.jpg">
            </a>
          </div>
          <div class="desc">
            <div class="l1">
              <div class="icons"> </div>
              <table cellspacing="0" cellpadding="0" border="0">
                <tbody>
                  <tr>
                    <td>
                      <div class="fleft">
                        <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                          Culture And Gender <br>Intimate Relation</a>
                      </div>
                      <div class="fleft"> </div>
                    </td>
                  </tr>
                </tbody>
              </table>
            </div>
            <div class="l2">
              <div> </div>
              <div>
                <div class="but"> </div>
              </div>
            </div>
            <div class="l3">
              Long description
              <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                more<img alt="" src="/b/img/arr_red_sm.gif">
              </a>
            </div>
          </div>
        </div>
        <div class="omit"></div>
        <div class="row ">
          <div class="photo">
            <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534899,p">
              <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f06078222.jpg">
            </a>
          </div>
          <div class="desc">
            <div class="l1">
              <div class="icons"> </div>
              <table cellspacing="0" cellpadding="0" border="0">
                <tbody>
                  <tr>
                    <td>
                      <div class="fleft">
                        <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod5653489225,p">
                          Culture And Gender <br>Intimate Relation</a>
                      </div>
                      <div class="fleft"> </div>
                    </td>
                  </tr>
                </tbody>
              </table>
            </div>
            <div class="l2">
              <div> </div>
              <div>
                <div class="but"> </div>
              </div>
            </div>
            <div class="l3">
              Long description
              <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                more<img alt="" src="/b/img/arr_red_sm.gif">
              </a>
            </div>
          </div>
        </div>

    Can anybody help me create a suitable regex?
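
    For the nested div class="row " blocks, an HTML parser (HtmlAgilityPack on the C# side) is usually more robust than a regex, but the "underline red bold" links are easy enough to pull out with a pattern. A sketch using Python's re module for brevity; the pattern string itself is plain enough to drop into .NET's Regex class as well (page_html is assumed to hold the markup above):

        import re

        anchor_re = re.compile(
            r'<a\s+class="underline red bold"\s+href="([^"]+)"',
            re.IGNORECASE)

        urls = anchor_re.findall(page_html)
        # -> the href of each class="underline red bold" anchor, still HTML-escaped
        #    (&amp; rather than &), so unescape before using them as URLs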

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), and I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed which databases and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL page says:

        The general query log - Established client connections and statements received from clients

    See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html

    It also says:

        By default, all log files are created in the mysqld data directory.

    So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query logs are stored on a Linux machine?"

    A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.

    Read the article

  • Header file-name as argument

    - by Alphaneo
    Objective: I have a list of header files (about 50 of them), and each header file has a few arrays with constant elements. I need to write a program to count the elements of the arrays and create some other form of output (which will be used by the hardware group).

    My solution: I included all 50-odd files and wrote an application, and then dumped all the elements of the arrays in the specified format.

    My environment: Visual Studio 6, Windows XP.

    My problem: Each time there is a new set of header files, I am now changing the VC++ project settings to point to the new set of header files, and then rebuilding.

    My question: It may sound a bit insane, but is there any way to specify the headers via command-line arguments or something? I just want to avoid re-compiling the source every time...

    Read the article

  • How do I rename my old Program Files folder?

    - by SteveJ
    I installed a new SSD as my boot drive (C:), installed a fresh version of Windows 7 64-bit, and kept my existing SATA drive in the system (D:). I want to keep using my D: drive for file storage (no sense filling up the SSD with stuff that isn't performance critical) and I haven't formatted the D: drive because there's stuff on there I want to keep. I also want to create a new "D:\Program Files" folder so I can install apps that aren't performance-critical there.

    So I decided I'd rename the existing "D:\Program Files" from my old Windows install to "D:\Old Program Files" and then create a new "D:\Program Files" directory. Easy, right? I can see "D:\Program Files" just fine in Explorer. I right click, select Rename, and type "Old Program Files." I get the alert that says I need Admin permission to do this, so I press the confirm button with the shield. But the folder still appears as "Program Files" in Explorer.

    I jump out to the command line, and it appears as "Old Program Files" when I do a dir. I can even do mkdir "Program Files" and when I do a dir they both appear. But in the Explorer GUI, it looks like I have two "Program Files" folders. This will be confusing during app installation because I won't be able to tell which one is which. I've tried poking around in the properties tab of the old folder, but can't find anything that would explain what's causing the issue. How do I rename the old Program Files folder?

    Read the article

  • How long do you keep log files?

    - by Alex
    I have an application which writes its log files in a special folder. Now I'd like to add a functionality to delete these logs after a defined period of time automatically. But how long should I keep the log files? What are "good" default values (7 or 180 days)? Or do you prefer other criteria (e.g. max. used disk space)?
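
    Whatever retention period you settle on, the deletion functionality itself is small. A minimal sketch, assuming the logs sit in one folder and age is judged by modification time (folder name and extension are placeholders):

        import time
        from pathlib import Path

        LOG_DIR = Path("logs")
        MAX_AGE_DAYS = 30            # whatever retention policy you choose

        cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
        for log_file in LOG_DIR.glob("*.log"):
            if log_file.stat().st_mtime < cutoff:
                log_file.unlink()    # delete logs older than the cutoff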

    Read the article

  • Encrypt uploaded pdf files with mcrypt and php

    - by microchasm
    I'm currently set up with a CentOS box that utilizes mcrypt to encrypt/decrypt data to/from the database. In my haste, I forgot that I also need a solution to encrypt files (primarily pdf, with an xls and txt file here and there). Is there a way to utilize mcrypt to encrypt uploaded pdf files? I understand the possibility of file_get_contents() with txt; is a similar solution available for other formats? Thanks!
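
    For what it's worth, file_get_contents() is not limited to text: it returns the raw bytes of any file, so a PDF or XLS can be encrypted the same way as a TXT. A sketch of the general read-bytes/encrypt/write-bytes pattern, shown with Python's cryptography package rather than mcrypt (file names are placeholders):

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()      # keep this key somewhere safe
        cipher = Fernet(key)

        # Any file format works the same way: read raw bytes, encrypt, write out.
        with open("upload.pdf", "rb") as fh:
            encrypted = cipher.encrypt(fh.read())
        with open("upload.pdf.enc", "wb") as fh:
            fh.write(encrypted)

        # cipher.decrypt(encrypted) recovers the original bytes later.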

    Read the article

  • iPhone app to read text files

    - by bandito40
    Hi, I need to edit some of the local text files on my iPhone, but so far all the apps I have downloaded do not navigate the OS 3 file tree for me to load and edit them. I need to do this on the iPhone itself, as I can no longer access it via SSH or with the iPhone cable. One of the files to edit is an SSH config file, which is what is not allowing SSH connections. Any ideas on apps or other methods that I could use?

    Read the article

  • When did my LaTeX files become TeX files!?

    - by andrz_001
    After transferring all my files onto a new machine, all files that were once LaTeX files (having the TeXnicCenter "T" icon) are now TeX files (having the TeXworks icon, which I don't remember installing!). But the files are still associated with TeXnicCenter. In other words, the files open with TeXnicCenter, yet the "type of file" is "TeX document". I'm using the same MiKTeX distribution and TeXnicCenter (both for Windows XP) as on the old machine. Again, I don't know how or where I got this TeXworks on the new machine (assuming it came with Windows).

    A question: Could these TeX associations cause me any new surprises, anything unexpected, like certain packages not working? Because I can't have that!! Should I not fix it if it ain't broken!? For instance, one particular file yesterday would not produce any PDF output after compiling. After trying many things (other files compiled as they should have), I got the idea to try it without the setspace package for double spacing, and it worked. I have no idea why. Anyway....

    Real question: How do I revert to the LaTeX file associations?

    Rhetorical question: Would this question have better luck on CTAN or Stack Overflow? I guess I'll find out! Thank you wholeheartedly!

    Read the article

  • What program can I use to open .mld (recorded webcam) files?

    - by mike
    I am looking for a program to open video files with the extension .mld (this is a file from video recording software I had a long time ago). Does anybody know any programs in Ubuntu that can open such files? As the title says: I'm looking for a program that can open video files with the .mld extension; it is a file from a webcam recorder I used to have on Windows. Thanks very much in advance.

    Read the article

  • On Linux, how can I make a list of files that are owned by a particular owner and then fix the group and owner?

    - by Stuart Woodward
    I have a deep and complex file system where some files have been accidentally written by root. I want to change the ownership of those files back to the original owner in one go. I am playing with commands like:

        find /folder -type f | xargs ls -l | grep "root root"

    but there is a lot of garbage coming out too. I want to make a list first and then change only the files in that list, after confirmation.
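
    find /folder -user root -type f builds that list in one go without the grep. As a sketch of the list-then-confirm idea in Python (the target user name is a placeholder):

        import os
        import pwd
        from pathlib import Path

        new_owner = "stuart"                     # placeholder: the original owner
        pw = pwd.getpwnam(new_owner)

        # 1) collect regular files currently owned by root
        root_owned = []
        for dirpath, _dirs, files in os.walk("/folder"):
            for name in files:
                path = Path(dirpath) / name
                if path.lstat().st_uid == 0:     # uid 0 == root
                    root_owned.append(path)

        # 2) show the list, then fix owner and group only after confirmation
        print("\n".join(str(p) for p in root_owned))
        answer = input(f"chown {len(root_owned)} files to {new_owner}? [y/N] ")
        if answer.lower() == "y":
            for p in root_owned:
                os.chown(p, pw.pw_uid, pw.pw_gid)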

    Read the article

  • I would like to pipe the output of find into the input list of scp. How?

    - by user13184
    I'm a novice Linux user and I am trying to send a long list of files from one computer to another. The argument list is too long, so I am using find. I am having trouble setting up the expression, though. Can someone help? Here is what I would normally type for a short argument list:

        scp ./* phogan@computer/directory...

    Here's what I think this might translate into with find:

        scp find . -name "*" phogan@computer/directory...

    Maybe I could use piping? Any suggestions would help. Thanks in advance.
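
    One caveat: the "argument list too long" limit applies to any single command invocation, so the file list has to be sent in batches however it is built (xargs splits its input into appropriately sized batches, and rsync avoids the problem entirely). A sketch of the batching idea in Python, with the destination as a placeholder:

        import subprocess
        from pathlib import Path

        DEST = "phogan@computer:/directory"      # placeholder user@host:path
        BATCH = 500                              # keep each scp call well under the limit

        files = [str(p) for p in Path(".").glob("*") if p.is_file()]

        # scp accepts many source files before the final destination argument,
        # so send the list in manageable chunks.
        for i in range(0, len(files), BATCH):
            subprocess.run(["scp", "-p", *files[i:i + BATCH], DEST], check=True)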

    Read the article

  • PHP Help with "if" statement to dynamically include files

    - by Adrian M.
    Hello, I have these files: "id_1_1.php", "id_1_2.php", "id_1_3.php", etc., and "id_2_1.php", "id_2_2.php", "id_2_3.php", etc. The number of files is not known, because it will always grow. All the files are in the same directory. I want to make an if statement to include the files only if their name ends with "_1", and another function to load all the files that start with "id_1". How can I do this? Thank you!
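
    PHP's glob() handles both cases directly (glob("$dir/*_1.php") and glob("$dir/id_1_*.php"), then include each match). The same selection logic, sketched in Python with the directory name as a placeholder:

        from pathlib import Path

        SCRIPT_DIR = Path("includes")                    # placeholder directory

        # files whose name ends with "_1" (before the .php extension)
        ends_with_1 = sorted(SCRIPT_DIR.glob("*_1.php"))

        # files whose name starts with "id_1"
        starts_with_id_1 = sorted(SCRIPT_DIR.glob("id_1_*.php"))

        for path in ends_with_1:
            print("would include:", path)                # in PHP: include $path;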

    Read the article

  • Recursively follow files in bash

    - by user328955
    I have files which contain file names pointing to other files. Those files contain further file names pointing to further files, and so on. I need a bash script which follows each file recursively and logs every touched file into a log file during the run.

        file1:
        file2
        file3

        file2:
        file4

        file3:
        file5

    file4 and file5 are empty. Result:

        file1
        file2
        file4
        file3
        file5
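
    A sketch of the depth-first traversal in Python (a bash function looping over $(cat "$f") and calling itself has the same shape); the visited set guards against files that happen to reference each other in a loop:

        def follow(path, log, visited):
            """Log this file, then every file it names, depth-first."""
            if path in visited:                 # guard against circular references
                return
            visited.add(path)
            log.write(path + "\n")
            try:
                with open(path) as fh:
                    for line in fh:
                        name = line.strip()
                        if name:
                            follow(name, log, visited)
            except FileNotFoundError:
                pass                            # a listed file does not exist; skip it

        with open("touched.log", "w") as log:   # placeholder log file name
            follow("file1", log, set())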

    Read the article

  • Folder Redirection/Offline Files on Win 7 | Folders are empty when not connected to the domain

    - by Matt
    I've been struggling with this issue for days and cannot seem to find anyone else with a similar issue. I will note first that I have tried using both roaming profiles and the group policy setting to force local profiles. Now, on to the problem.

    What I am trying to do is have my teachers' accounts log onto their laptops using their domain credentials. Once logged in, their Desktop and Documents are redirected to a network share, //server/redirects/documents/. This works fine when the computer is connected to the domain network: Offline Files sync works great and caches the files locally. However, this all breaks down when the user logs in while the computer is no longer connected to the domain network. When the user logs in, the Desktop and Documents are empty. What I find very odd is that if I manually go to the offline files folder, all of the files are there; the group policy folder redirection just does not redirect to the offline folder. Is this by design? (It does not work like this on Vista: I have the exact same group policy settings set on Vista machines and it works flawlessly.)

    Additional info: When I look at the event log, there are no folder redirection events at all when the user logs in and is not connected to the network. In addition, a new profile is created in c:/users/username.domain.00x; every login creates an additional profile. There is also an event that states that registry files were still in use. Any help would be appreciated.

    Read the article

  • Does apt-cacher Change Packages `Access Time`?

    - by tAmir Naghizadeh
    I tried to remove long-unused packages from the apt-cacher archive using find:

        1. $ find /var/cache/apt-cacher -atime +5 -type f -name ".*deb*" | wc -l
           8471
        2. $ find /var/cache/apt-cacher -atime +9 -type f -name ".*deb*" | wc -l
           2269
        3. $ find /var/cache/apt-cacher -atime +10 -type f -name ".*deb*" | wc -l
           0

    Can I depend on the access time to gauge apt-cacher archive usage? That is, does the access time change only when a package is received by a user? (We have been using apt-cacher for more than 6 months.)

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant access to the logs for a PHP application? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

    1. Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application. That's inefficient, because it forks a new process and the data has to be read twice.
    2. Add www-data to the adm group (which can read the logs). That's insecure.
    3. Start a PHP process via cron every minute to read the logs. That's not very good, because it doesn't allow real-time monitoring. Also, this script will be started even when I don't read the logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    4. Create hardlinks to all the log files with lowered permissions. I guess that won't work, because logrotate could recreate the log files and they'll change inode number.
    5. Start a separate nginx/Apache server under a privileged user that may read the logs.

    Does anyone have a better solution?

    Read the article
