Search Results

Search found 3450 results on 138 pages for 'extract cursors'.

Page 2/138

  • How do I extract HTML content using Regex in PHP

    - by gAMBOOKa
    I know, I know... regex is not the best way to extract HTML text. But I need to extract article text from a lot of pages, and I can store a regex in the database for each website. I'm not sure how XML parsers would work with multiple websites; you'd need a separate function for each one. In any case, I don't know much about regexes, so bear with me. I've got an HTML page in a format similar to this:

        <html>
        <head>...</head>
        <body>
        <div class=nav>...</div><p id="someshit" />
        <div class=body>....</div>
        <div class=footer>...</div>
        </body>

    I need to extract the contents of the body class container. I tried this:

        $pattern = "/<div class=\"body\">\(.*?\)<\/div>/sui";
        $text = $htmlPageAsIs;
        if (preg_match($pattern, $text, $matches))
            echo "MATCHED!";
        else
            echo "Sorry gambooka, but your text is in another castle.";

    What am I doing wrong? My text ends up in another castle.
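
    Two things stand out: in PCRE, \( and \) match literal parentheses, so the pattern above captures nothing (a capture group needs bare parentheses), and the sample HTML writes class=body without quotes while the pattern demands class="body". A minimal sketch of the corrected idea in Python (sample text is hypothetical; a real HTML parser remains the safer route):

        import re

        # Hypothetical sample; real pages will differ.
        html = ('<html><body><div class=nav>...</div>'
                '<div class="body">article text here</div>'
                '<div class=footer>...</div></body></html>')

        # Bare parentheses form the capture group; the quotes are made
        # optional so both class=body and class="body" match.
        pattern = r'<div class="?body"?>(.*?)</div>'
        m = re.search(pattern, html, re.DOTALL | re.IGNORECASE)
        print(m.group(1) if m else "no match")   # -> article text here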

  • How to use database adapters' cursors safely?

    - by lvictorino
    I started to use psycopg2 to connect my little Python script to a PostgreSQL database a few days ago. After some research I found that a lot of database connectors, like psycopg, work using cursors. I know what a cursor is and how to use it. But I still wonder if it's safe to use the same cursor for the whole life of the script. Is it safe? Or would it be preferable to use a different cursor for each query?
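
    A minimal sketch of the per-query approach, assuming a local database and a hypothetical users table; psycopg2 cursors are lightweight, and the with-block closes each one automatically so no state leaks between queries:

        import psycopg2

        # Connection parameters are placeholders.
        conn = psycopg2.connect(dbname="mydb", user="me", password="secret")

        def fetch_users():
            # Fresh cursor per query, closed automatically by the with-block.
            with conn.cursor() as cur:
                cur.execute("SELECT id, name FROM users")
                return cur.fetchall()

        def count_users():
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM users")
                return cur.fetchone()[0]

    Reusing one cursor serially does work, but each new execute discards any unfetched results from the previous query, so per-query cursors are the less error-prone habit.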

  • Cursors Be Gone!

    A short tutorial on converting cursors to more conventional loops.

  • Extract a specific part from an HTML document using PHP cURL and preg_match

    - by user331071
    Hello! I'm trying to extract some information from a webpage using PHP cURL plus preg_match, or any other function, but for some reason it doesn't work at all. For example, from this page http://www.foxtons.co.uk/search?location_ids=1001-29&property_id=712128&search_form=map&search_type=LL&submit_type=search I want to extract the title, which is "4 bed house to rent, Caroline Place, Bayswater, W2", the price, which is "2,300", and the description, which starts at "This fantastic..." and ends at "(Circle and District Lines)." I tried to use PHP cURL plus DOM, but I'm getting a lot of errors like "htmlParseEntityRef: expecting ';' in Entity, line: 243" and no result displayed. I also tried to use preg_match or preg_match_all, but that doesn't work either. A very basic example would be highly appreciated! Thank you!
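
    A hedged sketch of the same scrape in Python; the URL comes from the question, but both patterns are guesses that would need adjusting to the page's actual markup:

        import re
        import urllib.request

        url = ("http://www.foxtons.co.uk/search?location_ids=1001-29"
               "&property_id=712128&search_form=map"
               "&search_type=LL&submit_type=search")
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")

        # Hypothetical patterns; inspect the real page source first.
        title = re.search(r"<title>(.*?)</title>", html, re.DOTALL)
        price = re.search(r"&pound;\s*([\d,]+)", html)

        print(title.group(1).strip() if title else "title not found")
        print(price.group(1) if price else "price not found")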

  • How to extract nested tar.gz files easily?

    - by StarCub
    Hi, I need to extract a tar.gz file. It's about 950 MB and contains another 23 tar.gz files, each of which holds a single tar file. My question is: how can I easily extract all of them? Is there a command-line tool I can use? The structure is like the following:

        foo.tar.gz
        +---bar1.tar.gz
        |   +---foobar1.tar
        +---bar2.tar.gz
        |   +---foobar2.tar
        +---bar3.tar.gz
        |   +---foobar3.tar
        +---bar4.tar.gz
        |   +---foobar4.tar
        +---bar5.tar.gz
        |   +---foobar5.tar
        +---bar6.tar.gz
        |   +---foobar6.tar
        ..........  (23 of them)

    Thanks in advance.
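
    GNU tar unpacks the outer file directly (tar xzf foo.tar.gz); the nested layers just need a loop. A minimal sketch of the whole job in Python's tarfile, assuming everything should land under one destination directory:

        import tarfile
        from pathlib import Path

        def extract_nested(archive: str, dest: str) -> None:
            Path(dest).mkdir(parents=True, exist_ok=True)
            with tarfile.open(archive) as tar:
                tar.extractall(dest)              # foo.tar.gz -> bar*.tar.gz
            while True:                           # loop until no archives remain
                inner = [p for p in Path(dest).rglob("*")
                         if p.is_file() and tarfile.is_tarfile(str(p))]
                if not inner:
                    break
                for p in inner:
                    with tarfile.open(str(p)) as tar:
                        tar.extractall(p.parent)  # unpack next to the archive
                    p.unlink()                    # delete it once unpacked

        extract_nested("foo.tar.gz", "extracted")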

  • Can't disable giant cursors (from accessibility mode)

    - by jackweirdy
    I've just installed Ubuntu 12.04 from a live CD. Out of curiosity, I enabled the accessibility options for people who are hard of sight. As you can guess, this does the usual stuff of inverting colours, increasing text size and making the cursor larger. Having finished the installation, I booted into the new system to find accessibility mode was still enabled. From the LightDM login screen I disabled it, which switched colours and text size back to default; however, it's only the pointer cursor that has gone back to default. To put it another way, the "hand" icon you get when hovering over a link, the cursor that appears when typing, and pretty much every other cursor on the system are still large. I've looked in the Universal Access menu, but there's no option to disable large cursors. I've tried toggling accessibility on and off, but to no avail.

  • How to extract ONLY the contents of the JDK installer

    - by Abel Morelos
    I just downloaded the Java SDK/JDK versions 5 and 6, and I only need the development tools (and some libraries) contained in the installation packages; I don't need to perform an installation. That's why I was looking for a zip package at first (for Windows there is only an .exe installer). I just need to extract the contents of the installation packages. I think this can be done from the command line, but so far I haven't found how to do it (I already considered WinRAR and 7-Zip, but I really want to find a way without using those tools). Have you done this before, and how?

  • C# - Extract Zip with file listing

    - by fonix232
    I would like to extract a pre-set zip file WITHOUT an external library to a given folder, and I would like to inform the user about the current percentage of extraction (with a simple progress bar and a percent label) and the currently extracted file. Is this possible somehow? It is important not to use any other library. (For updating all the labels and the progress bar, I use a separate BackgroundWorker.)
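
    Whatever the language, the usual pattern is to enumerate the archive's entries and extract them one at a time, reporting progress per entry. A minimal sketch of that pattern using Python's built-in zipfile (file names are placeholders):

        import zipfile

        with zipfile.ZipFile("archive.zip") as zf:
            members = zf.infolist()
            for i, member in enumerate(members, start=1):
                zf.extract(member, "output_folder")
                percent = 100 * i // len(members)
                # In a GUI this would update the progress bar and label.
                print(f"{percent:3d}%  {member.filename}")

    Counting bytes (member.file_size) instead of entries gives a smoother bar when file sizes vary widely.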

  • How to extract digg data by digg api

    - by vamsivanka
    I am trying to extract digg data for a user using the url "http://services.digg.com/user/vamsivanka/diggs?count=25&appkey=34asd56asdf789as87df65s4fas6" and the web response is throwing an error: "The remote server returned an error: (403) Forbidden." Please let me know.

        public static XmlTextReader CreateWebRequest(string url)
        {
            HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url);
            webRequest.UserAgent = ".NET Framework digg Test Client";
            webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials;
            webRequest.Accept = "text/xml";
            HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
            System.IO.Stream responseStream = webResponse.GetResponseStream();
            XmlTextReader reader = new XmlTextReader(responseStream);
            return reader;
        }


  • Extract co-ordinates from a vector of co-ordinates and save to file

    - by barsil sil
    I have a vector which contains a list of co-ordinates: x1,y1; x2,y2; ... xn,yn. I am trying to extract each individual element (a co-ordinate) and save them to a file as nicely delimited co-ord pairs that can be easily read, or, what would be nice, in a form I can plot in Excel etc. (as columns of x and y values). My original vector's size is 31, and it was originally constructed as:

        vector<vector<Point> > myvector( previous vector.size() );

    Thanks!

  • Extract some data from a lot of xml files

    - by LifeH2O
    I have cricket player profiles saved as .xml files in a folder. Each file has these tags in it:

        <playerid>547</playerid>
        <majorteam>England</majorteam>
        <playername>Don</playername>

    The playerid is the same as in the .xml file name (each file is a different size, 1 KB to 5 KB). There are about 500 files. What I need is to extract the playername, majorteam and playerid from all these files into a list. I will convert that list to XML later. If you know how I can do it directly to XML, I will be very thankful.
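
    A minimal sketch with Python's standard ElementTree, assuming each file is well-formed XML with those tags somewhere inside (the folder name is hypothetical):

        import xml.etree.ElementTree as ET
        from pathlib import Path

        players = []
        for xml_file in Path("profiles").glob("*.xml"):   # hypothetical folder
            root = ET.parse(xml_file).getroot()
            players.append({
                "playerid":   root.findtext(".//playerid"),
                "playername": root.findtext(".//playername"),
                "majorteam":  root.findtext(".//majorteam"),
            })

        print(players[:3])   # e.g. [{'playerid': '547', 'playername': 'Don', ...}]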

  • How to extract a couple marked strings from a line (python)

    - by GoJian
    My friends, I spent quite some time on this one but cannot yet figure out a better way to do it. I am coding in Python, by the way. So, here is a line of text in a file I am working with, for example:

        ref|ZP_01631227.1| 3-dehydroquinate synthase [Nodularia spumigena CCY9414]...

    How can I extract the two strings "ZP_01631227.1" and "Nodularia spumigena CCY9414" from the line? The pairs of "| |" and brackets are like markers, so we know we want the strings in between them. I guess I could loop over all the characters in the line and do it the hard way, but that takes so much time. I am wondering if there is a Python library or another smart way to do it nicely. Thanks to all!
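
    A minimal sketch with the standard re module; one group captures what sits between the two pipes, another what sits inside the square brackets:

        import re

        line = ("ref|ZP_01631227.1| 3-dehydroquinate synthase "
                "[Nodularia spumigena CCY9414]...")

        m = re.search(r"\|([^|]+)\|.*?\[([^\]]+)\]", line)
        if m:
            accession, organism = m.groups()
            print(accession)   # ZP_01631227.1
            print(organism)    # Nodularia spumigena CCY9414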

  • How to extract images from flash viewers?

    - by RC
    This deals with the (diverse) Flash viewers that let you zoom in on images on websites. I'm trying to extract the large, zoomed-in image rendered by the viewer. In many cases the images seem to be dynamically fetched by the viewer, or are created only for the part of the image you are zooming in on at that point. Ideally, the approach here would be a programmatic one that could be called on an identified Flash element. I expect there is nothing universal, but I am interested in the top few approaches that will cover most cases.

  • Extract history from Korn shell

    - by Luc
    I am not happy about the binary format of the Korn shell's history file. I like to "collect" some of my command lines, many of them actually, and for a long time. I'm talking about years. That doesn't seem easy in Korn because the history file is not plain text, so I can't edit it, and a lot of junk is piling up in it. By "junk" I mean lines that I don't want to keep, like 'cat' or 'man'. So I added these lines to my .profile:

        fc -ln 1 9999 > ~/khistory.txt
        source ~/loghistory.sh ~/khistory.txt

    loghistory.sh contains a handful of sed and sort commands that get rid of a lot of the junk. But apparently it is forbidden to run fc in the .profile file. I can't log in whenever I do; the shell exits right away with signal 11. So I removed that 'fc -l' line from my .profile file and added it to the loghistory.sh script, but the shell still crashes. I also tried this in my .profile:

        strings ~/.sh_history > ~/khistory.txt
        source ~/loghistory.sh

    That doesn't crash, but the output is printed with an additional, random character at the beginning of many lines. I can run 'fc -l' on the command line, but that's no good, I need to automate it. But how? How can I extract my ksh history as plain text? TIA
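
    One hedged workaround, assuming the stray characters are the control bytes ksh stores between history entries: read ~/.sh_history in binary, strip everything non-printable, and filter the junk commands in the same pass. A Python sketch:

        import re
        from pathlib import Path

        raw = Path.home().joinpath(".sh_history").read_bytes()

        # ksh's binary history is mostly command text with control bytes
        # mixed in; keep printable ASCII plus tab and newline only.
        clean = re.sub(rb"[^\x20-\x7e\t\n]", b"", raw).decode("ascii")

        # Drop junk entries such as bare 'cat' or 'man' invocations.
        keep = [line for line in clean.splitlines()
                if line.strip() and line.split()[0] not in {"cat", "man"}]

        Path.home().joinpath("khistory.txt").write_text("\n".join(keep) + "\n")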

  • How to extract ALL typedefs and structs and unions from c++ source

    - by Michael Wells
    I have inherited a Visual Studio project that contains hundreds of files. I would like to extract all the typedefs, structs and unions from each .h/.cpp file and put the results in a file. Each typedef/struct/union should be on one line in the results file, which would make sorting much easier. For example:

        typedef int myType;
        struct myFirstStruct { char a; int b;...};
        union Part_Number_Serial_Number_Part_2_Response_Message_Type {struct{Message_Response_Head_Type Head; Part_Num_Serial_Num_Part_2_Report_Array Part_2_Report; Message_Tail_Type Tail;} Data; BYTE byData[140];}myUnion;
        struct { bool c; int d;...}mySecondStruct;

    My problem is, I do not know what to look for (the grammar of typedefs/structs/unions) using a regular expression. I cannot believe that nobody has done this before (I googled and have not found anything on it). Does anyone know the regular expressions for these? (Note: some are commented out using //, others /* */.) Or a tool to accomplish this?

    Edit: I am toying with the idea of autogenerating source code and/or dialogs for modifying messages that use the underlying typedef/struct/union. I was going to use the output to generate an XML file that could be used for this purpose. The source for these is in C/C++ and used in almost all my projects. These projects are usually NOT in C/C++. By using the XML version I would only need to update/add the typedef/struct/union in one place and all the projects would be able to autogenerate the source and/or dialogs.
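
    Regular expressions cannot parse the full C/C++ grammar (nested braces in particular defeat them), so any pattern is only a rough first pass that needs hand-checking; it also will not skip commented-out declarations. A Python sketch along those lines, collapsing each match onto one line:

        import re
        from pathlib import Path

        # Rough first pass only: handles typedefs and single-level
        # struct/union bodies; nested braces and comments are not handled.
        pattern = re.compile(
            r"typedef\s[^;]+;"                                   # typedef ... ;
            r"|(?:struct|union)\s*\w*\s*\{[^{}]*\}\s*\w*\s*;",   # one brace level
            re.DOTALL)

        with open("results.txt", "w") as out:
            for src in Path(".").rglob("*.[hc]*"):   # .h/.hpp/.c/.cpp (loose match)
                text = src.read_text(errors="replace")
                for m in pattern.finditer(text):
                    out.write(" ".join(m.group().split()) + "\n")  # one line each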

  • PHP: Extract direct sub directory from path string

    - by Nebs
    I need to extract the name of the direct sub-directory from a full path string. For example, say we have:

        $str = "dir1/dir2/dir3/dir4/filename.ext";
        $dir = "dir1/dir2";

    Then the name of the sub-directory in the $str path relative to $dir would be "dir3". Note that $dir never has '/' at the ends. So the function should be:

        $subdir = getsubdir($str,$dir);
        echo $subdir; // Outputs "dir3"

    If $dir="dir1" then the output would be "dir2". If $dir="dir1/dir2/dir3/dir4" then the output would be "" (empty). If $dir="" then the output would be "dir1". Etc. Currently this is what I have, and it works (as far as I've tested it). I'm just wondering if there's a simpler way, since I find I'm using a lot of string functions. Maybe there's some magic regexp to do this in one line? (I'm not too good with regexps, unfortunately.)

        function getsubdir($str,$dir) {
            // Remove the filename
            $str = dirname($str);
            // Remove the $dir
            if(!empty($dir)){
                $str = str_replace($dir,"",$str);
            }
            // Remove the leading '/' if there is one
            $si = stripos($str,"/");
            if($si == 0){
                $str = substr($str,1);
            }
            // Remove everything after the subdir (if there is anything)
            $lastpart = strchr($str,"/");
            $str = str_replace($lastpart,"",$str);
            return $str;
        }

    As you can see, it's a little hacky in order to handle some odd cases (no '/' in the input, empty input, etc). I hope all that made sense. Any help/suggestions are welcome.
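
    For comparison, the same logic as a short sketch in Python's pathlib: drop the filename, make the remainder relative to the base, and take the first component (the empty-base case is handled explicitly):

        from pathlib import PurePosixPath

        def getsubdir(path: str, base: str) -> str:
            parent = PurePosixPath(path).parent            # drop the filename
            rel = parent.relative_to(base) if base else parent
            return rel.parts[0] if rel.parts else ""       # "" when at base

        print(getsubdir("dir1/dir2/dir3/dir4/filename.ext", "dir1/dir2"))  # dir3
        print(getsubdir("dir1/dir2/dir3/dir4/filename.ext", "dir1"))       # dir2
        print(getsubdir("dir1/dir2/dir3/dir4/filename.ext", ""))           # dir1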

  • Data extract from website URL

    - by user2522395
    With the script below I am able to extract all the links of a particular website, but I need to know how I can extract data from those links, especially email addresses and phone numbers if they are there. Please help me work out how to modify the existing script to get that result, or if you have a full sample script, please provide it.

        Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click
            'url must be in this format: http://www.example.com/
            Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1)
            For Each url As String In aList
                lstUrls.Items.Add(url)
            Next
        End Sub

        Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList
            'aReturn is used to hold the list of urls
            Dim aReturn As New ArrayList
            'aStart is used to hold the new urls to be checked
            Dim aStart As ArrayList = GrabUrls(url)
            'temp array to hold data being passed to new arrays
            Dim aTemp As ArrayList
            'aNew is used to hold new urls before being passed to aStart
            Dim aNew As New ArrayList
            'add the first batch of urls
            aReturn.AddRange(aStart)
            'if depth is 0 then only return 1 page
            If depth < 1 Then Return aReturn
            'loops through the levels of urls
            For i = 1 To depth
                'grabs the urls from each url in aStart
                For Each tUrl As String In aStart
                    'grabs the urls and returns non-duplicates
                    aTemp = GrabUrls(tUrl, aReturn, aNew)
                    'add the urls to be checked to aNew
                    aNew.AddRange(aTemp)
                Next
                'swap urls to aStart to be checked
                aStart = aNew
                'add the urls to the main list
                aReturn.AddRange(aNew)
                'clear the temp array
                aNew = New ArrayList
            Next
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String) As ArrayList
            'will hold the urls to be returned
            Dim aReturn As New ArrayList
            Try
                'regex string used: thanks google
                Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>"
                'i used a webclient to get the source
                'web requests might be faster
                Dim wc As New WebClient
                'put the source into a string
                Dim strSource As String = wc.DownloadString(url)
                Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled)
                'parse the urls from the source
                Dim HrefMatch As Match = HrefRegex.Match(strSource)
                'used later to get the base domain without subdirectories or pages
                Dim BaseUrl As New Uri(url)
                'while there are urls
                While HrefMatch.Success = True
                    'loop through the matches
                    Dim sUrl As String = HrefMatch.Groups(1).Value
                    'if it's a page or sub directory with no base url (domain)
                    If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then
                        'add the domain plus the page
                        Dim tURi As New Uri(BaseUrl, sUrl)
                        sUrl = tURi.ToString
                    End If
                    'if it's not already in the list then add it
                    If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl)
                    'go to the next url
                    HrefMatch = HrefMatch.NextMatch
                End While
            Catch ex As Exception
                'catch ex here. I left it blank while debugging
            End Try
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList
            'overloads function to check duplicates in aNew and aReturn
            'temp url arraylist
            Dim tUrls As ArrayList = GrabUrls(url)
            'used to return the list
            Dim tReturn As New ArrayList
            'check each item to see if it exists, so not to grab the urls again
            For Each item As String In tUrls
                If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then
                    tReturn.Add(item)
                End If
            Next
            Return tReturn
        End Function
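
    The extra step the question asks for, pulling emails and phone numbers out of each fetched page, is just a second pair of regexes run over the page source. A hedged sketch of that step in Python (the patterns are deliberately simple and will both over- and under-match in the wild):

        import re
        import urllib.request

        def scrape_contacts(url):
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            # Simple patterns; real-world emails and phone formats vary widely.
            emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
            phones = set(re.findall(r"\+?\d[\d\s().-]{6,}\d", html))
            return emails, phones

        emails, phones = scrape_contacts("http://www.qatarliving.com")
        print(emails, phones)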

  • Extract files from a rar archive without having some of the previous parts

    - by Aria
    I have a big rar archive which is split into 700 MB parts. I only have parts 5 and 6, and there is a 40 MB file in there that I want to extract using WinRAR. I know the whole file is stored in part 5, because when I open part 5 that file gets listed (along with many other files). But I can't extract any of them, because it asks for the previous archive parts, which I'm sure it really doesn't need. Is there a way to do that?
