I'd like to extract the audio stream from an FLV stream in C#. I searched Google and found FLVExtract, but it only supports extracting from FLV files, not from streams.
How can I do this?
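For context on the container: an FLV file is a 9-byte header followed by a linear sequence of tags, and audio tags (type 8) simply carry the compressed audio payload, so the format can be demuxed from a stream just as easily as from a file by reading sequentially. Below is a minimal sketch of that loop, in Python rather than C# (the logic ports directly); it assumes the audio is MP3 (SoundFormat 2), whose payloads are raw MP3 frames. AAC would need ADTS headers added, and the file names are placeholders.

def extract_flv_audio(stream, out):
    # FLV header: "FLV" signature, version, flags, 4-byte data offset
    header = stream.read(9)
    if header[:3] != b"FLV":
        raise ValueError("not an FLV stream")
    stream.read(4)  # first PreviousTagSize field
    while True:
        tag_head = stream.read(11)  # type, size, timestamp, stream id
        if len(tag_head) < 11:
            break  # end of stream
        tag_type = tag_head[0]
        data_size = int.from_bytes(tag_head[1:4], "big")
        body = stream.read(data_size)
        if tag_type == 8:        # 8 = audio tag
            out.write(body[1:])  # skip the 1-byte audio tag header
        stream.read(4)           # trailing PreviousTagSize

with open("video.flv", "rb") as f, open("audio.mp3", "wb") as out:
    extract_flv_audio(f, out)

Because nothing here seeks backwards, the stream argument can be a network response stream instead of an open file.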
Hey,
I'm trying to create a C# application which allows me to extract just the audio from YouTube videos. I've come across sites that already do that, but I'm not sure how they actually work. What would be the best way to do this programmatically?
Thanks for any advice
http://mycloud.net/js/file.js#foo=bar
I'm trying to load a cross-domain JavaScript file and want to pass a variable along on the query string. I have seen the above '#' method used, but am unsure how to extract the 'foo' value from within file.js. Any clues how to handle this without server-side help?
Thanks.
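For what it's worth, the extraction half is plain string work on the fragment: split the script's src on '#' and parse the remainder as key=value pairs. Here is that parsing illustrated in Python (inside file.js itself you would locate your own script element, read its src, and do the same split); the URL is the one from the question:

from urllib.parse import urlparse, parse_qs

src = "http://mycloud.net/js/file.js#foo=bar"
fragment = urlparse(src).fragment  # "foo=bar"
params = parse_qs(fragment)        # {"foo": ["bar"]}
print(params["foo"][0])            # bar

The fragment is never sent to the server, which is exactly why this trick works without any server-side help.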
I would like to extract a pre-set zip file to a given folder WITHOUT an external library, and I would like to inform the user about the current percentage of extraction (with a simple progressBar and a percent label) and the currently extracted file. Is this possible somehow?
It is important not to use any other library.
(For updating all the labels and progressBar, I use a separate backgroundWorker)
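Whatever the language, the usual shape is: list the archive entries up front, total their uncompressed sizes, extract one entry at a time, and report progress after each. A minimal sketch of that loop using Python's standard zipfile module (file names and the callback are placeholders); in the C# version, the report callback is where you would call the BackgroundWorker's ReportProgress:

import zipfile

def extract_with_progress(zip_path, dest, report):
    # report(percent, filename) fires after each entry is written
    with zipfile.ZipFile(zip_path) as zf:
        infos = zf.infolist()
        total = sum(i.file_size for i in infos) or 1  # avoid divide-by-zero
        done = 0
        for info in infos:
            zf.extract(info, dest)
            done += info.file_size
            report(100 * done // total, info.filename)

extract_with_progress("archive.zip", "out", lambda pct, name: print(pct, "%", name))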
I am trying to extract Digg data for a user using this URL:
"http://services.digg.com/user/vamsivanka/diggs?count=25&appkey=34asd56asdf789as87df65s4fas6"
and the web response is throwing an error "The remote server returned an error: (403) Forbidden."
Please let me know what could be causing this.
public static XmlTextReader CreateWebRequest(string url)
{
    HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url);
    webRequest.UserAgent = ".NET Framework digg Test Client";
    webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials;
    webRequest.Accept = "text/xml";
    HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
    System.IO.Stream responseStream = webResponse.GetResponseStream();
    XmlTextReader reader = new XmlTextReader(responseStream);
    return reader;
}
I have a vector which contains a list of co-ordinates: x1,y1; x2,y2; ... xn,yn.
I am trying to extract each individual co-ordinate and save them to file as nicely delimited co-ordinate pairs that can be easily read. Even better would be to save them in a form I can plot from, e.g. in Excel (as columns of x and y values).
My vector's size is 31, and it was originally constructed as:
vector<vector<Point> > myvector(previousvector.size());
Thanks !
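The write-out itself is just two nested loops emitting one "x,y" pair per line, which Excel opens directly as two columns. A sketch in Python with placeholder data standing in for the vector<vector<Point> >; in C++ the same shape is two nested loops streaming pt.x << "," << pt.y << "\n" into an ofstream:

import csv

groups = [[(1, 2), (3, 4)], [(5, 6)]]  # placeholder nested co-ordinate data
with open("coords.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])  # header row for Excel
    for group in groups:         # outer vector
        for x, y in group:       # each Point
            writer.writerow([x, y])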
I have cricket player profiles saved as .xml files in a folder. Each file has these tags in it:
<playerid>547</playerid>
<majorteam>England</majorteam>
<playername>Don</playername>
The playerid is the same as in the .xml file name (each file is a different size, 1 KB to 5 KB). There are about 500 files. What I need is to extract the playername, majorteam, and playerid from all these files into a list. I will convert that list to XML later. If you know how I can do it directly to XML, I would be very thankful.
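Going directly to XML is straightforward if each profile is well-formed XML: glob the folder, pull the three tags from each file, and append a node to one combined document. A sketch with Python's standard library; the folder and output names are placeholders:

import glob
import xml.etree.ElementTree as ET

combined = ET.Element("players")
for path in glob.glob("profiles/*.xml"):  # placeholder folder name
    root = ET.parse(path).getroot()
    player = ET.SubElement(combined, "player")
    for tag in ("playerid", "playername", "majorteam"):
        ET.SubElement(player, tag).text = root.findtext(".//" + tag, "")

ET.ElementTree(combined).write("players.xml")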
My Friends,
I spent quite some time on this one but cannot yet figure out a better way to do it. I am coding in Python, by the way.
So, here is a line of text in a file I am working with, for example:
"ref|ZP_01631227.1| 3-dehydroquinate synthase [Nodularia spumigena CCY9414]..."
How can I extract the two strings "ZP_01631227.1" and "Nodularia spumigena CCY9414" from the line?
The pairs of "| |" and brackets are like markers so we know we want to get the strings in between the two...
I guess I could loop over all the characters in the line and do it the hard way, but that takes so much time... Wondering if there is a Python library or another smart way to do it nicely?
Thanks to all!
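There is no need to loop character by character; the standard re module handles exactly this. A minimal sketch that captures whatever sits between the first pair of pipes and the square brackets:

import re

line = "ref|ZP_01631227.1| 3-dehydroquinate synthase [Nodularia spumigena CCY9414]..."
match = re.search(r"\|([^|]+)\|.*?\[([^\]]+)\]", line)
if match:
    accession, organism = match.groups()
    print(accession)  # ZP_01631227.1
    print(organism)   # Nodularia spumigena CCY9414

(If these are BLAST/FASTA description lines, Biopython can also parse them, but for two fields a regex is plenty.)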
This deals with the (diverse) Flash viewers that let you zoom in on images on websites. I'm trying to extract the large, zoomed-in image rendered by the viewer. In many cases the images seem to be fetched dynamically by the viewer, or are created only for the part of the image you are zooming on at that point. Ideally, the approach here would be a programmatic one that could be called on an identified Flash element. I expect there is nothing universal, but I am interested in the top few approaches that will handle most cases.
I am not happy about the Korn shell's history file being in a binary format.
I like to "collect" some of my command lines, many of them actually, and for a long time. I'm talking about years. That doesn't seem easy in Korn because the history file is not plain text, so I can't edit it, and a lot of junk piles up in it. By "junk" I mean lines that I don't want to keep, like 'cat' or 'man'.
So I added these lines to my .profile:
fc -ln 1 9999 > ~/khistory.txt
source ~/loghistory.sh ~/khistory.txt
loghistory.sh contains a handful of sed and sort commands that get rid of a lot of the junk. But apparently running fc in the .profile file is forbidden: whenever I do, I can't log in; the shell exits right away with signal 11. So I removed that 'fc -l' line from my .profile and added it to the loghistory.sh script, but the shell still crashes.
I also tried this line in my .profile:
strings ~/.sh_history > ~/khistory.txt
source ~/loghistory.sh
That doesn't crash, but the output has an additional, seemingly random character at the beginning of many lines.
I can run 'fc -l' on the command line, but that's no good; I need to automate this. How can I extract my ksh history as plain text?
TIA
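On the stray characters: the history file seems to be plain command text with binary record markers mixed in, and strings(1) sometimes glues a printable byte of a record header onto the following line. A hedged workaround, under that assumption about the format, is to do the splitting yourself. A sketch in Python that could run from a login script in place of fc:

import re
from pathlib import Path

raw = (Path.home() / ".sh_history").read_bytes()
# treat any run of non-printable bytes as a record separator
chunks = re.split(rb"[^\x20-\x7e\t]+", raw)
lines = [c.decode("ascii", "replace") for c in chunks if len(c) > 1]
(Path.home() / "khistory.txt").write_text("\n".join(lines) + "\n")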
I have inherited a Visual Studio project that contains hundreds of files.
I would like to extract all the typedefs, structs and unions from each .h/.cpp file and put the results in a file.
Each typedef/struct/union should be on one line in the results file. This would make sorting much easier. For example:
typedef int myType;
struct myFirstStruct { char a; int b;...};
union Part_Number_Serial_Number_Part_2_Response_Message_Type {struct{Message_Response_Head_Type Head; Part_Num_Serial_Num_Part_2_Report_Array Part_2_Report; Message_Tail_Type Tail;} Data; BYTE byData[140];}myUnion;
struct { bool c; int d;...}mySecondStruct;
My problem is, I do not know what to look for (grammar of typedef/structs/unions) using a regular expression.
I cannot believe that nobody has done this before (I googled and have not found anything on this).
Does anyone know the regular expressions for these? (Note: some are commented out using //, others with /* */.)
Or a tool to accomplish this.
Edit:
I am toying with the idea of auto-generating source code and/or dialogs for modifying messages that use the underlying typedefs/structs/unions. I was going to use the output to generate an XML file for this purpose.
The source for these is C/C++ and used in almost all my projects. These projects are usually NOT in C/C++. By using the XML version I would only need to update/add each typedef/struct/union in one place, and all the projects would be able to auto-generate the source and/or dialogs.
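A note on why the search for a regex came up empty: a regular expression alone cannot match the nested braces in a struct body. What does work is a small scanner: find each typedef/struct/union keyword, count braces forward to the ';' that closes the declaration at depth zero, then collapse the span onto one line. A sketch in Python; it assumes // and /* */ comments have already been stripped, it will also pick up variable declarations like "struct Foo f;", and a real parser (libclang, for instance) is the robust route:

import re

def extract_decls(text):
    last_end = 0
    for m in re.finditer(r"\b(typedef|struct|union)\b", text):
        if m.start() < last_end:
            continue  # keyword inside a declaration already emitted
        depth = 0
        for i in range(m.start(), len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
            elif text[i] == ";" and depth == 0:
                # collapse the whole declaration onto one line
                yield re.sub(r"\s+", " ", text[m.start():i + 1])
                last_end = i + 1
                break

for decl in extract_decls(open("messages.h").read()):  # placeholder file name
    print(decl)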
I need to extract the name of the direct sub directory from a full path string.
For example, say we have:
$str = "dir1/dir2/dir3/dir4/filename.ext";
$dir = "dir1/dir2";
Then the name of the sub-directory in the $str path relative to $dir would be "dir3". Note that $dir never has '/' at the ends.
So the function should be:
$subdir = getsubdir($str,$dir);
echo $subdir; // Outputs "dir3"
If $dir="dir1" then the output would be "dir2". If $dir="dir1/dir2/dir3/dir4" then the output would be "" (empty). If $dir="" then the output would be "dir1". Etc..
Currently this is what I have, and it works (as far as I've tested it). I'm just wondering if there's a simpler way since I find I'm using a lot of string functions. Maybe there's some magic regexp to do this in one line? (I'm not too good with regexp unfortunately).
function getsubdir($str,$dir) {
    // Remove the filename
    $str = dirname($str);
    // Remove the $dir prefix
    if(!empty($dir)){
        $str = str_replace($dir,"",$str);
    }
    // Remove the leading '/' if there is one
    // (=== is needed: stripos() returns false, not 0, when no '/' exists)
    $si = stripos($str,"/");
    if($si === 0){
        $str = substr($str,1);
    }
    // Remove everything after the subdir (if there is anything)
    $lastpart = strstr($str,"/");
    $str = str_replace($lastpart,"",$str);
    return $str;
}
As you can see, it's a little hacky in order to handle some odd cases (no '/' in input, empty input, etc). I hope all that made sense. Any help/suggestions are welcome.
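For comparison, here is the same job expressed with path components instead of string surgery, sketched in Python (PHP's explode('/', ...) supports the identical approach): split both paths, check that $dir's components are a prefix of the path's, and return the next component if there is one. This also avoids the str_replace pitfall of $dir matching somewhere other than the front of the path.

from pathlib import PurePosixPath

def get_subdir(path, base):
    parts = PurePosixPath(path).parent.parts             # drop the filename
    base_parts = PurePosixPath(base).parts if base else ()
    if parts[:len(base_parts)] != base_parts:
        return ""                                        # base is not a prefix
    rest = parts[len(base_parts):]
    return rest[0] if rest else ""

print(get_subdir("dir1/dir2/dir3/dir4/filename.ext", "dir1/dir2"))  # dir3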
With the script below I am able to extract all the links of a particular website.
But I need to know how I can extract data from the pages behind those links, especially things like email addresses and phone numbers if they are there.
Please help me work out how to modify the existing script to get that result, or if you have a full sample script, please provide it.
Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click
    'url must be in this format: http://www.example.com/
    Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1)
    For Each url As String In aList
        lstUrls.Items.Add(url)
    Next
End Sub
Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList
    'aReturn is used to hold the list of urls
    Dim aReturn As New ArrayList
    'aStart is used to hold the new urls to be checked
    Dim aStart As ArrayList = GrabUrls(url)
    'temp array to hold data being passed to new arrays
    Dim aTemp As ArrayList
    'aNew is used to hold new urls before being passed to aStart
    Dim aNew As New ArrayList
    'add the first batch of urls
    aReturn.AddRange(aStart)
    'if depth is 0 then only return 1 page
    If depth < 1 Then Return aReturn
    'loops through the levels of urls
    For i = 1 To depth
        'grabs the urls from each url in aStart
        For Each tUrl As String In aStart
            'grabs the urls and returns non-duplicates
            aTemp = GrabUrls(tUrl, aReturn, aNew)
            'add the urls to be checked to aNew
            aNew.AddRange(aTemp)
        Next
        'swap urls to aStart to be checked
        aStart = aNew
        'add the urls to the main list
        aReturn.AddRange(aNew)
        'clear the temp array
        aNew = New ArrayList
    Next
    Return aReturn
End Function
Private Overloads Function GrabUrls(ByVal url As String) As ArrayList
    'will hold the urls to be returned
    Dim aReturn As New ArrayList
    Try
        'regex string used: thanks google
        Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>"
        'i used a webclient to get the source
        'web requests might be faster
        Dim wc As New WebClient
        'put the source into a string
        Dim strSource As String = wc.DownloadString(url)
        Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled)
        'parse the urls from the source
        Dim HrefMatch As Match = HrefRegex.Match(strSource)
        'used later to get the base domain without subdirectories or pages
        Dim BaseUrl As New Uri(url)
        'while there are urls
        While HrefMatch.Success = True
            'loop through the matches
            Dim sUrl As String = HrefMatch.Groups(1).Value
            'if it's a page or sub directory with no base url (domain)
            If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then
                'add the domain plus the page
                Dim tURi As New Uri(BaseUrl, sUrl)
                sUrl = tURi.ToString
            End If
            'if it's not already in the list then add it
            If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl)
            'go to the next url
            HrefMatch = HrefMatch.NextMatch
        End While
    Catch ex As Exception
        'catch ex here. I left it blank while debugging
    End Try
    Return aReturn
End Function
Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList
    'overloads function to check duplicates in aNew and aReturn
    'temp url arraylist
    Dim tUrls As ArrayList = GrabUrls(url)
    'used to return the list
    Dim tReturn As New ArrayList
    'check each item to see if it exists, so not to grab the urls again
    For Each item As String In tUrls
        If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then
            tReturn.Add(item)
        End If
    Next
    Return tReturn
End Function
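Once the spider has its URL list, pulling emails and phone numbers is a second pass with two more regexes over each page's source. A rough sketch in Python (the same patterns drop straight into a VB.NET Regex); phone formats vary wildly by country, so treat that pattern as a loose starting point to tighten for your data:

import re
import urllib.request

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,}\d")  # deliberately loose

def scrape_contacts(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return sorted(set(EMAIL_RE.findall(html))), sorted(set(PHONE_RE.findall(html)))

emails, phones = scrape_contacts("http://www.qatarliving.com")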
I have a big RAR archive which is split into 700 MB parts. I only have parts 5 and 6, and there is a 40 MB file in there that I want to extract using WinRAR. I know the whole file is stored in part 5, because when I open part 5 that file gets listed (along with many other files). But I can't extract any of them, because it asks for the previous archive parts, which I'm sure it really doesn't need.
Is there a way to do that?
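One thing that may be worth trying, if the command-line unrar is available: its -kb switch keeps broken extracted files, which sometimes salvages data from an incomplete volume set where the GUI refuses outright. Whether it helps depends on the file really being wholly inside part 5. A sketch driving it from Python; archive and output paths are placeholders:

import subprocess

# x = extract with paths, -kb = keep broken extracted files
subprocess.run(["unrar", "x", "-kb", "archive.part05.rar", "out/"], check=False)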
I have a zip of a pretty large website. I FTP the zip over to the server, then unzip it and extract it to the website folder, but this is very slow.
Is there any way to extract and copy only the files that are newer (compared to the files already there)?
I've had a look around for the answer to this, but I only seem to be able to find software that does it for you. Does anybody know how to go about doing this in python?
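A sketch with the standard zipfile module: compare each entry's stored timestamp against the copy already on disk and extract only when the archive's copy is newer. Zip timestamps are local time with 2-second resolution, hence the slack; the paths are placeholders.

import os
import time
import zipfile

def extract_newer(zip_path, dest):
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            target = os.path.join(dest, info.filename)
            if os.path.exists(target):
                disk_mtime = os.path.getmtime(target)
                zip_mtime = time.mktime(info.date_time + (0, 0, -1))
                if zip_mtime <= disk_mtime + 2:
                    continue  # on-disk copy is as new or newer
            zf.extract(info, dest)

extract_newer("site.zip", "/var/www/site")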
Hi Mark,
I was looking for the way to use c# for images extraction from the pdf files and found your post regarding GhostScript.dll.
Could you please send me the C# wrapper you are referring to?
Thanks,
Eugene
[email protected]
Hi Folks,
I am a beginner at scripting and am vigorously learning Tcl for embedded systems development.
I have to search for files with only the .txt extension in the current directory, count the number of occurrences of each different "Interface # nnnn" string in those .txt files, where nnnn is a hexadecimal number of four or, at most, 32 digits, and output a table of interface numbers against occurrence counts. I am facing implementation issues while writing the script, i.e. I am unable to implement a data structure like a linked list or a two-dimensional array.
I am rewriting the script using a multi-dimensional array (passing values in and out of a procedure) in Tcl to scan through every .txt file and search for the string/regular expression 'Interface # ' to count and display the number of occurrences. If someone could help me complete this part, it would be much appreciated.
Search for files with only the .txt extension and obtain the size of each file.
Here is my piece of code for finding the .txt files in the present directory:
set files [glob *.txt]
if { [llength $files] > 0 } {
    puts "Files:"
    foreach f [lsort $files] {
        puts " [file size $f] - $f"
    }
} else {
    puts "(no files)"
}
I reckon these are all the logical steps needed to complete it:
i) Once the .txt files are found, open all of them in read-only mode
ii) Create an array or list, via a procedure (proc), mapping each interface number to NULL and each interface count to zero
iii) Scan through each .txt file and search for the string or regular expression "Interface # "
iv) When a match is found in a .txt file, check the interface number and increment the count for the corresponding entry; otherwise add a new element to the interface-number list
v) If there are no files, return to the first directory
My output should look like this:

Interface  Frequency
123f       3
1232       4
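The data structure wanted here is not a linked list but a map from interface number to count; in Tcl that is an array or a dict (dict incr counts $iface does the increment). To make the structure plain, here is the whole pipeline sketched in Python; the regex assumes the hex id is 1 to 32 digits, per the description above:

import collections
import glob
import re

pattern = re.compile(r"Interface # ([0-9A-Fa-f]{1,32})")
counts = collections.Counter()
for path in glob.glob("*.txt"):
    with open(path) as f:
        counts.update(pattern.findall(f.read()))

print("Interface  Frequency")
for iface, n in sorted(counts.items()):
    print(f"{iface:<10} {n}")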
I've been searching around for a few weeks now for a tool that is either fully built or a direction for something I could build for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database, let them pick the ones they'd like to view and set parameters, then export the results to an Excel file.
Right now we're doing it purely with SQL statements but we have hundreds of objects so as you might imagine, those statements are really complex and prone to errors. It would be great if there was a tool available to do this or if someone had an idea of an easy way to organize this. Any help would be greatly appreciated.
We've looked at BI tools like QlikView and Tableau but that is probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server) but I'd prefer something that was more database-agnostic and lived on a web server instead of on the database.
So I am trying to figure out how to post to a website that uses a drop-down menu which holds its values like this (based on the page source):
<td valign="top" align="right"><span class="emphasis">Select Item Option : </span></td>
<td align="left">
<span class="notranslate">
<select name="ItemOption1">
<option value="">Select Item Option</option>
<option value="321_cba">Item Option 1</option>
<option value="123_abcd">Item Option 2</option>
...
There are two of these drop-down menus on top of each other. I want to be able to select an item from drop-down menu 1 and drop-down menu 2 and then submit the page. Based on the code, it submits the information using the following markup:
<td colspan="2" align="center">
<input type="submit" value="View Result" onclick="return check()">
</td>
</tr>
</table>
<input type="hidden" name="ItemOption1" value="">
<input type="hidden" name="ItemOption2" value="">
I have no idea how to select the items in the drop-down menus, submit the page, and capture the information on the resulting page into a text file. Can someone please help me with this?
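Mechanically, a browser submit is just a POST of name=value pairs, one per form field, so the page can be scripted without touching the drop-downs at all. A sketch in Python; the action URL is not visible in the snippet, so the one below is hypothetical, and since the page also has hidden inputs with the same names, it is worth watching one real submission (browser network tab) to capture the exact field list that check() produces:

import urllib.parse
import urllib.request

url = "http://example.com/viewresult"  # hypothetical form action URL
data = urllib.parse.urlencode({
    "ItemOption1": "321_cba",   # the value= of the option you want selected
    "ItemOption2": "123_abcd",
}).encode()

with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
    html = resp.read().decode("utf-8", "replace")

with open("result.txt", "w", encoding="utf-8") as f:
    f.write(html)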
Extract data from a specific range of cells (always the same cells) in multiple worksheets in multiple files. 1 file = 1 day. I have 6 technicians each day of the week, Monday through Friday. So, 5 files with 6 worksheets each. I have entered specific info in specific cells of every worksheet. The range is constant (the same address in EVERY worksheet in every file). So, I need a formula to extract and calculate the data in the given range and dump it into another spreadsheet. I can forward an example file if it will help anyone answer my question, and more explanation is available upon request. JUST PLEASE SOMEBODY HELP ME!!!!! Thank you all in advance. Regards, Michele
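Excel can reference other workbooks directly (summary-sheet formulas of the form ='[Monday.xlsx]Tech1'!B2, filled across files and sheets), but with 30 worksheets a script may be easier to maintain. A sketch using the third-party openpyxl package, which reads .xlsx files; the folder, the range address, and the sum-everything behaviour are placeholders for the real calculation:

import glob
from openpyxl import load_workbook

total = 0.0
for path in glob.glob("week/*.xlsx"):         # one workbook per day
    wb = load_workbook(path, data_only=True)  # data_only = read cached values
    for ws in wb.worksheets:                  # one sheet per technician
        for row in ws["B2:D7"]:               # the constant range (placeholder)
            for cell in row:
                if isinstance(cell.value, (int, float)):
                    total += cell.value
print(total)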
layout.bin
setup.lid
_sys1.cab
_user1.cab
DATA.TAG
data1.cab
SETUP.INI
setup.ins
_INST32I.EX_
SETUP.EXE
_ISDEL.EXE
_SETUP.DLL
lang.dat
os.dat
I want to extract an InstallShield 5 install package, and above is the list of files in the "data1" folder. However, there are no *.hdr files, so I can't extract the CAB files using the tools found on the Internet, even though the package still installs without any error. Can anybody give me a suggestion for this, please?