Search Results

Search found 3241 results on 130 pages for 'extract'.

Page 86/130 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • Panel data with binary dependent variable in R

    - by Abiel
    Is it possible to do regressions in R using a panel data set with a binary dependent variable? I am familiar with using glm for logit and probit and plm for panel data, but am not sure how to combine the two. Are there any existing code examples? Thank you. EDIT: It would also be helpful if I could figure out how to extract the matrix that plm() is using when it does a regression. For instance, you could use plm to do fixed effects, or you could create a matrix with the appropriate dummy variables and then run that through glm(). In a case like this, however, it is annoying to generate the dummies yourself and it would be easier to have plm do it for you. Abiel
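
    A hedged R sketch (not from Abiel's post; it assumes a data frame pdata with a binary column y, regressors x1 and x2, and a unit identifier id): factor() lets glm() build the fixed-effect dummies automatically, and model.matrix() on a fitted plm object exposes the design matrix plm works with.

      # Hedged sketch: fixed-effects logit via automatically generated dummies.
      library(plm)
      fe_logit <- glm(y ~ x1 + x2 + factor(id), data = pdata,
                      family = binomial(link = "logit"))
      # Inspect the (within-transformed) matrix plm itself uses:
      fe_lpm <- plm(y ~ x1 + x2, data = pdata.frame(pdata, index = "id"),
                    model = "within")
      head(model.matrix(fe_lpm))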

    Read the article

  • Help! Obj-C/iPhone programming: extracting a string from HTML text and reading it line by line

    - by royden
    Hi, I have this HTML text response from a particular website: <tr><td valign="top"><img src="/icons/image2.gif" alt="[IMG]"></td><td><a href="crsdsdfs2221.jpg">crash-2221.jpg</a></td><td align="right">14-Jun-2010 14:29 Notice that for every line there is this href=".__", which is an image file with a random name and random format. I would like to extract the string between the quotation marks so that I can append it to a URL path and download the image. I've been looking through this documentation from Apple on string programming: http://developer.apple.com/mac/library/documentation/cocoa/conceptual/strings/Articles/SearchingStrings.html#//apple_ref/doc/uid/20000149-CJBBGBAI but couldn't find anything that fits the bill. Also, after reading it, what code can I use to ensure that I will be reading the next line the next time my function is called (because I want to download the next picture)? Hope some kind soul can help me out, thanks!
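
    A hedged Objective-C sketch (not from the original post; html is assumed to hold the downloaded page source): NSScanner from the Foundation string APIs can collect every href value into an array, and keeping an index into that array answers the "next line on the next call" part.

      NSScanner *scanner = [NSScanner scannerWithString:html];
      NSMutableArray *fileNames = [NSMutableArray array];
      while (![scanner isAtEnd]) {
          [scanner scanUpToString:@"href=\"" intoString:NULL];
          if (![scanner scanString:@"href=\"" intoString:NULL]) break;   // no more links
          NSString *name = nil;
          if ([scanner scanUpToString:@"\"" intoString:&name] && name.length > 0) {
              [fileNames addObject:name];   // e.g. crsdsdfs2221.jpg, ready to append to a base URL
          }
      }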

    Read the article

  • Are SqlCipher open cursors a security concern?

    - by user1178479
    I'm using SqlCipher with content providers. Right now, when I want to lock the app I just clear out the cached password. However, the app can continue to work with any open cursors. This means that re-opening the app grants access to the sensitive data. I fix this issue on the surface by redirecting to a login screen if the app doesn't have a password. However, are there any security issues with these open cursors, or should I just continue to block UI access and not worry? SqlCipher's docs say that it reads/writes encrypted pages on the fly, as opposed to decrypting the entire DB, which makes me think that open cursors are still secure. The main concern here is that someone loses their phone and then a knowledgeable individual can use these open cursors to extract sensitive data.

    Read the article

  • Select *, max(date) works in phpMyAdmin but not in my code

    - by kdobrev
    OK, my statement executes well in phpMyAdmin, but not how I expect it in my PHP page. This is my statement: SELECT egid , group_name , limit , MAX( date ) FROM employee_groups GROUP BY egid ORDER BY egid DESC ; This is my table: CREATE TABLE employee_groups ( egid int(10) unsigned NOT NULL, date date NOT NULL, group_name varchar(50) NOT NULL, limit smallint(5) unsigned NOT NULL, PRIMARY KEY (egid,date) ) ENGINE=MyISAM DEFAULT CHARSET=cp1251; I want to extract the most recent list of groups, e.g. if a group has been changed I want to have only the last change. And I need it as a list (all groups).
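
    A hedged SQL sketch (not from the original post): the usual "latest row per group" pattern joins the table to the per-group maximum date; the reserved words limit and date are backquoted to be safe outside phpMyAdmin.

      SELECT t.egid, t.group_name, t.`limit`, t.`date`
      FROM employee_groups AS t
      JOIN ( SELECT egid, MAX(`date`) AS max_date
             FROM employee_groups
             GROUP BY egid ) AS latest
        ON latest.egid = t.egid AND latest.max_date = t.`date`
      ORDER BY t.egid DESC;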

    Read the article

  • How to load an image from an NSMutableArray

    - by pbcoder
    I want to load the image URL from my NSMutableArray. Here is my code: id path = (NSString *)[[stories objectAtIndex: storyIndex] objectForKey: @"icon"]; NSURL *url = [NSURL URLWithString:path]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImage *img = [[UIImage alloc] initWithData:data cache:NO]; If I use: id path = @"http://www.xzy.de/icon.png"; it's all right, but not if I want to extract the image URL from my array. Can anyone help me? Thanks!
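
    A hedged Objective-C sketch (not from the original post): if the string pulled from the array carries stray whitespace or a newline, URLWithString: returns nil, so trimming the value before building the URL is one thing worth trying.

      NSString *path = [[stories objectAtIndex:storyIndex] objectForKey:@"icon"];
      path = [path stringByTrimmingCharactersInSet:
                   [NSCharacterSet whitespaceAndNewlineCharacterSet]];
      NSURL *url = [NSURL URLWithString:path];
      NSData *data = [NSData dataWithContentsOfURL:url];
      UIImage *img = (data != nil) ? [[UIImage alloc] initWithData:data] : nil;   // plain initWithData: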

    Read the article

  • Get file paths from an svn diff file with PHP and C

    - by coderex
    Hi, I have a file containing an svn diff and I wish to extract the filenames from the diff. How do I write a parser for that? Index: libs/constant.php =================================================================== --- libs/constant.php (revision 1243) +++ libs/constant.php (revision 1244) @@ -26,5 +26,5 @@ // changesss - +// test 2 ?> \ No newline at end of file Index: libs/Tools.php =================================================================== --- libs/Tools.php (revision 1243) +++ libs/Tools.php (revision 1244) @@ -34,5 +34,5 @@ // another file an change - +// test ?> \ No newline at end of file Sample output: libs/constant.php libs/Tools.php How would I write the parser in PHP and in C?
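
    A hedged PHP sketch (not from the original post; the input file name is made up): every changed file is announced by a line that starts with "Index: ", so a single multi-line regular expression over the diff text is enough.

      <?php
      $diff = file_get_contents('changes.diff');           // hypothetical input file
      preg_match_all('/^Index: (.+)$/m', $diff, $matches);
      foreach ($matches[1] as $path) {
          echo trim($path), PHP_EOL;                       // libs/constant.php, libs/Tools.php
      }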

    Read the article

  • Doxygen C++ comment string parser in Python?

    - by Sebastian
    Does anybody know of a Python module to parse a Doxygen-style C++ comment string? I mean a string like this (simple example): /** * A constructor. * A more elaborate description of the constructor. * @param param1 test1 * @param param2 test2 */ and I would like to extract the brief description, the long description, the parameters, the return value etc. I'm currently doing this using string methods and regular expressions but my solution is not very robust. Alternatively, can anybody recommend an easy-to-use Python parser library that I can set up quickly? Thanks in advance
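
    A hedged Python sketch (not from the original post): a small regex-based splitter for the brief / long description / @param / @return layout shown above. It makes the same robustness trade-off the poster mentions; a grammar-based parser (e.g. built with pyparsing) would be sturdier.

      import re

      def parse_doxygen(comment):
          # Strip the /** ... */ frame and the leading '*' of every line.
          body = re.sub(r'^\s*/\*\*|\*/\s*$', '', comment.strip())
          lines = [re.sub(r'^\s*\*\s?', '', ln) for ln in body.splitlines()]
          text = '\n'.join(lines).strip()

          params = dict(re.findall(r'@param\s+(\w+)\s+(.*)', text))
          ret = re.search(r'@return\s+(.*)', text)
          prose = re.split(r'@\w+', text)[0].strip().splitlines()
          return {'brief': prose[0].strip() if prose else '',
                  'detail': ' '.join(p.strip() for p in prose[1:]).strip(),
                  'params': params,
                  'return': ret.group(1).strip() if ret else ''}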

    Read the article

  • Java URL Connection Time Out

    - by webren
    Hello, I am attempting to connect to a website whose HTML contents I'd like to extract. My application never connects to the site; it only times out. Here is my code: URL url = new URL("www.website.com"); URLConnection connection = url.openConnection(); connection.setConnectTimeout(2000); connection.setReadTimeout(2000); BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line; while ((line = reader.readLine()) != null) { // do stuff with line } reader.close(); Any ideas would be greatly appreciated. Thanks!
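
    A hedged Java sketch (not the poster's final code): if the real code also passes a string without a protocol, new URL("www.website.com") fails before the timeouts even matter, so the address here carries an explicit http:// prefix (the host is still a placeholder) and the reader is closed in a finally block.

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.URL;
      import java.net.URLConnection;

      public class FetchPage {
          public static void main(String[] args) throws Exception {
              URL url = new URL("http://www.website.com/");      // placeholder address
              URLConnection connection = url.openConnection();
              connection.setConnectTimeout(2000);
              connection.setReadTimeout(2000);
              BufferedReader reader = new BufferedReader(
                      new InputStreamReader(connection.getInputStream()));
              try {
                  String line;
                  while ((line = reader.readLine()) != null) {
                      System.out.println(line);                  // do stuff with line
                  }
              } finally {
                  reader.close();
              }
          }
      }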

    Read the article

  • process incoming mail and parse out original text

    - by florin
    I have inherited a Rails forum (Rails 2.3.2, I think) that alerts people to new posts/replies for the forums or threads they are watching. To make it easier for people to answer threads I would like to enable reply-to-post, similar to Basecamp and a bunch of other forums and tools out there. I would add a separator text (like "----add your reply above this line-----") to the original email. I need to: - process incoming email - extract the new text (above the separator line) - ideally strip out text like "on ... [email protected] wrote:" that is automatically added by some mail clients - identify the thread this email is referring to (either using the incoming address or the subject line) - identify the sender - post the content as a new reply. Any suggestions on how to get started? Any good plugins for this? I've seen many mentioning Mailman and Fetcher; are there any others, and which one is best for this little feature? Thanks!
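
    A hedged Ruby sketch (not from the original post; the ReplyMailer name, the token lookup and the model calls are made up): in Rails 2.3 an ActionMailer subclass can receive the raw message, e.g. piped in from a mail alias via script/runner 'ReplyMailer.receive(STDIN.read)' or fetched by a Mailman/Fetcher poller.

      class ReplyMailer < ActionMailer::Base
        SEPARATOR = '----add your reply above this line-----'

        def receive(email)
          reply = email.body.to_s.split(SEPARATOR).first.to_s
          # Drop a trailing "On ... wrote:" attribution block some clients add.
          reply = reply.sub(/^On .*wrote:.*\z/m, '').strip

          thread = ForumThread.find_by_reply_token(email.to.first[/\+([^@]+)@/, 1])  # hypothetical lookup
          sender = User.find_by_email(email.from.first)
          thread.posts.create!(:user => sender, :body => reply) if thread && sender
        end
      end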

    Read the article

  • trying to prune down a list of files

    - by romunov
    I have a list of files and I'm trying to extract all layer1_*.grd files. Is there a way of doing this in one grep expression? lof <- c("layer1_1.grd", "layer1_1.gri", "layer1_2.grd", "layer1_2.gri", "layer1_3.grd", "layer1_3.gri", "layer1_4.grd", "layer1_4.gri", "layer1_5.grd", "layer1_5.gri", "layer2_1.grd", "layer2_1.gri", "layer2_2.grd", "layer2_2.gri", "layer2_3.grd", "layer2_3.gri", "layer2_4.grd", "layer2_4.gri", "layer2_5.grd", "layer2_5.gri", "layer3_1.grd", "layer3_1.gri", "layer3_2.grd", "layer3_2.gri", "layer3_3.grd", "layer3_3.gri", "layer3_4.grd", "layer3_4.gri", "layer3_5.grd", "layer3_5.gri", "layer4_1.grd", "layer4_1.gri", "layer4_2.grd", "layer4_2.gri", "layer4_3.grd", "layer4_3.gri", "layer4_4.grd", "layer4_4.gri", "layer4_5.grd", "layer4_5.gri") I tried doing this in two steps: list.of.files <- list.files(pattern = c("1_")) list.of.files <- list.of.files[grep(".grd", list.of.files)] Can someone enlighten me on how to do this with grep in one step? I naively tried passing list() and c() to grep but, as you can imagine, it doesn't work. list.of.files <- list.files() list.of.files <- list.of.files[grep(list("1_", ".grd"), list.of.files)]
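
    A hedged R sketch (not from the original post): both conditions fit in one regular expression, "layer1_" followed by anything and a literal ".grd" at the end of the name.

      grd1 <- grep("^layer1_.*\\.grd$", lof, value = TRUE)
      # The same pattern works directly against the directory listing:
      list.of.files <- list.files(pattern = "1_.*\\.grd$")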

    Read the article

  • How to read data between two HTML tags as text

    - by vijay.shad
    Hi, I am working on a project which needs to extract text from a predefined div tag. My requirement is to send the content of the target div in an email body. I have to use JavaScript or PHP for this task. The process: when a given link is clicked, a JavaScript function will trigger and read the target div. The content of the div will then be submitted to the server in a dynamically created form. What options do I have to get this task done? Thanks.
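
    A hedged JavaScript sketch (not from the original post; the element ids and the server-side script name are made up): read the div's text when the link is clicked and post it as the mail body through a dynamically created form.

      document.getElementById('send-link').onclick = function () {
          var div  = document.getElementById('target-div');
          var body = div.textContent || div.innerText;       // the text to mail

          var form = document.createElement('form');
          form.method = 'post';
          form.action = '/send_mail.php';                    // hypothetical PHP endpoint

          var field = document.createElement('input');
          field.type = 'hidden';
          field.name = 'mail_body';
          field.value = body;

          form.appendChild(field);
          document.body.appendChild(form);
          form.submit();
          return false;                                      // cancel the link's default navigation
      };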

    Read the article

  • Validate XML instance document against WSDL

    - by Ice09
    Hi, I can easily validate an XML document against an XML Schema, e.g. with XMLSpy or programmatically. Is it possible to do this with a WSDL file? It does not seem possible with XMLSpy or any other XML tool I know. For me the only possibility right now is to do it programmatically, e.g. by generating Java code from the WSDL and starting a request, which is then marshalled correctly. If there is no tool / easy programmatic approach, is there a tool which can extract the XML Schema from the WSDL? Best
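
    A hedged Java sketch (not from the original post; the WSDL file name is made up): the inline schema elements under wsdl:types can be copied out to .xsd files and then used for ordinary instance validation. Namespace declarations inherited from the wsdl:definitions element may still need to be added by hand.

      import java.io.File;
      import javax.xml.parsers.DocumentBuilderFactory;
      import javax.xml.transform.Transformer;
      import javax.xml.transform.TransformerFactory;
      import javax.xml.transform.dom.DOMSource;
      import javax.xml.transform.stream.StreamResult;
      import org.w3c.dom.Document;
      import org.w3c.dom.NodeList;

      public class ExtractWsdlSchemas {
          public static void main(String[] args) throws Exception {
              DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
              dbf.setNamespaceAware(true);
              Document wsdl = dbf.newDocumentBuilder().parse(new File("service.wsdl"));
              NodeList schemas = wsdl.getElementsByTagNameNS(
                      "http://www.w3.org/2001/XMLSchema", "schema");
              Transformer copy = TransformerFactory.newInstance().newTransformer();
              for (int i = 0; i < schemas.getLength(); i++) {
                  copy.transform(new DOMSource(schemas.item(i)),
                                 new StreamResult(new File("schema" + i + ".xsd")));
              }
          }
      }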

    Read the article

  • How can I obfuscate my Perl script to make it difficult to reverse engineer?

    - by codaddict
    I've developed a Perl script that contains confidential business logic. I have to give this script to another Perl coder to test it in his environment. He will definitely try to extract the logic from my program. So I want to make my script impossible, or at least very very hard, to understand. I've tried a few sites like liraz, but they did not work for me. The encoded Perl script does not work the same as the original one.

    Read the article

  • How to check with PHP whether a SQL database already has a value

    - by Dan Horvat
    I've tried to find the answer to this question but none of the answers fit. I have two databases: one has 15,000,000 entries, and I want to extract the necessary data and store it in a much smaller database with around 33,000 entries. Both databases are open at the same time, or at least they should be. While having the big database open and extracting the entries from it, is it possible to check whether the value already exists in the smaller database? I just need some generic way to check that.
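
    A hedged PHP sketch (not from the original post; connection details, table and column names are made up): one prepared statement against the small database can be re-executed for every candidate value while the big database is being read.

      <?php
      $big   = new PDO('mysql:host=localhost;dbname=big_db',   'user', 'pass');      // hypothetical credentials
      $small = new PDO('mysql:host=localhost;dbname=small_db', 'user', 'pass');

      $exists = $small->prepare('SELECT 1 FROM entries WHERE some_key = ? LIMIT 1'); // hypothetical table/column

      foreach ($big->query('SELECT some_key, payload FROM source_table') as $row) {
          $exists->execute(array($row['some_key']));
          if ($exists->fetchColumn() === false) {
              // Value is not in the small database yet; insert it here.
          }
      }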

    Read the article

  • How to find specific row in MySQL query result?

    - by Šime Vidas
    So I do this to retrieve my entire table: $result = mysql_query( 'SELECT * FROM mytable' ); Then, in another part of my PHP page, I do another query (for a specific row): $result2 = mysql_query( 'SELECT * FROM mytable WHERE id = ' . $id ); $row = mysql_fetch_array( $result2 ); So, I'm performing two queries. However, I don't really have to do that, do I? I mean, the row that I'm retrieving in my second query is already present in $result (the result of my first query), since it contains my entire table. Therefore, instead of doing the second query, I would like to extract the desired row from $result directly (while keeping $result itself intact). How would I do that? OK, so this is how I've implemented it: function getRowById ( $result, $id ) { while ( $row = mysql_fetch_array( $result ) ) { if ( $row['id'] == $id ) { mysql_data_seek( $result, 0 ); return $row; } } }
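
    A hedged alternative sketch (not from the original post): indexing the first result set by id once turns later lookups into plain array accesses, and a final mysql_data_seek() leaves $result rewound for any other code that still loops over it.

      $rowsById = array();
      while ( $row = mysql_fetch_array( $result ) ) {
          $rowsById[ $row['id'] ] = $row;
      }
      mysql_data_seek( $result, 0 );                         // rewind so $result stays reusable
      $row = isset( $rowsById[$id] ) ? $rowsById[$id] : null;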

    Read the article

  • Remote stream multiple files in SOLR

    - by Mark
    I want to use Solr's remote-streaming facility to extract and index the content of files. This works fine if I pass stream.file=xxx as a parameter in the HTTP GET request. However, I have a lot of these, and want to batch them up (i.e. not have to issue a GET per file). Is there a way I can do this in Solr? E.g. I'd like to be able to POST some XML like this: <add> <doc stream_file="filename"> <field name="id">123</field> </doc> <doc>...

    Read the article

  • MySQL Join issue

    - by mouthpiec
    Hi, I have the following tables: --table sportactivity-- sport_activity_id, home_team_fk, away_team_fk, competition_id_fk, date, time (tuple example) - 1, 33, 41, 5, 2010-04-14, 05:40:00 --table teams-- team_id, team_name (tuple example) - 1, Algeria Now I have the following SQL statement that I use to extract Team A vs Team B: SELECT sport_activity_id, T1.team_name AS TeamA, T2.team_name AS TeamB, DATE_FORMAT( DATE, '%d/%m/%Y' ) AS DATE, DATE_FORMAT( TIME, '%H:%i' ) AS TIME FROM sportactivity JOIN teams T1 ON home_team_fk = T1.team_id JOIN teams T2 ON ( away_team_fk = T2.team_id OR away_team_fk = '0' ) WHERE DATE( DATE ) >= CURDATE( ) ORDER BY DATE( DATE ) My problem is that when team B is empty I get irrelevant information; it seems to be returning all the combinations. I need a query such that when away_team_fk is 0 (this can occur in my scenario), I get Team A with an empty Team B only once.
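
    A hedged SQL sketch (not from the original post): the OR in the second join condition matches every team whenever away_team_fk is 0, which is where the extra combinations come from; a LEFT JOIN on the real key alone keeps exactly one row for those fixtures.

      SELECT sa.sport_activity_id,
             t1.team_name               AS TeamA,
             COALESCE(t2.team_name, '') AS TeamB,
             DATE_FORMAT( sa.date, '%d/%m/%Y' ) AS DATE,
             DATE_FORMAT( sa.time, '%H:%i' )    AS TIME
      FROM sportactivity sa
      JOIN teams t1      ON sa.home_team_fk = t1.team_id
      LEFT JOIN teams t2 ON sa.away_team_fk = t2.team_id
      WHERE DATE( sa.date ) >= CURDATE( )
      ORDER BY DATE( sa.date )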

    Read the article

  • Django i18n: makemessages only on site level possible?

    - by AndiDog
    I have several strings in my site that don't belong to any app, for example {% block title %}{% trans "Login" %}{% endblock %} or a modified authentication form used to set the locale cookie class AuthenticationFormWithLocaleOption(AuthenticationForm): locale = forms.ChoiceField(choices = settings.LANGUAGES, required = False, initial = preselectedLocale, label = _("Locale/language")) Now when I execute django-admin.py makemessages --all -e .html,.template in the site directory, it extracts the strings from all Python, .html and .template files, including those in my apps. That is because I develop my apps inside that directory: Directory structure: sitename myapp1 myapp2 Is there any way to extract all strings that are not in my apps? The only solution I found is to move the app directories outside the site directory structure, but I'm using bzr-externals (similar to git submodules or svn externals) so that doesn't make sense in my case. Moving stuff that needs translation into a new app is also possible but I don't know if that is the only reasonable solution.
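
    A hedged sketch (not from the original post): makemessages accepts --ignore patterns, so the app directories can be excluded when extracting at the site level, assuming the installed Django version already ships that option.

      django-admin.py makemessages --all -e .html,.template --ignore="myapp1/*" --ignore="myapp2/*"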

    Read the article

  • identify documents from results of mahout clustering

    - by Tejas
    I am using Mahout to cluster text documents indexed using Solr. I have used the "text" field in the document to form vectors. Then I used the k-means driver in Mahout for clustering, and then the clusterdumper utility to dump the results. I am having difficulty understanding the output from the dumper. I could see the clusters formed, with term vectors in those clusters. But how do I extract the documents from these clusters? I want the result to be the input documents appearing in the different clusters.

    Read the article

  • Parsing plain data with JavaScript (jQuery)

    - by Angelus
    Well, I have this text in a JavaScript var: GIMP Palette Name: Named Colors Columns: 16 # 255 250 250 snow (255 250 250) 248 248 255 ghost white (248 248 255) 245 245 245 white smoke (245 245 245) 220 220 220 gainsboro (220 220 220) 255 250 240 floral white (255 250 240) 253 245 230 old lace (253 245 230) 250 240 230 linen (250 240 230) 250 235 215 antique white (250 235 215) 255 239 213 papaya whip (255 239 213) What I need is to cut it into lines and put them in one array; after that I must separate each number and the rest into a string. I'm going crazy searching for functions to do that, but I can't find any for JavaScript. EDIT: The expected format will be, first: array[0]='255 250 250 snow (255 250 250)' Then I want to take each line and extract it into some vars so I can use them: colour[0]=255; colour[1]=250; colour[2]=250; string=snow (255 250 250); (the vars will be reused for each line)
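
    A hedged JavaScript sketch (not from the original post; paletteText is assumed to hold the palette string): split on newlines, skip the header lines, and pull the three numbers plus the trailing name out of each colour line.

      var lines = paletteText.split(/\r?\n/);
      var entries = [];
      for (var i = 0; i < lines.length; i++) {
          var m = lines[i].match(/^\s*(\d+)\s+(\d+)\s+(\d+)\s+(.+)$/);
          if (!m) continue;                      // skips "GIMP Palette", "Name:", "Columns:", "#"
          entries.push({
              colour: [parseInt(m[1], 10), parseInt(m[2], 10), parseInt(m[3], 10)],
              name: m[4]                         // e.g. "snow (255 250 250)"
          });
      }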

    Read the article

  • How to do this query?

    - by Damiano
    Hello everybody! I have a MySQL table with these columns: ID (auto-increment) ID_BOOK (int) PRICE (double) DATA (date) I know two ID_BOOK values, for example 1 and 2. QUERY: I have to extract all the PRICE rows (of ID_BOOK=1 and ID_BOOK=2) where DATA is the same! Table example: 1 1 10.00 2010-05-16 2 1 11.00 2010-05-15 3 1 12.00 2010-05-14 4 2 18.00 2010-05-16 5 2 11.50 2010-05-15 Result example: 1 1 10.00 2010-05-16 4 2 18.00 2010-05-16 2 1 11.00 2010-05-15 5 2 11.50 2010-05-15 ID_BOOK=2 doesn't have 2010-05-14, so I skip it. Thank you so much!
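
    A hedged SQL sketch (not from the original post; the table name book_prices is made up): a derived table keeps only the dates present for both books, which reproduces the result example above.

      SELECT t.ID, t.ID_BOOK, t.PRICE, t.DATA
      FROM book_prices t
      JOIN ( SELECT DATA
             FROM book_prices
             WHERE ID_BOOK IN (1, 2)
             GROUP BY DATA
             HAVING COUNT(DISTINCT ID_BOOK) = 2 ) d ON d.DATA = t.DATA
      WHERE t.ID_BOOK IN (1, 2)
      ORDER BY t.DATA DESC, t.ID_BOOK;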

    Read the article

  • How to grep lines having a specific format

    - by Nitin
    I have got a file with the following format: 1234, 'US', 'IN',...... 324, 'US', 'IN',...... ... ... 53434, 'UK', 'XX', .... ... ... 253, 'IN', 'UP',.... 253, 'IN', 'MH',.... Here I want to extract only those lines having 'IN' as the 2nd field, i.e. 253, 'IN', 'UP',.... 253, 'IN', 'MH',.... Can anyone please tell me a command to grep for these?
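
    A hedged sketch (not from the original post; the file name is made up): anchoring on the leading number and the quoted second field keeps lines such as 1234, 'US', 'IN',... from matching on their third field.

      grep -E "^[0-9]+, 'IN'," data.txt
      # awk can do the same thing field by field ('\047' is a single quote):
      awk -F', ' '$2 == "\047IN\047"' data.txt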

    Read the article

  • Creating a virtual, data-driven section of a Wordpress-powered site

    - by lgomez
    Hello all, I want to create a plugin for WordPress that automatically serves pages containing data pulled from a provider's API. The API returns one or more records, and I simply want the plugin to intercept the request, call the API with parameters pulled from the request URI, and display the data using a template that I can either let them upload to the server or let them copy and paste into the plugin's admin settings. For example, I may want one of my WordPress installations to show products pulled from such an API under the URL "example.com/products". The plugin would catch that request, extract the variables from the URL, call the API and render the template with the returned results. I'd like to avoid requiring edits to the .htaccess file like some caching plugins do. Some of the admins of these pages won't know how to do that or simply won't have access to the .htaccess file. Thanks!
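
    A hedged PHP sketch (not from the original post; the rewrite pattern, query var and API URL are made up): a rewrite rule registered through WordPress's own API maps /products/... onto a query variable, and a template_redirect handler fetches and renders the data, so no manual .htaccess editing is required. Rewrite rules still need to be flushed once, e.g. on plugin activation.

      add_action('init', 'provider_add_rewrites');
      function provider_add_rewrites() {
          add_rewrite_tag('%provider_item%', '([^&/]+)');
          add_rewrite_rule('^products/([^/]+)/?$', 'index.php?provider_item=$matches[1]', 'top');
      }

      add_action('template_redirect', 'provider_render_item');
      function provider_render_item() {
          $item = get_query_var('provider_item');
          if (empty($item)) {
              return;                                            // not one of our virtual pages
          }
          $response = wp_remote_get('https://api.example.com/products/' . urlencode($item)); // hypothetical API
          $record   = json_decode(wp_remote_retrieve_body($response), true);
          // Render $record through the admin-supplied template here, then stop normal WordPress handling.
          exit;
      }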

    Read the article

  • What's causing "Unable to retrieve native address from ByteBuffer object"?

    - by r0u1i
    As a very novice Java programmer, I probably should not mess with this kind of thing. Unfortunately, I'm using a library which has a method that accepts a ByteBuffer object and throws when I try to use it: Exception in thread "main" java.lang.NullPointerException: Unable to retrieve native address from ByteBuffer object Is it because I'm not using a direct buffer? edit: There's not a lot of my code there. The library I'm using is jNetPcap, and I'm trying to dump a packet to file. My code takes an existing packet and extracts a ByteBuffer out of it: byte[] bytes = m_packet.getByteArray(0, m_packet.size()); ByteBuffer buffer = ByteBuffer.wrap(bytes); Then it calls one of the dump methods of jNetPcap that takes a ByteBuffer.
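
    A hedged Java sketch (not from the original post): the error message suggests the native side wants a direct buffer, and ByteBuffer.wrap() produces a heap buffer, so copying the bytes into ByteBuffer.allocateDirect() may be enough.

      byte[] bytes = m_packet.getByteArray(0, m_packet.size());
      ByteBuffer buffer = ByteBuffer.allocateDirect(bytes.length);
      buffer.put(bytes);
      buffer.flip();   // rewind so the dump method reads from the start of the data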

    Read the article

  • Storing API keys in Android, is obfuscation enough?

    - by fredley
    I'm using the Dropbox API. In the sample app, it includes these lines: // Replace this with your consumer key and secret assigned by Dropbox. // Note that this is a really insecure way to do this, and you shouldn't // ship code which contains your key & secret in such an obvious way. // Obfuscation is good. final static private String CONSUMER_KEY = "PUT_YOUR_CONSUMER_KEY_HERE"; final static private String CONSUMER_SECRET = "PUT_YOUR_CONSUMER_SECRET_HERE"; I'm well aware of the mantra 'Secrecy is not Security', and obfuscation really only slightly increases the amount of effort required to extract the keys. I disagree with their statement 'Obfuscation is good'. What should I do to protect the keys then? Is obfuscation good enough, or should I consider something more elaborate?

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >