Search Results

Search found 64 results on 3 pages for 'preprocess'.

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load a big model (around 200,000 vertices) with the function aiImportFileExWithProperties in my software, it takes more than one minute. If I load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and ASSIMP view are using the 64-bit DLL version of the library, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software:

        // default pp steps
        unsigned int ppsteps =
            aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
            aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
            aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
            aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
            aiProcess_RemoveRedundantMaterials | // remove redundant materials
            aiProcess_FindDegenerates          | // remove degenerated polygons from the import
            aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
            aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
            aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
            aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
            aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
            aiProcess_OptimizeMeshes           | // join small meshes, if possible
            aiProcess_SplitByBoneCount         | // split meshes with too many bones; necessary for our (limited) hardware skinning shader
            0;

        cout << "Loading " << pFile << "... ";

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, 80.f);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C API to load the file
        scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(),
            ppsteps |                        /* default pp steps */
            aiProcess_GenSmoothNormals |     // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes |     // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate |          // triangulate polygons with more than 3 edges
            //aiProcess_ConvertToLeftHanded | // convert everything to D3D left-handed space
            aiProcess_SortByPType |          // make 'clean' meshes which consist of a single type of primitive
            0,
            NULL, props);
        aiReleasePropertyStore(props);

        if (!scene) {
            cout << aiGetErrorString() << endl;
            return 0;
        }

    And this is the relevant piece of code in ASSIMP view:

        // default pp steps - the exact same combination of aiProcess_* flags as above
        unsigned int ppsteps = /* ... identical flag list ... */ 0;

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, g_smoothAngle);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C API to load the file
        g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName,
            ppsteps |                       /* configurable pp steps */
            aiProcess_GenSmoothNormals |    // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes |    // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate |         // triangulate polygons with more than 3 edges
            aiProcess_ConvertToLeftHanded | // convert everything to D3D left-handed space
            aiProcess_SortByPType |         // make 'clean' meshes which consist of a single type of primitive
            0,
            NULL, props);
        aiReleasePropertyStore(props);

    As you can see, the code is nearly identical because I copied it from ASSIMP view. What could be the reason for such a difference in performance? The two programs use the same DLL, Assimp64.dll (compiled on my computer with VC++ 2010 Express), and the same function aiImportFileExWithProperties to load the model, so I assume the actual code executed is the same. How is it possible that aiImportFileExWithProperties is 100 times slower when called by my software than when called by ASSIMP view? What am I missing? I am not good with DLLs and dynamic and static libraries, so I might be missing something obvious.

    UPDATE: I found out why the code runs slower. I was launching my software with "Start Debugging" in VC++ 2010 Express; if I run it outside VC++ 2010, I get the same performance as ASSIMP view. However, now I have a new question: why does the DLL perform slower under the VC++ debugger? I compiled it in release mode without debugging information. Is there any way to make the DLL run fast in debug mode, i.e., without debugging the DLL itself? I am interested in debugging only my own code, not the DLL, which I assume already works fine. I do not want to wait two minutes every time I launch my software to debug it. Does this request make sense?
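
    A plausible explanation for the update's question (an assumption on my part, not something confirmed in the thread): processes launched from the Visual Studio debugger are given the Windows debug heap, which makes allocation-heavy code such as mesh importers dramatically slower even in release builds. It can be switched off by setting an environment variable for the debuggee, e.g. under Project Properties > Debugging > Environment:

        _NO_DEBUG_HEAP=1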

  • img captions based on src value match

    - by Basho
    I am trying to create img captions based on a src value match.

    jQuery: What is the best way to extract "Author-ABC" from an img with src value www.abcd.com/images/imagename_Author-ABC_.jpg and replace the alt value with this value?

    Drupal: Is there a way to preprocess this in a Drupal template function and save the value in the img alt attribute?

    Ideas? Basho
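
    A minimal sketch of the Drupal side, assuming Drupal 6 and a theme named "mytheme" (the theme name, and the "token between the last pair of underscores" pattern, are assumptions based on the example URL): override theme_image() and derive the alt text from the path before delegating to the default implementation.

        <?php
        function mytheme_image($path, $alt = '', $title = '', $attributes = NULL, $getsize = TRUE) {
          // "imagename_Author-ABC_.jpg" -> "Author-ABC"
          if (preg_match('/_([^_]+)_\.[a-z]+$/i', $path, $m)) {
            $alt = $m[1];
          }
          // Fall through to the default renderer with the new alt text.
          return theme_image($path, $alt, $title, $attributes, $getsize);
        }
        ?>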

  • Why are they putting "processors" on hard drives?

    - by Celeritas
    What does it mean when a hard drive has a processor on it? How does it work, and what benefit does it have? I don't understand - the CPU is the processor, and the hard drive transfers its contents to RAM. Do they have additional processors that preprocess the data somehow? Here are some examples:

    Western Digital WD Black WD1002FAEX 1TB: "Dual processor speed"
    NETGEAR ReadyNAS 312 2-Bay Diskless Network Attached Storage: "Dual-core Intel 2.1GHz processor and 2GB on-board memory"

    Routers now have processors too - why is that necessary? I guess it sort of makes sense: some logic needs to happen for the packets to be read in order to know which ports to send them out on, but why did old routers not need them? Example of a wireless router with a processor: "Dual-core processor"

  • Convert png sequence to x264 with ffmpeg

    - by Thucydides411
    I am trying to convert a series of PNGs into an MP4 video. I am using ffmpeg and want to encode the video with the x264 codec. Using the command

        ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 out.mp4

    I get the following warning message:

        Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p'

    My understanding is that there is an alpha channel in the PNGs, which the x264 encoder cannot handle. Is there a way to get around this problem? Is there, for example, a way to get the encoder to ignore the alpha channel (my PNGs don't actually have any transparent elements)? I'm aware that I could batch-convert the PNGs beforehand to strip the alpha channel, but the sequence of images is produced by another program, and having to preprocess the images each time I make a video would be less than optimal.

    Edit: After stripping the alpha channel from each frame using the command

        convert in.png -background white -flatten +matte out.png

    ffmpeg gives the warning message

        Incompatible pixel format 'pal8' for codec 'libx264', auto-selecting format 'yuv420p'

    so still no dice.
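
    For what it's worth, both messages are warnings rather than errors: ffmpeg is already auto-converting to yuv420p for libx264, so the encode should proceed either way. To make that conversion explicit (and silence the warning), the output pixel format can be forced with the standard -pix_fmt option - a sketch based on the original command:

        ffmpeg -y -r 30 -b 1800k -i _tmp%04d.png -vcodec libx264 -pix_fmt yuv420p out.mp4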

  • Neural network input preprocessing

    - by TND
    It's clear that the effectiveness of a neural network depends strongly on the format of the input you give it to work with. You want to preprocess the input into the most convenient form you can algorithmically get to, so that the neural network doesn't have to account for that itself.

    I'm working on a little project that (surprise!) is going to be using neural networks. My future goal is to eventually use NEAT, which I'm really excited about. Anyway, one of my ideas involves moving entities in continuous 2D space, viewed from a top-down perspective (this would be a really cool game AI). Of course, unless these guys are blind, they're going to be able to see the world around them. There are a lot of different ways this information could be fed into the network. One interesting but expensive way is to simply render a top-down "view" of things, with the entities as dots on the picture, and feed that in. I was hoping for something much simpler to use (at least at first), such as a list of the x (maybe 7 or so) nearest entities and their positions in relative polar coordinates, orientation, health, etc., but I'm trying to think of the best way to do it.

    My first instinct was to order them by distance, which would inherently also train the neural network to consider the nearer ones more "important". However, I was thinking: what if two entities are nearly the same distance away? They could easily alternate indexes in that list, confusing the network. My question is: is there a better way of representing this? Essentially, the network needs a good way of keeping track of who's who, while receiving as input the relevant information about the list of entities it can see. Thanks!

  • Add login / logout to drupal menu that's not a primary or secondary menu

    - by user367766
    Hello, I'm trying to append an item to a menu that I created. I know you can get the primary and secondary menus using "menu_secondary_local_tasks()" etc., and then add items within preprocess_page. How would I go about this with a menu that I created? Here is the code I am using to check if the user is logged in:

        function themeName_check_login() {
          global $user;
          if ($user->uid) {
            print l("Log out, " . $user->name, "logout");
          }
          else {
            print l("Log In ", "user");
          }
        }

    Thanks in advance, -Matt
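
    A possible approach, sketched under the assumption of a Drupal 6 custom menu with the machine name "menu-mymenu" (the menu name and the $my_menu variable are hypothetical): load the menu's links in a page preprocess, append the login/logout item, and render the result with theme('links').

        function themeName_preprocess_page(&$vars) {
          global $user;
          $links = menu_navigation_links('menu-mymenu');
          if ($user->uid) {
            $links['logout'] = array('title' => 'Log out, ' . $user->name, 'href' => 'logout');
          }
          else {
            $links['login'] = array('title' => 'Log in', 'href' => 'user');
          }
          // Print $my_menu in page.tpl.php where the menu should appear.
          $vars['my_menu'] = theme('links', $links);
        }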

  • Tokenizing Twitter Posts in Lucene

    - by Amaç Herdagdelen
    Hello, my question in a nutshell: does anyone know of a TwitterAnalyzer or TwitterTokenizer for Lucene?

    More detailed version: I want to index a number of tweets in Lucene and keep terms like @user or #hashtag intact. StandardTokenizer does not work because it discards the punctuation (but it does other useful stuff like keeping domain names and email addresses and recognizing acronyms). How can I have an analyzer which does everything StandardTokenizer does but does not touch terms like @user and #hashtag?

    My current solution is to preprocess the tweet text before feeding it into the analyzer, replacing the characters with other alphanumeric strings. For example:

        String newText = text.replaceAll("#", "hashtag");
        newText = newText.replaceAll("@", "addresstag");

    Unfortunately this method breaks legitimate email addresses, but I can live with that. Does that approach make sense? Thanks in advance! Amaç

  • Automake: How to add a build step?

    - by gege2061
    Hello, currently I have a build chain, completely managed by automake, like:

        .vala > .c > .o > .exe

    I would like to add a new step to preprocess an XML .ui file into a Vala source:

        .ui > .vala > .c > .o > .exe

    I did this in Makefile.am:

        gtkbuilder2vala_SOURCES = \
            abstract-window.ui \
            main.vala \
            $(NULL)

    And:

        XSLTPROC = xsltproc

        .ui.vala:
            $(XSLTPROC) ...

    But make doesn't understand:

        make: *** No rule to make target `abstract-window.c', needed by `gtkbuilder2vala-abstract-window.o'.  Stop.

    This seems to be a limitation of make: http://www.ensta.fr/~diam/dev/online/autoconf/autobook/autobook_180.html

        if the translation takes three steps--from `.m' to `.x', then from `.x' to `.c', and finally to `.o'--then Automake's simplistic approach will break.

    Have you another idea?
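
    One way out that seems consistent with the autobook's advice, sketched with assumptions (the stylesheet name ui2vala.xsl is hypothetical, valac support is presumed wired up via AM_PROG_VALAC, and the recipe line must start with a tab): name the generated .vala file explicitly in _SOURCES so make only has to infer one step of the chain, and declare the custom suffix pair.

        SUFFIXES = .ui .vala

        gtkbuilder2vala_SOURCES = \
            abstract-window.vala \
            main.vala \
            $(NULL)

        EXTRA_DIST = abstract-window.ui
        BUILT_SOURCES = abstract-window.vala

        .ui.vala:
            $(XSLTPROC) -o $@ ui2vala.xsl $<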

  • Drupal: How can one use Context to trigger a reaction for a Taxonomy Term condition caused by a View?

    - by jschrab
    What I would like to do is use the Context module to insert $body_classes of my choosing based on a Taxonomy Term condition. Fine, that's what the Context module is for. Seems simple enough IF your content/page source is a node that is involved with the appropriate Terms. However, I have a page generated by Views that has the appropriate Taxonomy Term id that SHOULD trigger the condition but it doesn't. Now I could set my $body_classes in a "preprocess" in template.php but I'd rather avoid that. Is this even possible in Context? Or am I just doing something wrong?

  • string matching algorithms used by lucene

    - by iamrohitbanga
    I want to know the string matching algorithms used by Apache Lucene. I have been going through the index file format used by Lucene given here. It seems that Lucene stores all words occurring in the text as-is, with their frequency of occurrence in each document. But as far as I know, for efficient string matching it would need to preprocess the words occurring in the documents.

    Example: search for "iamrohitbanga is a user of stackoverflow" (use fuzzy matching) in some documents. It is possible that there is a document containing the string "rohit banga". To find that the substrings rohit and banga are present in the search string, it would use some efficient substring matching. I want to know which algorithm it is. Also, if it does some preprocessing, which function call in the Java API triggers it?

  • Drupal adding JS

    - by Andrew
    I know this is not a Drupal forum, but this forum is quick at responding, so I thought I should give it a shot. Here's my problem: I'm trying to insert a reference to a JavaScript file in the header by using the drupal_add_js function. I placed this line inside the template preprocess function of template.php. The result... it is not working at all. There is no script link in the output as there should be. Can anyone tell me what I am doing wrong? Drupal version - 6.x

        function phptemplate_preprocess_page(&$vars) {
          $url = drupal_get_path("theme", "mysite");
          drupal_add_js($url . "/jquery.js");
          drupal_add_js($url . "/drupal.js");
          ...
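
    A known Drupal 6 wrinkle that may explain this (an assumption about the cause): template_preprocess_page() has already rendered the $scripts variable by the time a theme's preprocess function runs, so anything added there arrives too late unless the variable is rebuilt. A minimal sketch:

        function phptemplate_preprocess_page(&$vars) {
          $url = drupal_get_path('theme', 'mysite');
          drupal_add_js($url . '/jquery.js');
          drupal_add_js($url . '/drupal.js');
          // Rebuild $scripts so it picks up the files added above.
          $vars['scripts'] = drupal_get_js();
        }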

  • PHP - Processing Invalid XML

    - by Paul
    I'm using SimpleXML to load in some XML files (which I didn't write/provide and can't really change the format of). Occasionally (one or two files out of every 50 or so) they don't escape any special characters (mostly &, but sometimes other random invalid things too). This creates an issue because SimpleXML in PHP just fails, and I don't really know of any good way to handle parsing invalid XML. My first idea was to preprocess the XML as a string and put ALL fields in as CDATA so it would work, but for some ungodly reason the XML I need to process puts all of its data in attribute fields, so I can't use the CDATA idea. An example of the XML:

        <Author v="By Someone & Someone" />

    What's the best way to process this to replace all the invalid characters in the XML before I load it with SimpleXML?
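
    A common workaround, sketched under the assumption that bare ampersands are the main offender (it will not repair other kinds of malformed markup): escape every & that does not already begin an entity, then parse.

        <?php
        $raw = file_get_contents('feed.xml');  // hypothetical file name
        // '&' becomes '&amp;' unless it already starts an entity
        // such as &amp; &#38; or &quot;
        $fixed = preg_replace('/&(?!#?[A-Za-z0-9]+;)/', '&amp;', $raw);
        $xml = simplexml_load_string($fixed);
        ?>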

  • Drupal 5 Search not working on 404 pages.

    - by easement
    I have a <?php print $search_box; ?> in my page.tpl.php page. On pages that exist, the search works, but on 404 pages it does not. I saw some bug/patch threads over at drupal.org for D6.15, but none of them seem to work according to the threads, and they weren't really relevant to D5.x anyway.

    I have a theory that because <?php print $search_box; ?> creates a form whose action points to the page itself (a non-existent page), the submission gets the 404. Has anyone run up against this? If so, how did you fix it? One idea I had was to somehow tap into the form and always make the action="/" (front page), which would always exist. If this is a good idea, how does one go about tapping into the Form API and overwriting the action? Is it a preprocess function?
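
    One way to tap in, sketched for Drupal 5's form_alter signature with a hypothetical module name "mymodule" (the theme search box's form ID is search_theme_form):

        <?php
        function mymodule_form_alter($form_id, &$form) {
          // Point the search box at the front page so that submitting
          // from a 404 page does not post back to the missing URL.
          if ($form_id == 'search_theme_form') {
            $form['#action'] = base_path();
          }
        }
        ?>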

  • Drupal Views pulling Data Fields

    - by askon
    I'm a little new to Drupal but have been using things like the Devel module and Theme Developer to speed up the learning process. My question: is it possible to theme an entire Views BLOCK from a single views tpl.php page, OR even a preprocess? When I'm grabbing the $view object I can see $view->result; it has all of the result rows, but it doesn't have all my Views fields. I'm missing things like the node path, taxonomy titles and paths, etc. From my understanding, Drupal wants you to individually theme EACH output field. It seems rather superfluous to create so many extra templates when I've already got over HALF of my results coming through the $view object. Would outputting the node instead of fields make this easier? Or am I going in the wrong direction with $view->result? Thanks!

  • drupal body class

    - by anru
    Drupal version is 6. I just want to know where those $body_classes come from. I know that in template_preprocess_page there is a variable called 'body_classes', but my problem is that not all body classes come from the page preprocess. For instance: I have a term named 'activities and attractions', and in my page.tpl.php there is a class 'page-activities and attractions' in my body tag. It looks like the taxonomy module generates a body class, but I could not find it after searching the source code of the taxonomy module.
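
    For reference, the class is most likely being built in the page preprocess layer from the page title or path - on a term page the title is the term name, which is where 'page-activities and attractions' would come from - rather than by the taxonomy module itself (offered as an assumption, since themes differ in how they assemble $body_classes). A sketch of appending your own class from template.php, assuming a theme named "mytheme" whose page.tpl.php prints $body_classes:

        function mytheme_preprocess_page(&$vars) {
          if (arg(0) == 'taxonomy' && arg(1) == 'term') {
            // e.g. adds "page-taxonomy-term-12" on term pages
            $vars['body_classes'] .= ' page-taxonomy-term-' . arg(2);
          }
        }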

  • converting a treebank of vertical trees to s-expressions

    - by Andreas
    I need to preprocess a treebank corpus of sentences with parse trees. The input format is a vertical representation of trees, like so:

        S
        =NP
        ==(DT +def) the
        ==(N +ani) man
        =VP
        ==V walks

    ...and I need it like:

        (S (NP (DT the) (N man)) (VP (V walks)))

    I have code that almost does it, but not quite: there's always a missing paren somewhere. Should I use a proper parser, maybe a CFG? The current code is at http://github.com/andreasvc/eodop/blob/master/arbobanko.py - it also contains real examples from the treebank.
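
    A stack-based sketch of the conversion, in Python to match the linked script (the "depth equals the number of leading '=' signs" rule and the feature-stripping regex are assumptions drawn from the sample above). Closing exactly one paren per level popped off an explicit stack is what keeps the output balanced:

        import re

        def vertical_to_sexpr(text):
            out, stack = [], []
            for line in text.splitlines():
                line = line.strip()
                if not line:
                    continue
                depth = len(line) - len(line.lstrip('='))  # '==' -> depth 2
                label = line.lstrip('=').strip()
                # Flatten feature bundles: '(DT +def) the' -> 'DT the'
                m = re.match(r'\((\w+)[^)]*\)\s*(.*)', label)
                if m:
                    label = (m.group(1) + ' ' + m.group(2)).strip()
                # Close every node at the same or a deeper level first
                while stack and stack[-1] >= depth:
                    out[-1] += ')'
                    stack.pop()
                out.append('(' + label)
                stack.append(depth)
            while stack:                # close whatever is still open
                out[-1] += ')'
                stack.pop()
            return ' '.join(out)

        sample = "S\n=NP\n==(DT +def) the\n==(N +ani) man\n=VP\n==V walks"
        print(vertical_to_sexpr(sample))
        # -> (S (NP (DT the) (N man)) (VP (V walks)))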

  • strsplit in R with metacharacter

    - by user1429852
    I have received a large amount of data where the delimiter is a backslash (obviously a bad choice). I'm processing it in R for computation, and having a hard time finding how to split the string, since the backslash is a metacharacter. For example, a string would look like this:

        "1128\0019\XA5\E2R\366\00=15"

    and I want to split it along the "\" character, but when I run the strsplit command:

        strsplit(tempStr, "\")
        Error in strsplit(tempStr, "\") : invalid regular expression '\', reason 'Trailing backslash'

    When I try to use the "fixed" option, it does not run because it is expecting something after the backslash:

        strsplit(tempStr, "\", fixed = TRUE)

    Unfortunately, I can't preprocess the data with another program because the data is generated daily. Please help, and thanks!
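
    A note on the escaping (offered as the likely explanation): the backslash has to be doubled once for R's string-literal syntax, and doubled again if the pattern is treated as a regular expression - "\" on its own is not even a complete R string, which is why both calls above fail. A sketch, assuming tempStr holds one raw record:

        strsplit(tempStr, "\\", fixed = TRUE)   # split on a literal backslash
        strsplit(tempStr, "\\\\")               # regex version, same result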

  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran FreeRADIUS in debug mode, but it rejects all authentication.

    MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    radiusd debug mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject]     expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem, and how should I solve it?
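
    For reference, the "[pap] WARNING! No 'known good' password found" line is the server saying it could not find a usable password attribute for the user. In FreeRADIUS 2.x the expected radcheck entry is a Cleartext-Password attribute with the := operator; the bare "Password" attribute and the forced Auth-Type row are legacy patterns the pap module does not pick up. A sketch of the usual schema fix, offered as an assumption that this is the cause, based on the debug output above:

        UPDATE radcheck
           SET attribute = 'Cleartext-Password', op = ':='
         WHERE username = 'test' AND attribute = 'Password';
        DELETE FROM radcheck
         WHERE username = 'test' AND attribute = 'Auth-Type';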

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data: let's say date, volume, price, and direction (sell/buy). Additionally, there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction - classical database referencing.

    Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot, or whatever. To do this I need both the buy and sell price in one row, so my thought was to preprocess the CSV into another CSV containing the needed information, and then do the statistics. In the end I'd like a solution based on scripts rather than a spreadsheet, as the data might change often (it is exported from an online DB).

    My actual solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there exists a corresponding transaction. If one is found, a new row is written to the destination CSV; if not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This causes running times of 10 seconds for 300 lines. As the line count might rise soon (10k lines), this is not ideal. I am also aware that the many subshells opened in the script might be contributing to the performance problems.

    Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all the data into a "real" local database and use SQL? Is there an easy way to achieve the desired results?
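
    The quadratic running time comes from re-reading the file once per row; indexing the rows in an associative array turns the join into two linear passes. A sketch in awk, assuming the transaction ID sits in column 5 and the back-reference to the opening transaction in column 6 (the column positions are assumptions):

        awk -F, '
            NR == FNR { row[$5] = $0; next }      # pass 1: index every line by its ID
            $6 in row { print row[$6] "," $0 }    # pass 2: closing row joined to its opener
        ' transactions.csv transactions.csv > joined.csv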

  • Eclipse Blackberry Preprocessor Not Working?

    - by Jessica
    I've already followed the directions at http://stackoverflow.com/questions/1383277/using-preprocessor-directives-in-blackberry-jde-plugin-for-eclipse to make sure the BlackBerry plugin's preprocessing hook is (theoretically) enabled. I'm using Eclipse 3.5.1 with BlackBerry Plugin 1.1 and BB SDKs 4.7.0 and 4.6.0. I have my preprocessor defines set (I've tried both the project's BlackBerry properties and the workspace BlackBerry build settings) and checked their capitalization and spelling carefully. I'm fairly confident the actual code saying "this stuff should be preprocessed" is good, because including/excluding preprocessed code works fine in command-line builds:

        //#preprocess    (at the beginning of the file)

    and then code blocks like this throughout:

        //#ifndef jde_4_7
        /*
        //#endif
        //#ifdef jde_4_7
        import net.rim.device.api.ui.TouchEvent;
        //#endif
        //#ifndef jde_4_7
        */
        //#endif

    So I can't figure out what else could be wrong that would cause Eclipse not to compile in my preprocessed code unless I remove the comments that are supposed to keep the touch code out of builds for BlackBerries that don't support touch. At one point it used to work (and no, I haven't updated Eclipse), but sometime in the last couple of weeks it just stopped. I'm getting tired of the error-prone process of searching for ifdefs and manually commenting/uncommenting touch code, and I'm looking for a better solution while I do testing and initial development that requires both touch and non-touch functionality. Any other ideas on what could be wrong or how to fix it?

  • drupal_add_css not working

    - by hfidgen
    Hiya, I need to use drupal_add_css to call stylesheets onto single D6 pages. I don't want to edit the main theme stylesheet, as there will be a set of individual pages which all need completely new styles - the main sheet would be massive if I put it all in there. My solution was to edit the page in PHP editor mode and do this:

        <?php drupal_add_css("/styles/file1.css", "theme"); ?>
        <div id="newPageContent">stuff here in html</div>

    But when I view source, there is nothing there! Not even a broken CSS link or anything - it's just refusing to add the stylesheet to the CSS package put into the page head. Variations don't seem to work either:

        drupal_add_css($path = '/styles/file1.css', $type = 'module', $media = 'all', $preprocess = TRUE)

    My template header looks like this; I've not changed anything from the default other than adding a custom JS:

        <head>
          <?php print $head ?>
          <title><?php print $head_title ?></title>
          <?php print $styles ?>
          <?php print $scripts ?>
          <script type="text/javascript" src="<?php print base_path() ?>misc/askme.js"></script>
          <!--[if lt IE 7]>
          <?php print phptemplate_get_ie_styles(); ?>
          <![endif]-->
        </head>

    Can anyone think of a reason why this function is not working? Thanks!
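
    One detail worth checking (an assumption about the cause): drupal_add_css() expects a path relative to the Drupal root, so a leading slash as in "/styles/file1.css" will not resolve to a real file. A sketch with a theme-relative path:

        <?php
        // e.g. resolves to sites/all/themes/mysite/styles/file1.css
        drupal_add_css(path_to_theme() . '/styles/file1.css', 'theme');
        ?>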

  • How to get forkpty to handle redirection and other bash-isms?

    - by Jeremy Friesner
    Hi all, I've got a GUI C++ program that takes a shell command from the user, calls forkpty() and execvp() to execute that command in a child process, while the parent (GUI) process reads the child process's stdout/stderr output and displays it in the GUI. This all works nicely (under Linux and MacOS/X). For example, if the user enters "ls -l /foo", the GUI will display the contents of the /foo folder.

    However, bash niceties like output redirection aren't handled. For example, if the user enters "echo bar > /foo/bar.txt", the child process outputs the text "bar > /foo/bar.txt" instead of writing the text "bar" to the file "/foo/bar.txt". Presumably this is because execvp() is running the executable command "echo" directly, instead of running /bin/bash and handing it the user's command to massage/preprocess.

    My question is: what is the correct child-process invocation to make the system behave exactly as if the user had typed his string at the bash prompt? I tried wrapping the user's command in a /bin/bash invocation, like this: /bin/bash -c the_string_the_user_entered, but that didn't seem to work. Any hints?
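
    A common pitfall with bash -c (an assumption about what went wrong here): the user's entire string must be passed as one argv element, not split into words first. A minimal sketch of the child side after forkpty(), in C:

        #include <unistd.h>

        /* user_command is the raw line the user typed,
           e.g. "echo bar > /foo/bar.txt" */
        static void run_in_shell(const char *user_command)
        {
            /* bash -c takes exactly one argument: the whole command line */
            execl("/bin/bash", "bash", "-c", user_command, (char *)NULL);
            _exit(127);  /* only reached if exec failed */
        }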

  • How do I best do balanced quoting with Perl's Regexp::Grammars?

    - by Evan Carroll
    Using Damian Conway's Regexp::Grammars, I'm trying to match different balanced quoting mechanisms ('foo', "foo", but not 'foo") - such as parens, quotes, double quotes, and double dollars. This is the code I'm currently using:

        <token: pair>
            \'<literal>\'|\"<literal>\"|\$\$<literal>\$\$

        <token: literal>
            [\S]+

    This generally works fine and allows me to say something like:

        <rule: quote>
            QUOTE <.as>? <pair>

    My question is how to reform the output to exclude the needless notation for the pair token:

        {
          '' => 'QUOTE AS \',\'',
          'quote' => {
            '' => 'QUOTE AS \',\'',
            'pair' => {
              'literal' => ',',
              '' => '\',\''
            }
          }
        },

    Here there is obviously no desire to have pair sitting between quote and its literal value. Is there a better way to match 'foo', "foo", and $$foo$$, and maybe sometimes ( foo ), without each time creating a needless pair token? Can I preprocess out that token, or fold it into the above? Or write a better construct entirely that eliminates the need for it?

  • Nokogiri changing custom elements

    - by dagda1
    Hi, I have sample HTML that I have marked up with some special tags that will be used by a different program. An example of the HTML is below; note the <START:organization> ... <END> elements.

        <html>
          <head/>
          <body>
            <ul>
              <li>
                <START:organization> Advanced Integrated Pest Management <END>
              </li>
              <li>
                <START:organization> American Bakers Association <END>
              </li>
            </ul>
          </body>
        </html>

    I wanted to use Nokogiri to preprocess the HTML and easily remove irrelevant tags like <script>. I created the following extension to the Nokogiri document class:

        module Nokogiri
          module HTML
            class Document
              def prepare_html
                xpath("//script").remove
                to_html.remove_new_lines
              end
            end
          end
        end

    The problem is that Nokogiri changes the <START:organization> element to <organization>. Is there any way I can preserve the HTML and maintain my custom markup tags? Thanks, Paul

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At the moment I am programming a DLL (class library) in C#. This DLL uses a 3rd-party library and therefore deals with 3rd-party objects for which I do not have the source code. Now I am planning to create another DLL, which will be used at a later stage in my application. This second DLL should use the 3rd-party objects (with their corresponding object states) created by the first DLL.

    Luckily, the 3rd-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.RemotingServices.Marshal(...). I then serialize the objects using a BinaryFormatter and store them as a byte[] array. All goes well: I can deserialize and unmarshal in the opposite way and end up with my original 3rd-party objects... so it appears...

    Nevertheless, when calling methods on my deserialized 3rd-party objects I get object-internal exceptions. Normally these methods return other 3rd-party objects, but (obviously, I guess) those objects are now missing because they weren't serialized. Now my global question: how would I go about marshalling/serializing all the objects which my 3rd-party objects reference, cascading down the "reference tree" to obtain a full and complete serialized object graph? Right now my guess is to preprocess: obtain all the objects, build my own custom object, and serialize that. But I'm hoping there is some other way...
