Search Results

Search found 20833 results on 834 pages for 'oracle advice'.

  • Rexml - Parsing Data

    - by Paddy
    I have an XML file in the following format:

      <?xml version='1.0' encoding='UTF-8'?>
      <entry xmlns='http://www.w3.org/2005/Atom'
             xmlns:gwo='http://schemas.google.com/analytics/websiteoptimizer/2009'
             xmlns:app='http://www.w3.org/2007/app'
             xmlns:gd='http://schemas.google.com/g/2005'
             gd:etag='W/&quot;DUYGRX85fCp7I2A9WxFWEkQ.&quot;'>
        <id>https://www.google.com/analytics/feeds/websiteoptimizer/experiments/1025910</id>
        <updated>2010-05-31T02:12:04.124-07:00</updated>
        <app:edited>2010-05-31T02:12:04.124-07:00</app:edited>
        <title>Flow Experiment</title>
        <link rel='gwo:goalUrl' type='text/html' href='http://cart.personallifemedia.com/dlg/download.php'/>
        <link rel='alternate' type='text/html' href='https://www.google.com/websiteoptimizer'/>
        <link rel='self' type='application/atom+xml' href='https://www.google.com/analytics/feeds/websiteoptimizer/experiments/1025910'/>
        <gwo:analyticsAccountId>16334726</gwo:analyticsAccountId>
        <gwo:autoPruneMode>None</gwo:autoPruneMode>
        <gwo:controlScript>.....

    I have to parse out the value of the gd:etag attribute. How do I do it? I was able to get the value using SimpleXML, but I want to achieve the same thing in REXML. Please advise.
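
    A minimal REXML sketch of reading that attribute (untested; it assumes the feed above has been saved to entry.xml and that gd:etag sits on the root element as shown):

      require 'rexml/document'

      doc = REXML::Document.new(File.read('entry.xml'))

      # gd:etag is an attribute of the root <entry> element,
      # so it can be read straight off the attributes hash.
      etag = doc.root.attributes['gd:etag']
      puts etag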

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I have found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.
    The Script: I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/
    The TODOs: So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things that I think it is missing, and I am not quite sure how to go about them, what the best way to do it is, or whether this just doesn't make any sense at all:
    - Check for errors and break (see the Bash sketch below)
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification, to prevent breaking ssh logins and having to log back in via FTP to fix it
    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (See my answer to Ry4an's comment below.)
    The Gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.
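
    On the first TODO ("check for errors and break"), a minimal hedged pattern, assuming the script stays plain Bash, is to fail fast and trap errors:

      #!/usr/bin/env bash
      set -e            # abort on the first command that fails
      set -u            # treat unset variables as errors
      set -o pipefail   # a failing stage makes the whole pipeline fail

      trap 'echo "Error on line $LINENO" >&2' ERR

      # Example: the whole run stops here if any compile step breaks.
      ./configure --prefix="$HOME/opt/python"
      make
      make install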

  • XSLT: Add namespace to root element

    - by Ingrid
    I need to change namespaces in the root element as follows. Input document:

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <foo xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
           xmlns:ns2="http://www.w3.org/1999/xlink"
           xmlns="urn:isbn:1-931666-22-9"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    Desired output:

      <foo audience="external"
           xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:xlink="http://www.w3.org/1999/xlink"
           xmlns="urn:isbn:1-931666-22-9">

    I was trying to do it while copying over the whole document, before giving any other transformation instructions, but the following doesn't work:

      <xsl:template match="* | processing-instruction() | comment()">
        <xsl:copy copy-namespaces="no">
          <xsl:for-each select=".">
            <xsl:attribute name="audience" select="'external'"/>
            <xsl:namespace name="xlink" select="'http://www.w3.org/1999/xlink'"/>
          </xsl:for-each>
          <xsl:apply-templates/>
          <xsl:copy-of select="@*"/>
          <xsl:apply-templates/>
        </xsl:copy>
      </xsl:template>

    Thanks for any advice!
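
    A hedged sketch of one way to do it (assuming an XSLT 2.0 processor, since copy-namespaces and the select form of xsl:attribute are already in use): give the root element its own template and emit the namespace node, the copied attributes and the new attribute before any child content, which is the ordering the template above violates:

      <xsl:template match="/*">
        <xsl:copy copy-namespaces="no">
          <xsl:namespace name="xlink" select="'http://www.w3.org/1999/xlink'"/>
          <xsl:copy-of select="@*"/>
          <xsl:attribute name="audience" select="'external'"/>
          <xsl:apply-templates/>
        </xsl:copy>
      </xsl:template>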

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days and they needed to see their email. So I wrote a quick Perl script that uses mailq in combination with postcat to dump each email for their address into a separate file, tarred them up and sent them off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice to have a solution that could extract their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only way of viewing queue files I can see is the postcat command, and I really don't want to write my own parser, so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

      #!/usr/bin/perl
      $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
      @data = split(/\n/, `$qCmd`);
      $i = 0;
      foreach $line (@data) {
          $i++;
          $remainder = $i % 2;
          if ($remainder == 0) {
              next;
          }
          if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
              next;
          }
          print "Processing: " . $line . "\n";
          `postcat -q $line > $line.email.txt`;
          $subject=`cat $line.email.txt | grep "Subject:"`;
          #print "SUB" . $subject;
          #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
      }

    Any advice appreciated.
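
    On the attachment side, a hedged sketch using MIME::Parser from the MIME-tools distribution (assuming it is installed, and that the postcat output file is a complete message with headers): it splits a dumped queue file into its parts so PDFs and other attachments land as separate files:

      #!/usr/bin/perl
      use strict;
      use warnings;
      use MIME::Parser;

      my $file = shift or die "usage: $0 queue-dump.email.txt\n";

      my $parser = MIME::Parser->new;
      $parser->output_under('./extracted');   # attachments are written under here

      my $entity = $parser->parse_open($file);
      print "Subject: ", $entity->head->get('Subject');
      printf "Parts extracted: %d\n", scalar $entity->parts;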

  • Including/Organizing HTML in a large JavaScript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include the smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file, because this makes development easier. My requirements are:
    1) All of the HTML 'tabs' are loaded on the client's end when the page loads. I would like to avoid any delays from dynamically requesting the tab HTML.
    2) If possible, I would like to minimize the raw data sent. For example, it would be preferable to send each tab one time, instead of sending each tab ten times if there are ten applets on that page.
    Questions:
    1) What are my options for 'including' the HTML files / JavaScript code?
    2) Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive HTML file when working with large pages.

  • How to create XSD schema from XML with this kind of structure (in .net)?

    - by Mr. Brownstone
    Here's the problem: my input is an XML file that looks something like this:

      <BaseEntityClassInfo>
        <item>
          <key>BaseEntityClassInfo.SomeField</key>
          <value>valueData1</value>
        </item>
        <item>
          <key>BaseEntityClassInfo.AdditionalDataClass.SomeOtherField</key>
          <value>valueData2</value>
        </item>
        <item>
          <key>BaseEntityClassInfo.AdditionalDataClass.AnotherClassInfo.DisplayedText</key>
          <value>valueData3</value>
        </item>
        ...
      </BaseEntityClassInfo>

    The <key> element describes entity class fields and relationships (used by some other app that I don't have access to) and the <value> stores the actual data that I need. My goal is to programmatically generate a typed DataSet from this XML that could then be used for creating reports. I thought of building an XSD schema from the input XML file first and then using this schema to generate the DataSet, but I'm not sure how to do that. The problem is that I don't want all the data in one table; I need several tables with relationships based on the <key> value, so I guess I need to infer the relational structure from the XML <key> data in some way. So what do you think? How could this be done, and what would be the best approach? Any advice, ideas or suggestions would be appreciated!
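
    A hedged sketch of one possible first pass in C# (the names and splitting rule are illustrative assumptions, not taken from the real data): split each <key> on '.', treat the second-to-last segment as a table and the last segment as a column, then let DataSet.WriteXmlSchema emit the XSD once tables and relations are in place:

      using System;
      using System.Data;
      using System.Xml.Linq;

      static class KeyValueSchemaBuilder
      {
          public static DataSet Build(string xmlPath)
          {
              var ds = new DataSet("Entities");
              foreach (var item in XDocument.Load(xmlPath).Descendants("item"))
              {
                  string[] path = ((string)item.Element("key")).Split('.');
                  string table  = path[path.Length - 2];   // e.g. AdditionalDataClass
                  string column = path[path.Length - 1];   // e.g. SomeOtherField

                  if (!ds.Tables.Contains(table))
                      ds.Tables.Add(table).Columns.Add("Id", typeof(int));
                  if (!ds.Tables[table].Columns.Contains(column))
                      ds.Tables[table].Columns.Add(column, typeof(string));
              }
              // A second pass would add DataRelations between parent/child
              // segments and load the <value> text into rows.
              ds.WriteXmlSchema("entities.xsd");
              return ds;
          }
      }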

  • Leveraging hobby experience to get a job

    - by Bernard
    Like many others, I began programming at an early age. I started when I was 11 and I learned C when I was 14 (I'm now 26). While most of what I did was games just to entertain myself, I did everything from low-level 2D graphics and binary I/O to interfacing with free APIs, custom file systems, audio, 3D animation, OpenGL and web sites. I worked on a wide variety of things trying to make various games. Because of this experience I have tested out of every college-level C/C++ programming course I have ever been offered. In the classes I took, my classmates would need a week to do what I finished in class with an hour or two of work. I now have my degree and I have two years of experience working full time as a web developer; however, I would like to get back into C++ and hopefully do simulation programming. Unfortunately I have yet to do C++ as a job; I have only done it for testing out of classes and for my senior project in college. So most of what I have in C++ is still hobby experience, and I don't know how best to convey that so that I don't end up stuck doing something too low level for me. Right now I see a job offer that requires 2 years of C++ experience, but I have at least 9 (I didn't do C++ every day for the last 14 years). How do I convey my experience? How much is it truly worth? And how do I get its full value? The best thing I can think of is a demo and a portfolio, but that only comes into play after an interview has been secured. I used a portfolio to land my current job. All answers and advice are appreciated.

  • How can I share an entity framework model across website users

    - by richardmoss
    Hello. Currently my website is based on MVC and the Entity Framework, running against a SQL Server 2005 database. So far it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)). Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post. At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried to the others. As a test, I tried storing a single copy of the model which all visitors would access, by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time while the model was being initialized, one browser would crash and the other would work fine, despite me using a locking object, so in theory one of them should have been delayed until the model was ready (of course, I could have implemented this wrong ;)). Also, this site originally did use one model for all visitors, and when it was live it frequently shut down, killing the IIS application pool when it did. Now I'm not sure if this was related, but I don't really want to reintroduce whatever bug caused that shutdown. So, my question is a simple one really: what is the best way of either using the same model for all website users so they all see updates, or, if they do have separate copies (which I imagine will have a performance impact in time), how can the models detect changes in the database and update themselves accordingly? Thanks in advance for any advice! Regards, Richard Moss
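
    A hedged sketch of the pattern usually suggested for this (assuming EF 3.5's ObjectContext and an entity container named SiteEntities, which is a made-up name): give each HTTP request its own short-lived context instead of caching one per session or in a static field, so every request reads current data and nothing is shared between visitors:

      using System.Web;

      public static class ContextPerRequest
      {
          private const string Key = "SiteEntities.PerRequest";

          // One ObjectContext per HTTP request, never shared across visitors.
          public static SiteEntities Current
          {
              get
              {
                  var ctx = (SiteEntities)HttpContext.Current.Items[Key];
                  if (ctx == null)
                  {
                      ctx = new SiteEntities();
                      HttpContext.Current.Items[Key] = ctx;
                  }
                  return ctx;
              }
          }

          // Call this from Application_EndRequest in Global.asax.
          public static void DisposeCurrent()
          {
              var ctx = HttpContext.Current.Items[Key] as SiteEntities;
              if (ctx != null)
              {
                  ctx.Dispose();
                  HttpContext.Current.Items.Remove(Key);
              }
          }
      }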

  • NUnit for VS has suddenly bombed.. Anyone else experience this?

    - by Ian P
    I'm getting the following set of errors in a project that previously worked fine, from NUnit for VS, when I try to run either individual tests or all of the tests in a given solution:

      Error loading C:\Path to Application\Application\Application.ApplicationTests\bin\Debug\Application.ApplicationTests.dll: The method or operation is not implemented.
      Error loading C:\Path to Application\Application\Application.FileDetectorTests\bin\Debug\FileDetectorTests.dll: The method or operation is not implemented.
      Error loading C:\Path to Application\Application\Application.PresentationTests\bin\Debug\Application.PresentationTests.dll: The method or operation is not implemented.
      Error loading C:\Path to Application\Application\Application.DomainTests\bin\Debug\Application.DomainTests.dll: The method or operation is not implemented.

    I've verified that each project is set up with the appropriate ProjectTypeGuids for a test project in the project file. I've tried uninstalling and reinstalling NUnit for VS, but have had no luck. Does anyone have any advice on how I might start troubleshooting this? If I open each individual test project outside of the main solution (which includes all projects, by the way) and save it as its own solution, the tests run just fine. Nothing of note has changed since this stopped working. Thanks! Ian

  • How should I name a native DLL distributed in both 32-bit and 64-bit form?

    - by Spike0xff
    I have a commercial product that's a DLL (native 32-bit code), and now it's time to build a 64-bit version of it. So when installing on 64-bit Windows, the 32-bit version goes into Windows\SysWOW64, and the 64-bit version goes into... Windows\System32! (I'm biting my tongue here...) Or the DLL(s) can be installed alongside the client application. What should I name the 64-bit DLL?
    Same name as the 32-bit one: two files that do the same thing and have the same name, but are totally non-interchangeable. Isn't that a recipe for confusion and support problems?
    Different names (e.g. product.dll and product64.dll): now client applications have to know whether they are running 32-bit or 64-bit in order to reference my DLL, and there are languages where that isn't known until run time, .NET being just one example. And now all the statically compiled clients have to conditionalize the import declarations:

      IF target=WIN64 THEN
        import Blah from "product64.dll"
      ELSE
        import Blah from "product.dll"
      ENDIF

    The product contains massive amounts of C code and a large chunk of C++; porting it to C# is not an option. Advice? Suggestions?
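
    For the statically compiled C/C++ clients, a hedged sketch of what that conditional import can look like in practice (assuming MSVC and the product.dll / product64.dll naming; the header and macro names are made up):

      /* product_import.h - pick the right binary at build time */
      #ifdef _WIN64
      #  pragma comment(lib, "product64.lib")
      #  define PRODUCT_DLL_NAME "product64.dll"
      #else
      #  pragma comment(lib, "product.lib")
      #  define PRODUCT_DLL_NAME "product.dll"
      #endif

      /* Callers that bind at run time (e.g. .NET or scripting hosts that only
         learn their bitness when they start) can use the same two names with
         LoadLibrary/GetProcAddress or an equivalent dynamic loader. */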

  • Compiling a ClickOnce app that requires administrator?

    - by Assimilater
    Hi. A lot of my programs require the ability to write files to the hard drive. When I first made these programs for XP they worked great. Now I'm less ignorant about UAC (I got a new laptop recently), and for future customers I've noticed the potential for a LOT of annoying error messages... and quite frankly, if the program can't write data to the hard drive or the thumb drive it's on, there's no point in running it. I've tried multiple times to build into the manifest a requirement for administrator or user access (I'm not sure if anything less would solve the problem), but have failed because ClickOnce has security features in place to prevent me from doing so. I'd rather not have to tell my customers how to make the program run as administrator by editing the file's properties; I'd much rather have a convenient pop-up like the one new programs such as iTunes or FileZilla show when they conflict with UAC, requesting the privileges they need. I'd really like to do this but have had little success. Any and all advice that can remedy this grievous problem is appreciated. Thanks.
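
    For reference, the elevation request being described lives in app.manifest; a hedged sketch of the usual fragment is below. The caveat (and very likely the root of the problem) is that ClickOnce-deployed applications are expected to run asInvoker, so a manifest like this generally only works with a non-ClickOnce installer:

      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
          <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
            <!-- requireAdministrator triggers the UAC consent prompt at launch -->
            <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
          </requestedPrivileges>
        </security>
      </trustInfo>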

  • Semantically correct XHTML markup

    - by Dori
    Hello all. I'm just trying to get the hang of using semantically correct XHTML markup, writing the code for a small navigation item where each button effectively has a title and a description. I thought a definition list would therefore be great, so I wrote the following:

      <dl>
        <dt>Import images</dt>
        <dd>Read in new image names to database</dd>
        <dt>Exhibition Management</dt>
        <dd>Create / Delete an exhibition</dd>
        <dt>Image Management</dt>
        <dd>Edit name, medium and exhibition data</dd>
      </dl>

    But... I want the above to be three buttons, each button containing the dt and dd text. How can I do this with correct markup? Normally I would make each button a div and use that for the visual button behaviour (onHover and current-page selection stuff). Any advice please? Thanks.

  • What's the best way to only output a tag if it exists in XSL?

    - by Morinar
    I'm working on an interface with a 3rd-party app that basically needs to take XML that was spat out by the app and convert it into XML our system can deal with. It's basically just applying a stylesheet to the original XML to make it look like "our" XML. I've noticed that in other stylesheets we have, there are constructs like this:

      <xsl:for-each select="State">
        <StateAbbreviation>
          <xsl:value-of select="."/>
        </StateAbbreviation>
      </xsl:for-each>

    Basically, the "in" XML has a State tag that I need to output as our recognized StateAbbreviation tag. However, I want to output the StateAbbreviation tag ONLY if the "in" XML contains the State tag. The block above accomplishes this just fine, but is not very intuitive (at least it wasn't to me), as every time I see a for-each I assume there is more than one element, whereas in these cases there are 0 or 1. My question: is that a standard-ish construct? If not, is there a more preferred way to do it? I could obviously check the string length (which is also being done in other stylesheets), but I would like to do it the same, "best" way everywhere (assuming, of course, that a "best" way exists). Advice? Suggestions?
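
    Two hedged alternatives, sketched rather than taken from the existing stylesheets: an explicit xsl:if, or a dedicated template that simply produces nothing when no State element is selected:

      <!-- Option 1: explicit test -->
      <xsl:if test="State">
        <StateAbbreviation>
          <xsl:value-of select="State"/>
        </StateAbbreviation>
      </xsl:if>

      <!-- Option 2: push the node to a matching template; if State is absent,
           apply-templates selects nothing and no output is produced. -->
      <xsl:apply-templates select="State"/>

      <xsl:template match="State">
        <StateAbbreviation>
          <xsl:value-of select="."/>
        </StateAbbreviation>
      </xsl:template>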

  • C++ syntax issue

    - by Doug
    It's late and I can't figure out what is wrong with my syntax. I have asked other people and they can't find the syntax error either, so I came here on a friend's advice.

      template <typename TT>
      bool PuzzleSolver<TT>::solve ( const Clock &pz )
      {
          possibConfigs_.push( pz.getInitial() );
          vector< Configuration<TT> > next_;
          // error is on the next line
          map< Configuration<TT>, Configuration<TT> >::iterator found;
          while ( !possibConfigs_.empty() && possibConfigs_.front() != pz.getGoal() )
          {
              Configuration<TT> cfg = possibConfigs_.front();
              possibConfigs_.pop();
              next_ = pz.getNext( cfg );
              for ( int i = 0; i < next_.size(); i++ )
              {
                  found = seenConfigs_.find( next_[i] );
                  if ( found != seenConfigs_.end() )
                  {
                      possibConfigs_.push( next_[i] );
                      seenConfigs_.insert( make_pair( next_[i], cfg ) );
                  }
              }
          }
      }

    What is wrong? Thanks for any help.
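
    A likely culprit, offered as a guess from the snippet alone: inside a template, map< Configuration<TT>, Configuration<TT> >::iterator is a dependent type, so the compiler needs the typename keyword before it will parse the declaration as a type:

      // Dependent nested types need 'typename' inside a template:
      typename map< Configuration<TT>, Configuration<TT> >::iterator found;

      // The same would apply to any other dependent iterator, e.g.
      // typename vector< Configuration<TT> >::iterator it;

    As a separate note, solve is declared to return bool but never returns a value, which most compilers will at least warn about.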

  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all. Since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project involving a few RESTful web services that consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC templates for the data persistence. As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotation-driven way of doing things quite a paradigm shift, and have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications; as such, I am often having to dig for information in forums and blogs that is not apparent from going through the reference guide or from the various Spring 3 REST examples on the web. For deadline-driven, production-quality and mission-critical applications implementing a RESTful architecture, should I be holding off from Spring 3 and rather be using mature JSR 311 (JAX-RS) compliant frameworks like Restlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using Restlet 1.x in a previous project and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST support in Spring 3's MVC framework would make life easier. Do any of you out there have any advice to give on this? Does anyone know of any commercial or production-quality projects using, or having successfully delivered with, the new REST support in Spring 3's MVC framework? Many thanks, Glen

  • How to generate filenames with increasing numbers in C?

    - by zaplec
    Hi, I have a little problem. I need to do some small operations on quite a few files in one little program. So far I have decided to operate on them in a single loop where I just change the number after the name. The files are all named TFxx.txt, where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this:

      for(i=0; i<=80; i++) {
          char name[8] = "TF"+i+".txt";
          FILE = open(name, r);
          /* Do something */
      }

    As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C in this program, but I haven't found out yet how to do it. The format doesn't need to be as it is on the second line, but I'd like some advice on how to solve this problem. All I need is to be able to open many files and do the same operations on them.
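
    A hedged sketch of the usual C approach: build each name with snprintf and open it with fopen (the buffer size assumes names no longer than "TF80.txt"):

      #include <stdio.h>

      int main(void)
      {
          char name[16];                  /* room for "TF80.txt" plus '\0' */
          int i;

          for (i = 1; i <= 80; i++) {
              FILE *fp;
              snprintf(name, sizeof name, "TF%d.txt", i);
              fp = fopen(name, "r");
              if (fp == NULL) {
                  continue;               /* skip names that do not exist */
              }
              /* ... do something with fp ... */
              fclose(fp);
          }
          return 0;
      }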

  • Is ADO.NET Entity framework database schema update possible?

    - by fyasar
    I'm working on a proof-of-concept application, something like a CRM, and I need some advice. My application's data layer is completely dynamic and runs on EF 3.5. When the user updates an entity, changes a relation or adds a new column to the database, I plan to handle these changes with custom classes first, and then rebuild my database model layer with the new changes at application runtime. My model layer is coupled to my project only through interfaces and is loaded into the application domain at runtime, so model-layer changes are easy to pick up. I need to create dynamic entities, create entity relations and modify them at runtime, and after that I need to generate a change script to update the database schema. I know the ADO.NET team says "we will be able to provide this in EF 4.0", but I don't want to wait for them. How can I apply database changes at runtime via EF 3.5? For example, if I need to create a new entity or change an entity's schema, add new properties or change property types, how can I then apply these changes to the physical database schema? Any ideas?

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area that shows the time. Also, the digit colour (white/black) keeps changing during the video. http://i55.tinypic.com/2j5gca8.png Please guide me in the right direction for approaching this problem. I am a Java programmer, so I would prefer a Java-based approach. EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is well below our requirement. Since the OCR performance is less than desired, I was planning to build a character set using screen grabs of all the digits, and then use an image/pixel comparison library to compare the frame time against the character set, giving a probabilistic result. So I am looking for a good image-comparison library (I would be OK with a non-Java library that I can run from the command line). Also, any advice on the above approach would be really helpful.

  • Which is faster in a large "for" loop: a function call or inline code?

    - by zaplec
    Hi, I have programmed some embedded software (using C, of course) and now I'm considering ways to improve the system's running time. The most important single module in my system is one very large nested-for-loop module. That module consists of two nested for loops that loop up to 122,500 times. That's not very much yet, but the problem is that inside that nested for loop I have a call to a function in another source file. That function consists mostly of another two nested for loops, which always loop 22,500 times. So now I have to make that function call 122,500 times. I have made the called function a lot lighter and shorter (it still works as it should), and now I have started to wonder: would it be faster to remove the function call and write that code directly inside the first two for loops? The processor in the system is an ARM7TDMI and its frequency is 55 MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable; however, the faster it can process its duties the better. Also, would it be faster to use while loops instead of for loops? Any advice about how to improve the running time is appreciated. -zaplec
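
    A hedged middle ground between hand-inlining and a cross-file call (names and loop bounds below are illustrative only): move the hot function into a header as static inline so the compiler can expand it at each call site, then profile on the ARM7TDMI to see whether the call overhead was ever significant compared with the 22,500 iterations of work inside it:

      /* hot_path.h - candidate for inlining into the outer nested loops */
      #ifndef HOT_PATH_H
      #define HOT_PATH_H

      static inline int process_block(const int *block, int width, int height)
      {
          int x, y, acc = 0;
          for (y = 0; y < height; y++) {      /* e.g. 150 x 150 = 22500 passes */
              for (x = 0; x < width; x++) {
                  acc += block[y * width + x];
              }
          }
          return acc;
      }

      #endif /* HOT_PATH_H */

    As for the loop keyword itself: for and while compile to the same compare-and-branch structure, so swapping one for the other is almost never a measurable win.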

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database to which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as CSV:

      for table_name, file_name in zip(table_names, file_names):
          cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
          os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. The best option would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries: one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated. Thanks!
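
    A hedged sketch of the two-query fallback in Python (assuming psycopg2 is available, SELECT on information_schema is permitted, and the connection string is a placeholder): fetch the column types first, then the rows, so the local mirror can be rebuilt with matching types:

      import csv
      import psycopg2

      conn = psycopg2.connect("host=remote_host dbname=remote_db user=readonly_user")
      cur = conn.cursor()

      def dump_table(table_name, out_prefix):
          # 1) column names and declared types
          cur.execute(
              "SELECT column_name, data_type FROM information_schema.columns "
              "WHERE table_name = %s ORDER BY ordinal_position",
              (table_name,))
          columns = cur.fetchall()
          with open(out_prefix + ".schema.txt", "w") as f:
              for name, dtype in columns:
                  f.write("%s\t%s\n" % (name, dtype))

          # 2) the rows themselves, written as tab-separated values
          cur.execute("SELECT * FROM %s" % table_name)   # table_name comes from a trusted list
          with open(out_prefix + ".tsv", "w") as f:
              writer = csv.writer(f, delimiter="\t")
              writer.writerow([name for name, _ in columns])
              for row in cur:
                  writer.writerow(row)

      dump_table("some_table", "some_table")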

  • How to completely clear a select2 control?

    - by Candil
    I'm working with the awesome select2 control. I'm trying to clear the select2, contents and all, and then disable it, so I do this:

      $("#select2id").empty();
      $("#select2id").select2("disable");

    OK, it works, but if I had a value selected, all the items are removed and the control is disabled, yet the selected value is still displayed. I want to clear all the content so that the placeholder is shown. Here is an example where you can see the issue: http://jsfiddle.net/BSEXM/

    HTML:

      <select id="sel" data-placeholder="This is my placeholder">
        <option></option>
        <option value="a">hello</option>
        <option value="b">all</option>
        <option value="c">stack</option>
        <option value="c">overflow</option>
      </select>
      <br>
      <button id="pres">Disable and clear</button>
      <button id="ena">Enable</button>

    Code:

      $(document).ready(function () {
          $("#sel").select2();

          $("#pres").click(function () {
              $("#sel").empty();
              $("#sel").select2("disable");
          });

          $("#ena").click(function () {
              $("#sel").select2("enable");
          });
      });

    CSS:

      #sel { margin: 20px; }

    Do you have any idea or advice for this?
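
    A hedged sketch (based on the select2 3.x API that the fiddle appears to use): ask the plugin itself to drop the current selection before emptying and disabling the underlying select, so the rendered label resets and the placeholder can show again:

      $("#pres").click(function () {
          $("#sel").select2("val", "");   // clear what select2 is displaying
          $("#sel").empty();              // remove the option elements
          $("#sel").select2("disable");   // then disable the control
      });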

  • Handle window scroll event in greasemonkey script

    - by Akim Khalilov
    Hi. I need some advice. I have a web page and want to extend its functionality with a Greasemonkey script in Firefox. When the page has loaded, I need to run a custom function while the user scrolls the page (with the mouse wheel or the scrollbar). I want to show a div block when the user scrolls down and hide it when they scroll back to the top. But I have met a problem: I couldn't assign a handler to the onscroll event. I use this part of the code:

      function showFixedBlock(){ ... }

      function onScrollStart(){
          ...
          showFixedBlock();
          ...
      }

      window.onscroll = onScrollStart;

    I tested this piece of code on my test HTML page and it works, but when I copy it into Greasemonkey, the script doesn't work. Should I assign the onscroll event handler during page loading? As far as I know, Greasemonkey executes its scripts once the page has loaded; is that the reason for the problem? Are there additional requirements for handling the onscroll event? How can I do this? Thanks.
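
    A hedged sketch of the form that usually works from a user script: register through addEventListener instead of assigning to window.onscroll (property assignments from Greasemonkey's sandbox often do not take effect on the real page, whereas event listeners do). The element id below is made up:

      // ==UserScript==
      // @name     scroll-toggle sketch
      // @include  http://example.com/*
      // ==/UserScript==

      function onScrollStart() {
          var y = window.pageYOffset || document.documentElement.scrollTop;
          var block = document.getElementById('fixedBlock');   // hypothetical div
          if (block) {
              // show the block once the user has scrolled down, hide it at the top
              block.style.display = (y > 0) ? 'block' : 'none';
          }
      }

      window.addEventListener('scroll', onScrollStart, false);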

  • Vehicle License Plate Detection

    - by Ash
    Hey all. Basically, for my final project at university I'm developing a vehicle license plate detection application. Now, I consider myself an intermediate programmer, however my mathematics knowledge doesn't go beyond secondary school level, therefore producing detection formulae is basically impossible. I've spent a good amount of time looking up academic papers such as:
    http://www.scribd.com/doc/266575/Detecting-Vehicle-License-Plates-in-Images
    http://www.cic.unb.br/~mylene/PI_2010_2/ICIP10/pdfs/0003945.pdf
    http://www.eurasip.org/Proceedings/Eusipco/Eusipco2007/Papers/d3l-b05.pdf
    When it comes to the maths, I'm lost. Because of this, testing various graphic transformations proved productive (the before/after example images are not reproduced here). However, that approach is only catered to one particular image, and if the same techniques were applied to different images, I'm sure a different, most likely poorer, conversion would occur. I've read about a formula called the bottom-hat morphology transform, which according to the first paper does the following: "Basically, the transformation keeps all the dark details of the picture, and eliminates everything else (including bigger dark regions and light regions)." Sadly I can't find much information on this; however, the image near the end of that report shows its effectiveness. I'm aware this is complicated and vast; I'd just appreciate a little advice, even in terms of which transformation techniques I should focus on developing, or algorithms for edge detection or pixel detection. A few things I need to add:
    - Developing in C#
    - Confining the project to UK registration plates only
    - I can basically choose the images to convert as a demonstration
    Thanks

  • ASP.NET MVC stand-alone ascx controls: how do I link CSS and JS most efficiently?

    - by Julian
    Hi, I need some advice. I have developed some ASP.NET MVC web pages. Each page has a master and some ascx controls (between 2 and 6), and embeds a JS and a CSS file. Up to now everything has been fine. In order to improve modularity, flexibility and testability, the ascx's are now expected to be able to work as stand-alone controls. (Each ascx has also got its own CSS and JS files, and in some cases it has another control inside it.) To meet this requirement we call the controller with the relevant parameters and it returns the ascx (partial) directly to the browser, without all of the other parts of the original page. In order to get it to display correctly (CSS) and act correctly (JS/jQuery), all of the relevant files need to be added (as links or scripts, e.g. href="<%= ResolveUrl(styleSheet)%>") to the user control. This contradicts the idea of positioning the files in the most logical place (which could be the master page, for example). How can I overcome this problem? Keep in mind that this is relevant for each "control" ascx file. Any thoughts will be appreciated.

  • Using twig variable to dynamically call an imported macro sub-function

    - by Chausser
    I am attempting to use a variable to call a specific macro name. I have a macros file that is being imported:

      {% import 'form-elements.html.twig' as forms %}

    In that file are all the form-element macros: text, textarea, select, radio, etc. I have an array variable that gets passed in with elements in it:

      $elements = array(
          array(
              'type' => 'text',
              'value' => 'some value',
              'atts' => null,
          ),
          array(
              'type' => 'text',
              'value' => 'some other value',
              'atts' => null,
          ),
      );

      {{ elements }}

    What I'm trying to do is generate those elements from the macros. They work just fine when called by name:

      {{ forms.text(element.0.name, element.0.value, element.0.atts) }}

    However, what I want to do is something like this:

      {% for element in elements %}
          {{ forms[element.type](element.name, element.value, element.atts) }}
      {% endfor %}

    I have tried the following, all resulting in the same error:

      {{ forms["'"..element.type.."'"](element.name,element.value,element.atts) }}
      {{ forms.(element.type)(element.name,element.value,element.atts) }}
      {{ forms.{element.type}(element.name,element.value,element.atts) }}

    This unfortunately throws the following error:

      Fatal error: Uncaught exception 'LogicException' with message 'Attribute "value" does not exist for Node "Twig_Node_Expression_GetAttr".' in Twig\Environment.php on line 541

    Any help or advice on a solution, or a better schema to use, would be very helpful.
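
    One hedged workaround, sketched from memory and dependent on the Twig version in use: route the call through Twig's attribute() function, which accepts a dynamic name, instead of the dot/bracket syntax that Twig resolves at compile time:

      {% for element in elements %}
          {{ attribute(forms, element.type, [element.name, element.value, element.atts]) }}
      {% endfor %}

    If that is not supported in the installed version, a small dispatcher macro with an explicit if/elseif over the known types is a more verbose but version-proof fallback.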
