Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter) {
            // Do stuff.
        }
        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter) {
            switch (id) {
            case ::SomeClass1::id: return do_SomeFunc1(parameter);
            case ::SomeClass2::id: return do_SomeFunc2(parameter);
            // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed compilation while still maintaining the superior cache locality of pulling the functions out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but is still a good metric; the file in question takes the most time to build):

        Debug:   8 minutes, 50 seconds
        Release: 4 minutes, 25 seconds
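
    "Variable tracking" is the GCC pass that records where each variable lives at every point in a function for debug info, and its cost grows badly with function size, which fits debug builds exploding on a 4000-case dispatcher. A hedged sketch of the usual workaround (the filename is illustrative): compile just the generated file with the pass disabled, keeping -g everywhere else:

        g++ -g -O0 -fno-var-tracking-assignments -fno-var-tracking -c generated_dispatch.cpp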

    Read the article

  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. Through the course of writing my code, I was tempted to just roll my own Vector class with simple +, -, *, / etc. overloads so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];
        // becomes:
        r = r1 - r2;

    which should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example that I have in my code is this. Manually written version:

        for (int j = 0; j < 3; j++) {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2 +
                       (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2 +
                       (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    Using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, as it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vector class for these sorts of things really affect me? For those curious, this is part of some numerical integration code which is non-trivial to run in my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
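
    For a fixed-size 3-vector with inline operators, an optimizing compiler can generally lower r = r1 - r2 to exactly the hand-written loop. A minimal sketch of the kind of class being discussed (the name Vec3 and its layout are illustrative, not from the original code):

        // Small fixed-size vector; everything is inline so the optimizer
        // can collapse whole expressions into straight-line arithmetic.
        struct Vec3 {
            double v[3];
            Vec3() { v[0] = v[1] = v[2] = 0.0; }
            Vec3(double x, double y, double z) { v[0] = x; v[1] = y; v[2] = z; }
            double& operator[](int i)       { return v[i]; }
            double  operator[](int i) const { return v[i]; }
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            return Vec3(a.v[0] + b.v[0], a.v[1] + b.v[1], a.v[2] + b.v[2]);
        }
        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            return Vec3(a.v[0] - b.v[0], a.v[1] - b.v[1], a.v[2] - b.v[2]);
        }
        inline Vec3 operator*(const Vec3& a, double s) {
            return Vec3(a.v[0] * s, a.v[1] * s, a.v[2] * s);
        }

    The caveat is long chained expressions, where each operator returns a temporary; with a 3-element aggregate these usually optimize away at -O2, and expression templates are the classic remedy when they don't.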

    Read the article

  • getting data from dynamic schema

    - by coure2011
    I am using mongoose/nodejs to get data as JSON from mongodb. For using mongoose I need to define a schema first, like this:

        var mongoose = require('mongoose');
        var Schema = mongoose.Schema;

        var GPSDataSchema = new Schema({
            createdAt: { type: Date, default: Date.now }
            ,speed: { type: String, trim: true }
            ,battery: { type: String, trim: true }
        });

        var GPSData = mongoose.model('GPSData', GPSDataSchema);

        mongoose.connect('mongodb://localhost/gpsdatabase');
        var db = mongoose.connection;
        db.on('open', function() {
            console.log('DB Started');
        });

    Then in code I can get data from the db like:

        GPSData.find({ "createdAt": { $gte: dateStr, $lte: nextDate } }, function(err, data) {
            res.writeHead(200, {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            });
            var body = JSON.stringify(data);
            res.end(body);
        });

    How do I define a schema for complex data like this? You can see that subSection can nest to any depth.

        [
            {
                'title': 'Some Title',
                'subSection': [{
                    'title': 'Inner1',
                    'subSection': [
                        { 'title': 'test', 'url': 'ab/cd' }
                    ]
                }]
            },
            ..
        ]

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds.

    I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need a more advanced solution?

    I have also been thinking about cutting the files up into smaller files, so that 1 file with 100,000,000 lines is transformed into 100 files with 1,000,000 lines. But I do not know what is best: 10 files with 100 million lines or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever. So I think, maybe, it is just too big to be used. But I don't know if you can write code that will scan it without opening it.

    Speed is the most important factor in this, and I need to know if it can be done as fast as I need, or if I have to store my data in another way, for example, in a database like MySQL or something. Thank you in advance to anybody that can give some good feedback.
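
    Since every value fits in [0, 1,000,000,000), one workable approach is a pair of ~125 MB bitmaps: mark which numbers occur in the first file, then AND in each remaining file. A sketch under that assumption (one number per line); note that raw disk speed, not the algorithm, decides whether 2 seconds is achievable - 10 files of 100 million lines is on the order of 10 GB of text, so the files would need to be on very fast storage or already cached:

        // intersect.cpp - sketch: print the numbers appearing in ALL input files.
        // Usage: ./intersect file1.txt file2.txt ... file10.txt
        #include <algorithm>
        #include <cstdio>
        #include <cstdlib>
        #include <stdint.h>
        #include <vector>

        static const size_t RANGE = 1000000000UL;    // 9-digit numbers
        static const size_t WORDS = RANGE / 64 + 1;  // ~125 MB of bits

        static void markFile(const char* path, std::vector<uint64_t>& bits) {
            FILE* f = std::fopen(path, "r");
            if (!f) { std::perror(path); std::exit(1); }
            unsigned long n;
            while (std::fscanf(f, "%lu", &n) == 1)   // set bit n
                bits[n >> 6] |= 1ULL << (n & 63);
            std::fclose(f);
        }

        int main(int argc, char** argv) {
            if (argc < 2) return 1;
            std::vector<uint64_t> common(WORDS, 0), seen(WORDS, 0);
            markFile(argv[1], common);               // numbers in file 1
            for (int i = 2; i < argc; ++i) {
                std::fill(seen.begin(), seen.end(), 0);
                markFile(argv[i], seen);
                for (size_t w = 0; w < WORDS; ++w)   // keep only survivors
                    common[w] &= seen[w];
            }
            for (size_t w = 0; w < WORDS; ++w)       // print the intersection
                for (unsigned b = 0; b < 64; ++b)
                    if (common[w] & (1ULL << b))
                        std::printf("%lu\n", (unsigned long)(w * 64 + b));
            return 0;
        }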

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)

    The problem has the following qualities and resiliences:

      - Problem persists no matter what the target table is.
      - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
      - Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
      - CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe).
      - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
      - OLE DB destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • java url connection, wait for data being sent through the outputstream

    - by Mateu
    I'm writing a Java class that tests the upload speed of a connection to a server. I want to check how much data can be sent in 5 seconds. I've written a class which creates a URL, creates a connection, and sends data through the OutputStream. There is a loop where I write data to the stream for 5 seconds. However, I'm not able to tell when the data has actually been sent (I write data to the output stream, but the data may not have been sent yet). How can I wait until the data is really sent to the server? Here goes my code (which does not work):

        URL u = new URL(url);
        HttpURLConnection uc = (HttpURLConnection) u.openConnection();
        uc.setDoOutput(true);
        uc.setDoInput(true);
        uc.setUseCaches(false);
        uc.setDefaultUseCaches(false);
        uc.setRequestMethod("POST");
        uc.setRequestProperty("Content-Type", "application/octet-stream");
        uc.connect();
        st.start();

        // Send the request
        OutputStream os = uc.getOutputStream();

        // This while loop is incorrect because it does not wait for the data to be sent
        while (st.getElapsedTime() < miliSeconds) {
            os.write(buffer);
            os.flush();
            st.addSize(buffer.length);
        }
        os.close();

    Thanks

    Read the article

  • PHP-driven site needs password change

    - by Drea
    I have inherited a website and need to change the password it uses to access its database. I can see that there are two tables within the database, but neither of them holds username or password info. The previous web guy moved out of the country and can't be reached. I am not up-to-speed enough to figure this out. I have gone through all the files to try and find the answer but can't get it. It's hosted by goDaddy.com, and I have changed the passwords there, but that didn't change this login info.

    www.executivehomerents.com/cpanel <- this brings up the prompt for the username & password (which I won't give out), but the page only gives you 5 choices, and none of them deal with changing the password. They are simply to change the data in the tables. If you go to http://www.websitedatabases.com/ <= this is the company that the PHPMagic program was purchased from - they have no contact number. Here is another page that might help: http://www.executivehomerents.com/cpanel/dbinput/setup.php

    I don't think I'll get an answer to this, but it's worth a try... thanks.

    Read the article

  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application. Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?

    Read the article

  • What are CAD apps written in, and how are they organized?

    - by ldigas
    What are today's CAD applications (Rhino, AutoCAD) written in, and how are they organized internally? I gave AutoCAD and Rhino as examples, although I would love to hear of other examples as well. I'm particularly interested in knowing what their backends are written in (multiple languages?) and how they are organized, and how they handle their frontend (GUI) in real time. Do they use native Windows APIs or some libraries of their own? I imagine, as good as they may be, the open-source solutions on today's market won't cut it. I may be wrong...

    As most of you who have used them know, they handle, among other things, relatively complex rotational operations in real time (shading doesn't interest me). I've been doing some experiments with several packages recently, and for some larger models found that there is a considerable difference in speed in, for example, programmed rotation (big full ship models) amongst some of them (which I won't name). So I'm wondering about their internals... Also, if someone knows of a book on the subject, I'd be interested to hear of it.

    Read the article

  • Concept: Is mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is worthwhile for one of our upcoming products to be developed on Mongo. Without going too much into detail, I'll try to explain what the app does.

    The app simply has "entities". These entities are technical items, like cell phones, TVs, laptops, tablet PCs, and so forth. Of course, a cell phone has different attributes than a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want to have something that we want to call a schema: we define that for tablet PCs we need to save the display size, amount of RAM, size of flash storage, processor type, processor speed and so on. For cell phones we might save display size, GSM, EDGE, 3G, 4G, processor, RAM, touch-screen technology, bla bla bla. I think you got it :)

    What I want to realize is that each "category" has a schema, and when one of the system's users enters a new product (let's say the new iPhone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with Mongo. But now the tough part, for which I could not find a clean solution... An attribute modeled in Mongo looks like:

        { _id: 1234456, name: "Attribute name", type: 0, "description" }

    But what to do if I need this attribute in several languages, like:

        {
            en: { name: "Attribute name", type: 0, "description" },
            de: { name: "Name des Attributs", type: 0, "Beschreibung" }
        }

    I also need to ensure that the German attribute gets updated as soon as the English one is updated, for instance when type changes from 0 to 1. Any ideas on that?

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction info?

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

    For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check to make sure there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, total inserts left to be done, and total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is by a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass {
            ...
            unsigned Flags;
            int IsFlag1Set()  { return Flags & FLAG1; }
            void SetFlag1()   { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            ...
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass {
            ...
            struct Flags {
                unsigned Flag1 : 1;
                unsigned Flag2 : 1;
                unsigned Flag3 : 1;
            };
            ...
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory-efficient, which could well be why it was written this way in the first place.
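
    For what it's worth, for single-bit flags like these, both styles typically compile down to the same load/mask/store instructions, so neither should be inherently slower. The cases usually cited as slow involve bit fields straddling storage-unit boundaries or the read-modify-write cycles needed for sub-word stores. A side-by-side sketch (illustrative, not the original code):

        // Both forms read the word, mask, and write it back; the compiler
        // emits equivalent AND/OR instructions for either one.
        struct FlagBits {            // bit-field version
            unsigned flag1 : 1;
            unsigned flag2 : 1;
            unsigned flag3 : 1;
        };

        struct FlagMask {            // manual version; ~FLAG is tidier than 0xffffffff ^ FLAG
            unsigned flags;
            bool isFlag1() const { return (flags & 0x1u) != 0; }
            void setFlag1()      { flags |= 0x1u; }
            void clearFlag1()    { flags &= ~0x1u; }
        };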

    Read the article

  • Android: Speeding up display of (html-formatted) text

    - by prepbgg
    My app uses a StringBuilder to assemble paragraphs of text which are then displayed in a TextView within a ScrollView. The displaytext.xml layout file is:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout android:id="@+id/LinearLayout01"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:background="#FFFFFF"
            xmlns:android="http://schemas.android.com/apk/res/android">
            <ScrollView android:id="@+id/ScrollView01"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content">
                <TextView xmlns:android="http://schemas.android.com/apk/res/android"
                    android:id="@+id/display_text"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:textColor="#000000">
                </TextView>
            </ScrollView>
        </LinearLayout>

    and the code that displays the StringBuilder object sbText is:

        setContentView(R.layout.displaytext);
        TextView tv = (TextView) findViewById(R.id.display_text);
        tv.setText(Html.fromHtml(sbText.toString()));

    This works OK, except that it gets very slow as the amount of text grows. For example, to display 50 paragraphs totalling about 50KB of text takes over 5 seconds just to execute those three lines of code. Can anyone suggest how I can speed this up, please?

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

      - Define a master test suite that contains all tests (e.g. using ClasspathSuite).
      - Design a sufficient set of JUnit categories (sufficient means that every desirable collection of tests is identifiable using one or more categories).
      - Define targeted test suites based on the master test suite and the set of categories.

    For example:

      - Identify categories for speed (slow, fast), dependencies (mock, database, integration), function (…), domain (…).
      - Demand that each test is properly qualified (tagged) with the relevant set of categories.
      - Create a master test suite using ClasspathSuite (all tests found in the classpath).
      - Create targeted suites by qualifying the master test suite with categories, e.g. mock test suite, fast database test suite, slow integration for domain X test suite, etc.

    My question is more like soliciting an approval rate for such an approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained in the relevant suites with no suite maintenance. One concern is proper categorization of each test.

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches have begun to take time. So I want to speed things up by archiving data that is more than 3 years old - saving it into a table called orders-archive and then purging those rows from the orders table. If we need to research something or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through the older data all the time. These move & purge operations can then be cron'd to run on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right?

    However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and god knows how many searches / finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

    Read the article

  • "Finding" an object instance of a known class?

    - by Sean C
    My first post here (anywhere, for that matter!), re. Cocoa/Obj-C (I'm NOT up to speed on either, please be patient!). I hope I haven't missed the answer already; I did try to find it. I'm an old-school procedural dog (haven't done any programming since the mid 80's, so I probably just can't even learn new tricks), but OOP has my head spinning! My question is: is there any means at all to "discover/find/identify" an instance of an object of a known class, given that some OTHER unknown process instantiated it? E.g. something that would accomplish this scenario:

        (id) anObj = [someTarget getMostRecentInstanceOf:[aKnownClass class]];

    For that matter, "getAnyInstance" or "getAllInstances" might do the trick too.

    Background: I'm trying to write a plugin for a commercial application, so much of the heavy lifting is being done by the app, behind the scenes. I have the SDK & header files, I know what class the object is and what method I need to call (it has only instance methods), I just can't identify the object for targeting. I've spent untold hours and days going over Apple's documentation, tutorials and lots of example/sample code on the web (including here at Stack Overflow), and come up empty. It seems that everything requires a known target object to work, and I just don't have one. Since I may not be expressing my problem as clearly as needed, I've put up a web page, with a diagram & working sample pages, to illustrate: http://www.nulltime.com/svtest/index.html Any help or guidance will be appreciated! Thanks.

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on; we really need it. Without disabling blend mode, how would we improve performance for this app? For instance, if we now draw 1 texture across the whole screen, the app only gets 15 FPS, which is really slow, I think? Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void) draw {
            GLuint textureAvailable = 0;
            if (texture != nil) {
                textureAvailable = 1;
            }

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);

            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is only
            // used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    Read the article

  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. So each set may be stored as a bit field (e.g. 0000100111...) of a relatively short length (e.g. 1024). That is, bit X in the bitfield indicates whether item X (of 1024 possible items) is included in the given set or not.

    Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time.

    The simplest way to solve this would be to AND the bitfield for set Y with the bitfield of every set in the data store one by one, picking the ones whose AND result matches Y's bitfield. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bitfield? Are there databases that already support such operations on large collections of sets?
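
    The per-candidate test itself is only a few instructions: Y is a subset of X exactly when (x & y) == y holds for every word. A sketch of that inner test, plus the usual pruning idea (an assumption here, not from the question): keep an inverted index from each of the 1024 items to the IDs of sets containing it, and run the full test only on the candidate list for the rarest item set in Y.

        #include <stdint.h>

        static const int WORDS = 1024 / 64;   // 16 words per 1024-bit set

        // True if every bit set in y is also set in x (y is a subset of x).
        bool isSubset(const uint64_t y[WORDS], const uint64_t x[WORDS]) {
            for (int i = 0; i < WORDS; ++i)
                if ((x[i] & y[i]) != y[i])
                    return false;
            return true;
        }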

    Read the article

  • genStrAsCharArray optimisation benefits

    - by Rich
    Hi, I am looking into the options available to me for optimising the performance of JBoss 5.1.0. One of the options I am looking at is setting genStrAsCharArray to true in <JBOSS_HOME>/server/<PROFILE>/deployers/jbossweb.deployer/web.xml. This affects the generation of .java code from .jsp files. The comment describes this flag as: "Should text strings be generated as char arrays, to improve performance in some cases?"

    I have a few questions about this:

      1. Is this the generation of Strings in the dynamic parts of the JSP page (i.e. each time the page is called), or is it the generation of Strings in the static parts (i.e. when the .java is built from the JSP)?
      2. "In some cases" - which cases are these? What are the situations where the performance is worse?
      3. Does this speed up the generation of the .java, the compilation of the .class, or the execution of the .class?
      4. At a more technical level (and the answer to this will probably depend on the answer to part 1), why can the use of char arrays improve performance?

    Thanks in advance
    Rich

    Read the article

  • Creating many polygons with OpenGL is slow?

    - by user146780
    I want to draw many polygons to the screen, but I'm noticing that it slows down quickly. As a test I did this:

        for (int i = 0; i < 50; ++i) {
            glBegin(GL_POLYGON);
            glColor3f(0.0f, 1, 0.0f);
            glVertex2f(500.0 + frameGL.GetCameraX(), 0.0f + frameGL.GetCameraY());
            glColor3f(0.0f, 1.0f, 0.0f);
            glVertex2f(900.0 + frameGL.GetCameraX(), 0.0f + frameGL.GetCameraY());
            glColor3f(0.0f, 0.0f, 0.5);
            glVertex2f(900.0 + frameGL.GetCameraX(), 500.0f + frameGL.GetCameraY() + (150));
            glColor3f(0.0f, 1.0f, 0.0f);
            glVertex2f(500 + frameGL.GetCameraX(), 500.0f + frameGL.GetCameraY());
            glColor3f(1.0f, 1.0f, 0.0f);
            glVertex2f(300 + frameGL.GetCameraX(), 200.0f + frameGL.GetCameraY());
            glEnd();
        }

    This is only 50 polygons and already it's getting slow. I can't upload them directly to the card because my program will allow the user to reshape the vertices. My question is: how can I speed this up? I'm not using depth. I also know it's not my GetCamera() functions, because if I create 500,000 polygons spread apart it's fine; it just has trouble showing them in the view. If a graphics card can support 500,000,000 on-screen polygons per second, this should be easy, right? Thanks
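
    The cost here is mostly call overhead: each polygon makes a dozen driver calls per frame. Even with the data staying user-editable on the CPU, the usual first step is to batch everything into one client-side vertex array and issue a single draw call. A sketch with illustrative names (note that GL_TRIANGLES requires triangulating the 5-sided polygons first):

        // Sketch: replace per-vertex glBegin/glVertex calls with one
        // client-side vertex array; the data stays editable on the CPU.
        #include <GL/gl.h>
        #include <vector>

        struct Vtx { GLfloat x, y; GLubyte r, g, b, a; };
        std::vector<Vtx> batch;      // refilled each frame as vertices change

        void drawBatch() {
            if (batch.empty()) return;
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glVertexPointer(2, GL_FLOAT, sizeof(Vtx), &batch[0].x);
            glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vtx), &batch[0].r);
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);
        }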

    Read the article

  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL. I would like to speed up my texture drawing by using VBOs; currently I am using immediate mode. I generate my own coordinates when I rotate and scale a texture. I also have the ability to round the corners of a texture, using the polygon primitive to draw those. I was thinking: would it be fastest to make a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? What is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I could use the same VBO, unchanged, to draw the texture each time, and only change the VBO when I need to add coordinates for the rounded corners.

    Is that the best way to do this? What things should I look out for while doing it? Is it really fastest to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
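
    On the transform question: glViewport selects the drawable region of the window and isn't meant for placing objects, while glTranslatef/glRotatef/glScalef multiply the current modelview matrix, so a single shared unit-quad VBO can be reused for every texture. A hedged sketch of that idea (names are illustrative; assumes the VBO holds 4 vertices of a 1x1 quad and GL_VERTEX_ARRAY is already enabled):

        void drawSprite(GLuint unitQuadVbo, float x, float y,
                        float angleDeg, float w, float h) {
            glBindBuffer(GL_ARRAY_BUFFER, unitQuadVbo);
            glVertexPointer(2, GL_FLOAT, 0, 0);      // quad data lives in the VBO
            glPushMatrix();
            glTranslatef(x, y, 0.0f);                // move into place
            glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);   // spin about the quad's origin
            glScalef(w, h, 1.0f);                    // stretch the unit quad to size
            glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
            glPopMatrix();
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Order matters: translate-then-rotate-then-scale spins the quad about its own origin, whereas the reverse order rotates it around the world origin. As for GL_TRIANGLES vs GL_QUADS, drivers decompose quads into triangles anyway, so the difference is rarely measurable; triangles are the safer choice since GL_QUADS was deprecated in later GL versions.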

    Read the article

  • How can I embed images within my application and use them in HTML control?

    - by Atara
    Is there any way I can embed the images within my exe (as a resource?) and use them in generated HTML? Here are the requirements:

      A. I want to show dynamic HTML content (e.g. using the webBrowser control; VS 2008, VB.NET, WinForms desktop application).
      B. I want to generate the HTML on-the-fly using XML and XSL (file1.xml or file2.xml transformed by my.xsl).
      C. The HTML may contain IMG tags (file1.gif and/or file2.gif according to the xml+xsl transformation).

    And here comes the complicated one:

      D. All these files (file1.xml, file2.xml, my.xsl, file1.gif, file2.gif) should be embedded in one exe file.

    I guess the XML and XSL can be embedded resources, and I can read them as streams, but what ways do I have to reference the images within the HTML? <IMG src="???" /> I do not want to use absolute paths and external files.

      - If the image files are resources, can I use a relative path? Relative to what? (I can use the BASE tag, and then what?)
      - Can I use a stream, as in email messages? If so, where can I find the format I need to use? The inline-image formats at http://www.websiteoptimization.com/speed/tweak/inline-images/ are browser-dependent. What browser does the webBrowser control use? IE? What version?
      - Does it matter if I use GIF or JPG or BMP (or any other image format) for the images?
      - Does it matter if I use the mshtml library and not the regular webBrowser control? (Currently I use http://www.itwriting.com/htmleditor/index.php)
      - Does it matter if I upgrade to VS 2010?

    Thanks,
    Atara

    Read the article

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, which iterates over a smaller set of strings. Due to the size, this method takes a while to run, so to speed it up I'm trying to delete a string from the smaller set once it no longer needs to be used. Below is my current code:

        Ms::Fasta.foreach(@database) do |entry|
          all.each do |set|
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete(set)
              end
            end
          end
        end

    The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know how to fix this is to do this:

        Ms::Fasta.foreach(@database) do |entry|
          i = 0
          while i < all.length
            set = all[i]
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete_at(i)
                i -= 1
              end
            end
            i += 1
          end
        end

    This feels kind of sloppy to me. Is there a better way to do this? Thanks.

    Read the article

  • Groovy htmlunit getFirstByXPath returning null

    - by StartingGroovy
    I have had a few issues with HtmlUnit returning nulls lately and am looking for guidance. Each of my attempts to grab the first row of a website has returned null. I am wondering if someone can (a) explain why they might be returning null, and (b) explain better ways (if there are any) to go about getting the information. Here is my current code (the URL is in the source):

        client = new WebClient(BrowserVersion.FIREFOX_3)
        client.javaScriptEnabled = false
        def url = "http://www.hidemyass.com/proxy-list/"
        page = client.getPage(url)

        IpAddress = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[2]").getValue()
        println "IP Address is: $data" // returns null

        // Port_Number is an image

        Country = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[4][@class='country']/@rel").getValue()
        println "Country abbreviation is: $Country"

        // differentiate speed and connection by name of gif?
        Type = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[7]").getValue()
        println "Proxy type is: $Type"

        Anonymity = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[8]").getValue()
        println "Anonymity Level is: $Anonymity"

        client.closeAllWindows()

    Right now all of my XPaths return null, and .getValue() obviously doesn't work on null. I also have questions about what I should do with the PORT, since it is an image. Is there a better alternative than downloading it and attempting to solve it by OCR?

    Side note: there is no significance to this site; I was just looking for a site that I could practice scraping on (with the last one I ran into issues with fragment identities and couldn't get an answer: HtmlUnit getByXpath returns null, and HtmlUnit and Fragment Identities).

    Read the article

  • A scripting engine for Ruby?

    - by Earlz
    Hello, I am creating a Ruby on Rails website, and for one part it needs to be dynamic, so that (sorta) trusted users can make parts of the website work differently. For this, I need a scripting language. In a similar project in ASP.NET, I wrote my own scripting language/DSL. I cannot use that source code (written at work) though, and I don't want to write another scripting language if I don't have to. So, what choices do I have?

    The scripting must be locked down so it can't crash my server or anything like that. I'd really like to use Ruby as the scripting language, but it's not strictly necessary. Also, this scripting part will be called on almost every request for the website, sometimes more than once, so speed is a factor. I looked at the RubyLuaBridge, but it is Alpha status and seems dead. What choices for a scripting language do I have in a Ruby project?

    Also, I will have full control over where this project is deployed (root access), so there are no real limits...

    Read the article
