Search Results

Search found 6670 results on 267 pages for 'speed dial'.

Page 103 of 267

  • Axis2 Webservice -> php

    - by Peter Hagström
    Hi! If I have understood Axis2 correctly, I can construct a web service and then access it with any SOAP-compatible client. I have a Java class with a couple of methods that I wrote in Eclipse, and then automatically constructed a service with the Axis2 plugin from WTP. These are the methods of my class:

        public int test(int i) {
            return i + 2;
        }

        public Car CarTest(int speed) {
            return new Car("Biltest", speed);
        }

        public CarFactoryAdapter getCarFactory() {
            carFact.getCars().add(new Car("Bmw", 250));
            carFact.getCars().add(new Car("seat", 350));
            carFact.getCars().add(new Car("saab", 150));
            carFact.getCars().add(new Car("volv", 50));
            return new CarFactoryAdapter(carFact);
        }

    The code seems to work when I try it with soapUI, and the Axis2 web interface has recognized the methods of my service. But when I try the methods that receive parameters with PHP's built-in SoapClient, I get an unknown exception. The getCarFactory method at least works as expected, but the service seems kind of crippled if I can't send parameters. Example of a non-working method invocation:

        ini_set('soap.wsdl_cache_ttl', 0);
        $client = new SoapClient("http://192.168.128.162:8080/ComplexWebService/services/CarService?wsdl",
                                 array('soap_version' => SOAP_1_2, 'trace' => 1));
        $ar['i'] = (int)100;
        print_r($client->__soapCall("test", $ar));

    I need to make sure that the SOA framework I choose will be able to communicate with many platforms; there will be clients in at least PHP and Java, but it would be good if it also worked with, for example, .NET.
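
    A likely culprit (an assumption on my part, not verified against this service): __soapCall expects a positional array of arguments, and Axis2's document/literal style usually wants the parameters wrapped in a single array keyed by name. A minimal sketch of both calling styles to try:

        // Sketch only: the operation "test" and parameter name "i" come from the question.
        $client = new SoapClient(
            "http://192.168.128.162:8080/ComplexWebService/services/CarService?wsdl",
            array('soap_version' => SOAP_1_2, 'trace' => 1));

        // Style 1: magic method, parameters as one named-key array
        $result = $client->test(array('i' => 100));

        // Style 2: __soapCall takes an array of arguments, each argument itself wrapped
        $result = $client->__soapCall("test", array(array('i' => 100)));
        print_r($result);

    If it still throws, $client->__getLastRequest() (enabled by 'trace' => 1) shows the exact envelope PHP sent, which can be compared against the working soapUI request.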

    Read the article

  • I am trying to have a wall follow robot but there are errors on the names not being declared in my s

    - by Sam
        #include <iostream>
        #include <libplayerc++/playerc++.h>

        using namespace std;

        int main(int argc, char *argv[])
        {
            using namespace PlayerCc;
            PlayerClient robot("localhost");
            BumperProxy bp(&robot, 0);
            Position2dProxy pp(&robot, 0);
            pp.SetMotorEnable(true);

            // The original "for(;;)" had no braces, so it looped over nothing;
            // the while(1) below is the real loop.
            double turnrate, speed;
            double error;
            bool wall;

            // The names used below (motor_a_speed, motor_c_speed, motor_a_dir,
            // motor_c_dir, SENSOR_2, SENSOR_3, drive_speed, fwd, rev, brake,
            // mrest, msleep, cputs, and the variables front_bumper/left_bumper)
            // are never declared in this program. They come from the Lego RCX
            // (BrickOS/legOS) API, not from Player/Stage, which is why the
            // compiler reports them as undeclared.
            motor_a_speed(0);
            motor_c_speed(0);
            while (1) {
                front_bumper = SENSOR_2;
                left_bumper = SENSOR_3;
                if (front_bumper > 2) {
                    if (left_bumper < 3) {
                        motor_a_speed(5);
                        motor_c_speed(drive_speed);
                        motor_a_dir(fwd);
                        motor_c_dir(fwd);
                    } else {
                        motor_a_speed(drive_speed);
                        motor_c_speed(5);
                        motor_a_dir(rev);
                        motor_c_dir(rev);
                    }
                } else {
                    motor_a_speed(drive_speed);
                    motor_c_speed(drive_speed);
                    motor_a_dir(brake);
                    motor_c_dir(brake);
                    mrest(100);
                    cputs("bump");
                    motor_a_dir(fwd);
                    motor_c_dir(rev);
                    msleep(450);
                    cputs("right");
                    motor_a_speed(10);
                    motor_a_dir(fwd);
                    motor_c_dir(fwd);
                    mrest(1300);
                }
                pp.SetSpeed(speed, turnrate);
            }
        }
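
    For contrast, here is what the loop looks like written purely against the Player/Stage C++ API, with the RCX calls dropped. It is a sketch, not a drop-in fix: the speed and turn rates are made-up tuning values, and the proxy method names are from the Player 2.x bindings, so they are worth verifying against the installed version:

        #include <libplayerc++/playerc++.h>

        int main(int argc, char *argv[])
        {
            using namespace PlayerCc;
            PlayerClient robot("localhost");
            BumperProxy bp(&robot, 0);
            Position2dProxy pp(&robot, 0);
            pp.SetMotorEnable(true);

            const double speed = 0.3;            // m/s, made-up tuning value
            for (;;) {
                robot.Read();                    // pull fresh sensor data from the server
                if (bp.IsAnyBumped()) {
                    // bumped: back away while turning right, away from the wall
                    pp.SetSpeed(-speed, DTOR(-30));
                } else {
                    // nothing touched: creep forward, steering gently toward the wall
                    pp.SetSpeed(speed, DTOR(10));
                }
            }
        }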

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information on the Internet about that; however, there seems to be very little available on best practices for (temporarily) tuning a MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE, to have some more opportunities to speed things up). The schema change we are planning is adding an integer column to all tables and making it the primary key in place of the current one. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
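
    Not an authoritative recipe, but a sketch of the settings typically adjusted for exactly this kind of one-off bulk rebuild (values are illustrative for a dedicated 16G box; on 5.0, buffer pool and log file size changes need a my.cnf edit and a restart):

        -- In my.cnf, restart required (illustrative values only):
        --   innodb_buffer_pool_size = 12G     # let InnoDB cache as much as possible
        --   innodb_log_file_size    = 512M    # larger redo logs, fewer checkpoint stalls

        -- Per session, right before the copy:
        SET SESSION unique_checks = 0;        -- defer unique checks on secondary indexes
        SET SESSION foreign_key_checks = 0;   -- skip FK validation during the bulk load
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- flush the redo log about once a second

        -- The INSERT INTO .. SELECT form mentioned above (table names hypothetical):
        INSERT INTO orders_new SELECT * FROM orders;

    The usual caveat applies: innodb_flush_log_at_trx_commit = 2 trades a second of durability for speed, so it should be set back once the migration finishes.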

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
          KEY        VARCHAR2(256) not null,
          VALUE      VARCHAR2(4000) not null,
          MESSAGE_ID NUMBER not null
        )

    I now want to find all the messages where key = 'someKey' and the value is 'val1', 'val2' or 'val3', OR the value is null, in which case there is no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
          FROM message_index idx
         WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1
                             FROM message_index
                            WHERE key = 'someKey'
                              AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria once we move outside my test environment. The primary key is key, value, message_id:

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id to speed up the search for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output IDs of one level are the input to the next, e.g.:

        SELECT message_id from message_index
         WHERE (key/value compare)
           AND message_id IN ( SELECT ... and so on )

    What can I do to speed this up?
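
    One rewrite worth benchmarking (a sketch only, untested against this schema): split the "has a matching value" and "has no entry for this key at all" populations into set operations, which gives Oracle two straightforward index-driven query blocks instead of a correlated NOT EXISTS probe per row:

        SELECT message_id
          FROM message_index
         WHERE key = 'someKey'
           AND value IN ('val1', 'val2', 'val3')
        UNION
        (SELECT message_id FROM message_index
         MINUS
         SELECT message_id FROM message_index WHERE key = 'someKey');

    The MINUS branch can be served by fast full scans of IDX_MESSAGE_ID and the primary key, and UNION also removes the duplicate IDs the original query returns when a message matches on several of its rows.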

    Read the article

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baudrate (8N1 = 10 bits on the wire per data byte; at 19200 bps that is 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that restricts the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;

                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );

                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to determine the number of bytes currently in the transmit queue and try to keep that below a fixed threshold. Any advice appreciated.
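
    The "how many bytes are still queued" approach at the end of the post is actually available on POSIX serial ports: Linux and, as far as I know, OS X/BSD support the TIOCOUTQ ioctl, which reports the bytes remaining in the driver's output buffer. A sketch, assuming fd is the already opened and configured serial port descriptor:

        #include <sys/ioctl.h>
        #include <unistd.h>

        /* Returns bytes still waiting in the kernel's transmit queue, or -1 on error.
           Sketch only: fd is assumed to be an open serial port descriptor. */
        static int bytes_pending(int fd)
        {
            int pending = 0;
            if (ioctl(fd, TIOCOUTQ, &pending) == -1)
                return -1;
            return pending;
        }

        /* Usage idea: before each write, drain down to a small backlog.
           THRESHOLD is a made-up tuning value, e.g. ~40 bytes (20 ms at 19200 bps). */
        /* while (bytes_pending(fd) > THRESHOLD) usleep(1000); */

    This keeps the backlog bounded by construction, so timing error cannot accumulate the way it can with open-loop sleep bookkeeping.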

    Read the article

  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer, but couldn't. I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton. For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel, and I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are also contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!) A diagram accompanied the original question to clear things up (not reproduced in this digest); as its note showed, I believe the confusion could come from an inherently poor design. The BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which then delegates the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs a reference to a specific Blender. In order to do this, is the right way to leverage MEF to use an ImportingConstructor for BlendProgram, i.e.

        public class BlendProgram : IBlendProgram
        {
            [ImportingConstructor]   // note: this attribute belongs on the constructor, not the class
            public BlendProgram( Blender blender) {}
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?
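
    Not the only way to do it, but the MEF feature aimed squarely at "each importer gets its own copy" is the creation policy, which avoids hand-wiring panels to blenders. A sketch (assumes .NET 4's System.ComponentModel.Composition; the interfaces are stand-ins for whatever contracts the real parts use):

        using System.ComponentModel.Composition;

        public interface IControlPanel { }   // hypothetical contracts for the sketch
        public interface IBlender { }

        [Export(typeof(IControlPanel))]
        [PartCreationPolicy(CreationPolicy.NonShared)]   // a fresh panel per import, never shared
        public class ControlPanel : IControlPanel { }

        [Export(typeof(IBlender))]
        [PartCreationPolicy(CreationPolicy.NonShared)]   // each Blender is its own instance too
        public class Blender : IBlender
        {
            [Import(RequiredCreationPolicy = CreationPolicy.NonShared)]
            public IControlPanel Panel { get; set; }     // this Blender's private panel
        }

    With both parts NonShared, every Blender the ImportMany produces composes with its own ControlPanel, and a BlendProgram reached through a given Blender only ever sees that Blender's panel.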

    Read the article

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project with a single producer thread which writes events into a buffer, and an additional single consumer thread which takes events from the buffer. My goal is to optimize this for a single machine to achieve maximum throughput. Currently I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, and therefore each pointer is only updated by a single thread):

        #include <stdlib.h>   /* for malloc */

        #define BUF_SIZE 32768

        struct buf_t {
            volatile int writepos;
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;   /* stray ')' removed from the original */
        };

        void produce (buf_t *b, void * e) {
            int next = (b->writepos + 1) % BUF_SIZE;
            while (b->readpos == next);          /* queue is full. wait */
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        void * consume (buf_t *b) {
            while (b->readpos == b->writepos);   /* nothing to consume. wait */
            int next = (b->readpos + 1) % BUF_SIZE;
            void * res = b->buffer[b->readpos];
            b->readpos = next;
            return res;
        }

        buf_t *alloc () {
            buf_t *b = (buf_t *)malloc(sizeof(buf_t));
            b->writepos = 0;
            b->readpos = 0;
            return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after it to ensure that both variables are on different cache lines, which also resulted in some speed-up. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding, etc.?
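
    On the padding idea specifically, a sketch of explicit cache-line padding (the 64-byte line size is an assumption to verify for the target CPU). It keeps the producer-written index, the consumer-written index and the data on separate lines, so the two threads stop invalidating each other's cache:

        #define CACHE_LINE 64   /* assumed line size: verify for the target CPU */

        struct buf_padded_t {
            volatile int writepos;                 /* written only by the producer */
            char pad1[CACHE_LINE - sizeof(int)];   /* keep writepos on its own line */
            volatile void * buffer[BUF_SIZE];      /* BUF_SIZE as defined above */
            char pad2[CACHE_LINE];                 /* fence the data off from readpos */
            volatile int readpos;                  /* written only by the consumer */
            char pad3[CACHE_LINE - sizeof(int)];
        };

    A second common trick here is for produce() to cache readpos in a plain local variable and re-read the volatile only when the buffer looks full (and symmetrically in consume()), which removes one cross-core cache-line transfer per operation.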

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like 'the', 'at', 'there', 'their' etc. I have two methodologies I can use; which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this?

    Methodology 1: determine the number of times each word occurs (using the Counter class from the collections library), keep a list of common words, and remove every common word from the Counter object by deleting that key if it exists. The speed will therefore be determined by the length of the list delims:

        from collections import Counter   # the original read "import collections from Counter"

        delims = ['there', 'there\'s', 'theres', 'they', 'they\'re']
        # the above will end up being a really long list!

        word_freq = Counter(recipe_str.lower().split())
        for d in set(delims):
            if d in word_freq:       # plain del raises KeyError for a missing key
                del word_freq[d]
        return word_freq.most_common()

    Methodology 2: for common words that can be plural, look at each word in the recipe string and check whether it contains the non-plural version of a common word. E.g. for the string "There's a test", check each word to see if it contains "there" and delete it if it does:

        delims = ['this', 'at', 'them']        # words that can't be plural
        partial_delims = ['there', 'they']     # words that could occur in many forms

        word_freq = Counter(recipe_str.lower().split())
        for d in set(delims):
            if d in word_freq:
                del word_freq[d]

        # really slow
        for d in set(partial_delims):
            for word in list(word_freq):   # iterate over a copy: deleting while
                if word.find(d) != -1:     # iterating a dict is an error
                    del word_freq[word]    # the original deleted word_freq[delim] by mistake
        return word_freq.most_common()
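
    For comparison, a sketch of the usual shape of this: one stop-word set, one pass, and a crude suffix rule for plurals (a real stemmer, e.g. NLTK's, handles words like 'this' that merely end in s; everything below is illustrative):

        from collections import Counter

        STOP_WORDS = {"the", "at", "there", "their", "they", "to"}   # abbreviated list

        def tag_frequencies(recipe_str):
            words = (w.strip(".,;:!?'") for w in recipe_str.lower().split())
            # Crude plural folding ("carrots" -> "carrot"); swap in a real stemmer
            # for anything beyond a demo.
            stems = (w[:-1] if w.endswith("s") and len(w) > 3 else w for w in words)
            return Counter(w for w in stems if w and w not in STOP_WORDS).most_common()

        # tag_frequencies("Chop the carrots. Add carrots to the pan.")
        # -> [('carrot', 2), ('chop', 1), ('add', 1), ('pan', 1)]

    Set membership is O(1), so the cost no longer grows with the length of the stop-word list, which addresses the "really long list" worry in Methodology 1.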

    Read the article

  • Visual Studio 2010: very slow web applications debugging!

    - by micha12
    I recently installed Visual Studio 2010 (Ultimate edition, final version released in April), and found that debugging a web application became very slow (2-3 times slower than in Visual Studio 2008)! I took the same web application, checked the loading speed of one of its pages in VS 2008 and VS 2010, and compared the times. I tested two approaches: debugging under the ASP.NET Development Server (by pressing the "Start" button), and using the ASP.NET Development Server without debugging (via the "View in Browser" menu command). I got the following results for Visual Studio 2008 and 2010:

    1) ASP.NET Development Server without debugging ("View in Browser"): the page loads at the same speed in VS 2008 and 2010.

    2) Debugging under the ASP.NET Development Server ("Start" button): in VS 2010 the page takes more time to load than in VS 2008 - VS 2010 debugging is 2-3 times slower.

    3) At the same time, when debugging a web application in VS 2008, the page loads in the same time as with "View in Browser" alone. That is, VS 2008 debugging does not introduce any overhead to page loading in the web browser!

    I wanted to make sure that other people have the same problem with slow debugging of web applications in VS 2010. Can this issue be solved by any means? BTW, I am using Windows XP SP3. Thank you.

    Read the article

  • Box2dx: Cancel force on a body?

    - by Rosarch
    I'm doing pathfinding where I use force to push bodies toward waypoints. However, once a body gets close enough to its waypoint, I want to cancel out the force. How can I do this? Do I need to separately maintain all the forces I've applied to the body in question? I'm using Box2dx (C#/XNA). Here is my attempt, but it doesn't work at all:

        internal PathProgressionStatus MoveAlongPath(PositionUpdater posUpdater)
        {
            Vector2 nextGoal = posUpdater.Goals.Peek();
            Vector2 currPos = posUpdater.Model.Body.Position;

            float distanceToNextGoal = Vector2.Distance(currPos, nextGoal);
            bool isAtGoal = distanceToNextGoal < PROXIMITY_THRESHOLD;

            Vector2 forceToApply = new Vector2();
            double angleToGoal = Math.Atan2(nextGoal.Y - currPos.Y, nextGoal.X - currPos.X);
            forceToApply.X = (float)Math.Cos(angleToGoal) * posUpdater.Speed;
            forceToApply.Y = (float)Math.Sin(angleToGoal) * posUpdater.Speed;

            float rotation = (float)(angleToGoal + Math.PI / 2);
            posUpdater.Model.Body.Rotation = rotation;

            if (!isAtGoal)
            {
                posUpdater.Model.Body.ApplyForce(forceToApply, posUpdater.Model.Body.Position);
                posUpdater.forcedTowardsGoal = true;
            }
            if (isAtGoal)
            {
                // how can the body be stopped?
                posUpdater.forcedTowardsGoal = false;
                //posUpdater.Model.Body.SetLinearVelocity(new Vector2(0, 0));
                //posUpdater.Model.Body.ApplyForce(-forceToApply, posUpdater.Model.Body.GetPosition());
                posUpdater.Goals.Dequeue();
                if (posUpdater.Goals.Count == 0)
                {
                    return PathProgressionStatus.COMPLETE;
                }
            }
        }

    UPDATE: If I do keep track of how much force I've applied, that fails to account for other forces that may act on the body. I could use reflection and set _force to zero directly, but that feels dirty.
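
    For what it's worth: Box2D clears applied forces after each World.Step, so there is usually nothing left to cancel; what keeps the body moving is the velocity the force produced. A sketch of stopping the body at the waypoint (method names follow the Box2DX/C# port, worth double-checking against your version):

        // On arrival: zero the momentum rather than trying to negate past forces.
        Body body = posUpdater.Model.Body;          // posUpdater as in the question
        body.SetLinearVelocity(Vector2.Zero);       // stop translation
        body.SetAngularVelocity(0.0f);              // stop rotation
        // From here the body stays put until something new pushes it. For a
        // softer arrival, scale the velocity down over a few frames instead.

    The commented-out SetLinearVelocity line in the question is essentially this fix; the likely reason it seemed not to work is that the force branch keeps firing on later frames and re-accelerates the body.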

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's what it's called). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to map the values returned by the linear function directly onto the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, the sweep seemed to run much faster when moving through the low frequencies and then slow down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency non-linearly. My next attempt was to move the filter's cut-off frequency logarithmically. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
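
    For the record, the standard "constant speed in octaves" mapping interpolates exponentially between the endpoints of the sweep:

        f_c(x) = f_min * (f_max / f_min)^lf(x),   with lf(x) in [0, 1]

    which is the logarithmic approach described above: each equal step of lf moves the cut-off by the same musical interval. If that still sounds uneven, the usual next refinement is to sweep linearly in a perceptual unit such as mel or ERB rather than in octaves, since those scales model perceived frequency spacing more closely than a pure log scale does.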

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second - about a 60x speed-up. Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and the std::string calls (which, BTW, once optimized dropped the runtime to less than a second - more than a 2x speed-up). Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only attributes samples to the currently running process. I'd like it to always attribute samples to the process I'm profiling, so that if the process is switched out (blocking on IO or a popen call), the sample is charged to the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU; it can't help with executables that have inefficient blocking calls.

    Read the article

  • parsing xml using dom4j

    - by D3GAN
    My XML structure is like this:

        <rss>
          <channel>
            <yweather:location city="Paris" region="" country="France"/>
            <yweather:units temperature="C" distance="km" pressure="mb" speed="km/h"/>
            <yweather:wind chill="-1" direction="40" speed="11.27"/>
            <yweather:atmosphere humidity="87" visibility="9.99" pressure="1015.92" rising="0"/>
            <yweather:astronomy sunrise="8:30 am" sunset="4:54 pm"/>
          </channel>
        </rss>

    I tried to parse it using dom4j:

        SAXReader xmlReader = createXmlReader();
        Document doc = null;
        doc = xmlReader.read( inputStream ); // inputStream is an input of the function
        log.info(doc.valueOf("/rss/channel/yweather:location/@city"));

        private SAXReader createXmlReader() {
            Map<String,String> uris = new HashMap<String,String>();
            uris.put( "yweather", "http://xml.weather.yahoo.com/ns/rss/1.0" );
            uris.put( "geo", "http://www.w3.org/2003/01/geo/wgs84_pos#" );

            DocumentFactory factory = new DocumentFactory();
            factory.setXPathNamespaceURIs( uris );

            SAXReader xmlReader = new SAXReader();
            xmlReader.setDocumentFactory( factory );
            return xmlReader;
        }

    But I get nothing on the console, even though when I print doc.asXML() my XML structure prints correctly!
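
    Two things worth checking (a sketch, untested): first, that the real feed declares xmlns:yweather, since the snippet above does not show the declaration, and without it no namespace-aware XPath can match; second, attaching the prefix map straight to the XPath object, which takes the DocumentFactory wiring out of the equation:

        // Fragment, not a full class; assumes the imports shown.
        import java.util.HashMap;
        import java.util.Map;
        import org.dom4j.XPath;

        Map<String, String> uris = new HashMap<String, String>();
        uris.put("yweather", "http://xml.weather.yahoo.com/ns/rss/1.0");

        XPath xpath = doc.createXPath("/rss/channel/yweather:location/@city");
        xpath.setNamespaceURIs(uris);               // prefix -> URI map for this expression
        System.out.println(xpath.valueOf(doc));     // expected output: Paris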

    Read the article

  • Is this a "valid" css image replacement technique?

    - by user278457
    I just came up with this. It seems to work in all modern browsers; I just tested it on IE8/compatibility, Chrome, Safari and Mozilla.

    HTML:

        <img id="my_image" alt="my text" src="images/small_transparent.gif" />

    CSS:

        #my_image {
            background-image: url('images/my_image.png');
            width: 100px;
            height: 100px;
        }

    Pros:

    - image alt text is best practice for accessibility/SEO
    - no extra HTML markup, and the CSS is pretty minimal too
    - gets around the "CSS on/images off" issue, where "text-indent" techniques hide text from low-bandwidth users

    The biggest disadvantage I can think of is the "CSS off/images on" situation, because you'll only send a transparent gif. I'd like to know: who uses images without stylesheets? Some kind of mobile phone or something? I'm making sites for clients in regional Australia (hundreds of km from the nearest city), where many users will be suffering from dial-up connections, and often outdated browsers too, so the "images off" issue is an important consideration. Are there any other side effects of this technique that I haven't considered?

    Read the article

  • How accurately (in terms of time) does Windows play audio?

    - by MusiGenesis
    Let's say I play a stereo WAV file with 317,520,000 samples, which is theoretically 1 hour long. Assuming no interruptions of the playback, will the file finish playing in exactly one hour, or is there some occasional tiny variation in the playback speed such that it would be slightly more or slightly less (by some number of milliseconds) than one hour? I am trying to synchronize animation with audio, and I am using a System.Diagnostics.Stopwatch to keep the frames matching the audio. But if the playback speed of WAV audio in Windows can vary slightly over time, then the audio will drift out of sync with the Stopwatch-driven animation. Which leads to a second question: it appears that a Stopwatch - while highly granular and accurate for short durations - runs slightly fast. On my laptop, a Stopwatch run for exactly 24 hours (as measured by the computer's system time and a real stopwatch) shows an elapsed time of 24 hours plus about 5 seconds (not milliseconds). Is this a known problem with Stopwatch? (A related question would be "am I crazy?", but you can try it for yourself.) Given its usage as a diagnostics tool, I can see where a discrepancy like this would only show up when measuring long durations, for which most people would use something other than a Stopwatch. If I'm really lucky, then both Stopwatch and audio playback are driven by the same underlying mechanism, and thus will stay in sync with each other for days on end. Any chance this is true?

    Read the article

  • What is an elegant way to solve this max and min problem in Ruby or Python?

    - by ????
    The following can be done step by step in a somewhat clumsy way, but I wonder if there is an elegant method to do it. There is a page, http://www.mariowiki.com/Mario_Kart_Wii, with two tables. The first looks like:

        Mario        -  6  2  2  3  -  -
        Luigi        2  6  -  -  -  -  -
        Diddy Kong   -  -  3  -  3  -  5
        [...]

    The names (Mario, etc.) are the Mario Kart Wii character names, and the numbers are the bonus points for Speed, Weight, Acceleration, Handling, Drift, Off-Road and Mini-Turbo. The second table holds the same characteristics for each bike or kart:

        Standard Bike S           39  21  51  51  54  43  48  Out
        Bullet Bike               53  24  32  35  67  29  67  In
        Bubble Bike / Jet Bubble  48  27  40  40  45  35  37  In
        [...]

    I wonder what the most elegant solution is for finding all the maximum combinations of Speed, Weight, Acceleration, etc., and also the minimums, either by directly using the HTML on that page or by copying and pasting the numbers into a text file. Actually, in the character table, Mario to Bowser Jr. are all medium characters, Baby Mario to Dry Bones are small characters, and the rest are all big characters, except that the small, medium or large Mii is just as the name says. Small characters can only ride small bikes or small karts, and so forth for medium and large.
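
    A sketch of the elegant direction in Python (the stats here are stub values shaped like the wiki tables; a real solution would scrape the page's HTML, and this ignores the size-class pairing rule for simplicity):

        from itertools import product

        STATS = ("speed", "weight", "accel", "handling", "drift", "offroad", "miniturbo")

        # Stub data in the shape of the wiki tables: name -> per-stat numbers.
        characters = {"Mario": (0, 6, 2, 2, 3, 0, 0),
                      "Luigi": (2, 6, 0, 0, 0, 0, 0)}
        vehicles   = {"Standard Bike S": (39, 21, 51, 51, 54, 43, 48),
                      "Bullet Bike":     (53, 24, 32, 35, 67, 29, 67)}

        # Every character/vehicle combination with its combined stats.
        combos = {(c, v): tuple(a + b for a, b in zip(cs, vs))
                  for (c, cs), (v, vs) in product(characters.items(), vehicles.items())}

        for i, stat in enumerate(STATS):
            best = max(combos, key=lambda k: combos[k][i])
            worst = min(combos, key=lambda k: combos[k][i])
            print(stat, "max:", best, "min:", worst)

    Restricting to legal pairings would just mean filtering the product by a size lookup for each character and vehicle before building combos.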

    Read the article

  • Does fast typing influence fast programming? [closed]

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case: you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night), so I had to learn to touch type, and I experienced an actual improvement in my programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset: I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Have any of you had a similar experience?

    I have now trained touch typing a little with KTouch and find the auto-generated lessons the best. I can use the program to create new lessons out of text files, but that is only verbatim training, not auto-generation based on a language model. Do you know of any touch-typing program that allows the creation of custom, but randomized, lessons?

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this. We're working on a project that is essentially a data entry system for a production line, with heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for heavy-duty data entry, and that a browser does not offer the speed, responsiveness and fluid experience for the end user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and that they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage deployment and connectivity back to the central data server than a web app would, so in terms of speed I would expect the web app to be faster. What are your thoughts? Are there modern, heavy-use data entry systems in production that were developed as web apps? Appreciate any feedback. Regards, Rob.

    Read the article

  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all, since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project involving a few RESTful web services which consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC Templates for the data persistence. As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotations way of doing things quite a paradigm shift, and have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications - as such, I am often having to dig for information in forums and blogs that is not apparent from the reference guide or from the various Spring 3 REST examples on the web. For deadline-driven, production-quality and mission-critical applications implementing a RESTful architecture, should I be holding off from Spring 3 and rather using mature JSR 311 (JAX-RS) compliant frameworks like RESTlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using RESTlet 1.x in a previous project, and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST support in Spring 3's MVC framework would make life easier. Does anyone have any advice on this? Does anyone know of commercial / production-quality projects using, or having successfully delivered with, the new REST support in Spring 3's MVC framework? Many thanks, Glen

    Read the article

  • In Corona SDK the background image always covers other images

    - by user1446126
    I'm currently making a tower defense game with Corona SDK. However, while building the game scene, the background image always covers the monster sprites. I've tried background:toBack(), but it doesn't work. Here is my code:

        module(..., package.seeall)

        function new()
            local localGroup = display.newGroup();
            local level = require(data.levelSelected);
            local currentDes = 1;

            monsters_list = display.newGroup()

            -- The background
            local bg = display.newImage ("image/levels/1/bg.png");
            bg.x = _W/2; bg.y = _H/2;
            bg:toBack();
            -- Note: toBack() only reorders bg within its own parent (the stage),
            -- while the monsters live inside localGroup, a different parent.
            -- A likely fix (untested): localGroup:insert(bg); bg:toBack();

            -- generate the monsters
            function spawn_monster(kind)
                local monster = require("monsters."..kind);
                newMonster = monster.new()
                -- read the spawn (starting point) in level, and spawn the monster there
                newMonster.x = level.route[1][1]; newMonster.y = level.route[1][2];
                monsters_list:insert(newMonster);
                localGroup:insert(monsters_list);
                return monsters_list;
            end

            function move(monster, x, y)
                -- use Pythagoras to calculate the moving distance, hence the time
                -- consumed according to speed
                transition.to(monster, {time = math.sqrt(math.abs(monster.x-x)^2 + math.abs(monster.y-y)^2) / (monster.speed/30), x = x, y = y, onComplete = newDes})
            end

            function newDes()
                currentDes = currentDes + 1;
            end

            -- make the monsters move according to the route
            function move_monster()
                for i = 1, monsters_list.numChildren do
                    move(monsters_list[i], 200, 200);
                    print (currentDes);
                end
            end

            function agent()
                spawn_monster("basic");
            end

            -- Execute the functions above. Note: "update" is never defined in this
            -- module, so the third timer refers to a missing function.
            timer2 = timer.performWithDelay(1000, agent, 10);
            timer.performWithDelay(100, move_monster, -1);
            timer.performWithDelay(10, update, -1);
            move_monster();

            return localGroup;
        end

    The monsters just get stuck at the spawn point and stay there. But when I comment out these three lines:

        --local bg = display.newImage ("image/levels/1/bg.png");
        --bg.x = _W/2; bg.y = _H/2;
        --bg:toBack();

    the problem disappears. Any ideas? Thanks for helping.

    Read the article

  • implementing gravity to projectile - delta time issue

    - by Murat Nafiz
    I'm trying to implement simple projectile motion in Android (with OpenGL), and I want to add gravity to my world to simulate a ball dropping realistically. I update my renderer with a delta time which is calculated by:

        float deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
        startTime = System.nanoTime();
        screen.update(deltaTime);

    In my screen.update(deltaTime) method:

        if (isballMoving) {
            golfBall.updateLocationAndVelocity(deltaTime);
        }

    And in the golfBall.updateLocationAndVelocity(deltaTime) method:

        public final static double G = -9.81;

        double vz0 = getVZ0();   // gets initial velocity (z)
        double z0 = getZ0();     // gets initial height
        double time = getS();    // gets total time since the movement began

        double vz = vz0 + G * deltaTime;   // calculate new velocity (z)
        // NB: if getVZ0()/getZ0() return the launch-time values, the line below
        // recomputes only the first deltaTime of the flight on every frame
        // instead of advancing from the current state (and the signs disagree
        // with z = z0 + v*t + G*t^2/2):
        double z = z0 - vz0 * deltaTime - 0.5 * G * deltaTime * deltaTime;

        time = time + deltaTime;   // update time
        setS(time);                // set new total time

    Now here is the problem. If I set deltaTime statically to 0.07, the animation runs normally; but since the update() method runs as fast as it can, the length and therefore the speed of the ball's flight then varies from device to device. If I don't touch deltaTime and run the program (deltaTimes are between 0.01 and 0.02 on my test devices), the animation length and ball speed are the same on different devices, but the animation is SLOW! What am I doing wrong?
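
    A sketch of the usual structure (semi-implicit Euler). It assumes the class stores the current velocity and height between frames; setVZ0/setZ0 are my guesses at the matching setters, since the question only shows the getters:

        public final static double G = -9.81;

        public void updateLocationAndVelocity(double deltaTime) {
            double vz = getVZ0();      // treat the stored values as *current* state,
            double z  = getZ0();       // not as the launch-time values

            vz += G * deltaTime;       // integrate acceleration into velocity first...
            z  += vz * deltaTime;      // ...then velocity into position

            setVZ0(vz);                // persist: the next frame continues from here
            setZ0(z);
            setS(getS() + deltaTime);
        }

    Because each frame advances the stored state by exactly the measured deltaTime, the arc depends on elapsed wall-clock time rather than frame count, so devices with different frame rates trace approximately the same trajectory. Clamping pathological deltaTime spikes (say, anything above 0.1 s) is still worth doing.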

    Read the article

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every second; gpxlogger's logic is very simple: every n seconds (n = 3 in my case) it writes down the location (lat, lon, ele) and a timestamp (time) received from the GPS. After writing down several hours' worth of track, gpxlogger saves a several-megabyte GPX file that includes several thousand points. When I then plot this track on a map and use it with OpenLayers, it works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. Myriads of points can be deleted without losing almost anything: when several points make up a roughly straight line and we're moving at roughly constant speed between them, we can keep just the first and the last point and throw away the rest. I thought of using gpsbabel for this track simplification / optimization, but, alas, its simplification filter works only with routes, i.e. it analyzes only the geometrical shape of the path, without timestamps (and so without checking that the speed was roughly constant). Is there a ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option in gpsbabel?
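
    In case a home-grown pass is acceptable, here is a sketch in Python of exactly the rule described above: drop a point when it lies where straight-line, constant-speed motion between its neighbours predicts. The 5 m tolerance and the flat-earth metre conversion are assumptions suitable for short segments:

        import math

        def simplify(points, max_dev_m=5.0):
            # points: list of (lat, lon, t) tuples, t in seconds
            if len(points) < 3:
                return list(points)
            kept = [points[0]]
            for cur, nxt in zip(points[1:], points[2:]):
                a, b = kept[-1], nxt
                f = (cur[2] - a[2]) / float(b[2] - a[2])   # fraction of the a->b time span
                exp_lat = a[0] + f * (b[0] - a[0])         # where cur should sit if motion
                exp_lon = a[1] + f * (b[1] - a[1])         # were straight and uniform
                dlat_m = (cur[0] - exp_lat) * 111320.0     # rough metres per degree
                dlon_m = (cur[1] - exp_lon) * 111320.0 * math.cos(math.radians(cur[0]))
                if math.hypot(dlat_m, dlon_m) > max_dev_m:
                    kept.append(cur)                       # deviates: keep this point
            kept.append(points[-1])
            return kept

    Unlike a purely geometric pass such as Douglas-Peucker, the timestamp stays in the test, so a point is only dropped when both the shape and the speed are consistent with the neighbours.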

    Read the article

  • Is learning C++ a good idea?

    - by chang
    The more I hear and read about C++ (e.g. this: http://lwn.net/Articles/249460/), the more I get the impression that I'd be wasting my time learning it. I once wrote a network routing algorithm in C++ for a simulator, and it was a pain (as expected, especially coming from a Perl/Python/Java background ...). I'm never happy about giving up on a technology, but I would be happy if I could limit my knowledge of C-family languages to just C, C# and Objective-C (even OS X's Cocoa, which is huge and takes a lot of time to learn, looks like joy compared to C++ ...). Do I need to consider myself dumb or unwilling just because I'm not partial to the pain involved in learning this stuff? Technologies advance, and there will be options other than C++ when deciding on implementation languages - or not? And as for speed: if speed were that critical, I'd go for a plain C implementation instead, or write C extensions for much more productive languages like Ruby or Python ...

    The one-line version of the above: will C++ stay such a relevant language that every committed programmer should be familiar with it?

    [ edit / thank you very much for your interesting and useful answers so far .. ]

    [ edit / .. i am accepting the top-rated answer; thanks again for all answers! ]

    Read the article

  • What is the fastest way to do division in C for 8bit MCUs?

    - by Jordan S
    I am working on the firmware for a device that uses an 8-bit MCU (8051 architecture) and SDCC (Small Device C Compiler). I have a function that I use to set the speed of a stepper motor that my circuit is driving. The speed is set by loading a desired value into the reload register for a timer. I have a variable, MotorSpeed, in the range 0 to 1200, which represents pulses per second to the motor. My function to convert MotorSpeed to the correct 16-bit reload value is shown below. I know that floating-point operations are pretty slow, and I am wondering if there is a faster way of doing this...

        void SetSpeed()
        {
            float t = MotorSpeed;
            unsigned int j = 0;
            t = 1 / t;            // seconds per pulse
            t = t / 0.000001;     // convert to microseconds (timer ticks)
            j = MaxInt - t;
            TMR3RL = j;           // set reload register for desired freq
            return;
        }
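
    Since 1/t seconds expressed in microseconds is just 1000000/t, the whole float round trip collapses into a single unsigned long division, which SDCC's integer library handles far more cheaply on an 8051. A sketch using the names from the question (and inheriting its limits: speeds below about 16 pps need more than 16 bits of period, exactly as in the float version):

        /* Sketch only: assumes MotorSpeed > 0; MotorSpeed, MaxInt and TMR3RL
           are the question's names. */
        void SetSpeed(void)
        {
            /* microseconds per pulse, rounded to the nearest tick */
            unsigned long ticks = (1000000UL + MotorSpeed / 2) / MotorSpeed;
            TMR3RL = MaxInt - (unsigned int)ticks;  /* timer counts up to overflow */
        }

    If even one 32-bit division per speed change is too slow, the usual fallback is a small lookup table indexed by MotorSpeed (or by a coarser speed step), trading a few hundred bytes of code space for the division entirely.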

    Read the article

  • How to receive the text which was sent by SendText

    - by thillai-selvan
    In Asterisk I have sent a text message using SendText, as follows. I have two registered users in the sip.conf file.

    sip.conf details:

        [thillai]
        username=thillai
        secret=thillai
        host=dynamic
        type=friend
        allow=all
        context=test

        [selvan]
        username=selvan
        secret=selvan
        allow=all
        host=dynamic
        type=friend
        context=test

    Then I created the necessary extensions.

    extensions.conf file:

        [test]
        exten = 677,1,BackGround(thankyou)
        exten = 677,n,Dial(SIP/thillai)
        exten = 677,n,SendText('this is for testing')

    So when a caller calls extension 677, this text is sent. My question is: how can I receive this text on the caller's side? Any help will be much appreciated.

    Read the article
