Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • Why is my ruby application running faster the second time?

    - by Omega
    I'm creating a Ruby game using the Gosu framework. All good. Sometimes, when I run the game, it starts up slowly and stays rather slow for the whole session. So I close it and... open it again. It is very likely that it will now start up quickly and the whole game will run smoothly and fast. Why is that? What is this phenomenon? Is it faster because some cache persists from the first run? (But why would a cache persist? Once the app dies, I would expect no references to it at all...) Ruby, Windows 7.

    Read the article

  • Get expiration time of a memcache item in PHP?

    - by Jonatan Littke
    Hey. I'm caching tweets on my site (with a 30 min expiration time). When the cache is empty, the first user to find out repopulates it. However, at that moment the Twitter API may return an error instead of fresh data. In that case I'd like to keep serving the previous data for another 30 mins. But the previous data will already be lost. So instead I'd like to look into repopulating the cache, say, 5 minutes before expiration time, so that I don't lose any data. But how do I know the expiration time of an item when using PHP's Memcache::get()? Also, is there a better way of doing this?
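
    PHP's Memcache::get() has no way to report an item's remaining TTL, so the usual workaround is to store the expiry deadline alongside the value and refresh a few minutes early, keeping the stale copy around as a fallback. A minimal sketch of that idea, shown in Java against a hypothetical Cache interface since the pattern itself is language-agnostic (all names here are made up):

        import java.time.Instant;

        public class SoftExpiryCache {
            // Hypothetical backing cache; in PHP this would be Memcache set/get.
            public interface Cache {
                Entry get(String key);
                void set(String key, Entry value, long ttlSeconds);
            }

            // Store the soft deadline next to the payload so readers can see it.
            public record Entry(String payload, Instant deadline) {}

            private final Cache cache;

            public SoftExpiryCache(Cache cache) { this.cache = cache; }

            public String getTweets(String key) {
                Entry e = cache.get(key);
                // Refresh 5 minutes before the soft deadline; if the fetch
                // fails, the slightly stale entry is still there to serve.
                if (e == null || Instant.now().isAfter(e.deadline().minusSeconds(300))) {
                    String fresh = fetchFromTwitter(); // may return null on failure
                    if (fresh != null) {
                        e = new Entry(fresh, Instant.now().plusSeconds(1800));
                        cache.set(key, e, 3600); // hard TTL longer than the soft one
                    }
                }
                return e == null ? null : e.payload();
            }

            private String fetchFromTwitter() { return null; } // placeholder
        }

    The key design point is that the hard TTL handed to the cache is longer than the soft deadline stored in the entry, so a failed refresh never leaves users with nothing.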

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that they perform poorly, and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times: they went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discount information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "
            + "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "
            + "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        // ...
        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development, the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes, the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase, a Coherence query explain plan was used to identify by how many entries each index reduced the result set, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal". Parameters associated with high-cardinality object attributes should appear at the beginning of the filter; more specifically, the attribute that filters out the highest number of records should be placed first. But on examining the corporate global discount data, we realized that the "good" attribute order differed depending on the values of the parameters used in the query. In particular, if the attributes brand and family had specific values, it was better to use a query with a different attribute order. Ultimately, we ended up with three optimal variants of the query, each used in its relevant case:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. However, this meant that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built over the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multi extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones used by the LessEqualsFilter and GreaterEqualsFilter. We were now running a scenario with one 8-attribute composite filter and 2 single-attribute filters.

    The composite index was created as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")
        };
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")
        };
        MultiExtractor me = new MultiExtractor(ve);
        // Fill composite parameters.
        String iSalesCompanyId = xxxx;
        // ...
        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                                               iFamilyId, iManufacturer, iBrand, iSalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index, the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • Using a general class for execution with try/catch/finally?

    - by antirysm
    I find myself having a lot of this in different methods in my code:

        try { runABunchOfMethods(); }
        catch (Exception ex) { logger.Log(ex); }

    What about creating this:

        public static class Executor
        {
            private static ILogger logger;

            public delegate void ExecuteThis();

            static Executor()
            {
                // logger = ...GetLoggerFromIoC();
            }

            public static void Execute(ExecuteThis executeThis)
            {
                try { executeThis(); }
                catch (Exception ex) { logger.Log(ex); }
            }
        }

    And just using it like this:

        private void RunSomething()
        {
            Method1(someClassVar);
            Method2(someOtherClassVar);
        }
        ...
        Executor.Execute(RunSomething);

    Are there any downsides to this approach? (You could add Executor methods and delegates for when you want a finally, and use generics for the type of Exception you want to catch...)

    Read the article

  • Can you be a programmer and a business manager at the same time?

    - by the_knight5000
    Hello all, I think I'm stuck in a difficult situation! We are a new start-up with 5 employees (2 programmers). I'm the Technical Manager, and that was fine! Now I can see the fingers pointing at me to take control of everything and play the role of CEO or General Manager, as I have the big vision of what our organization does. I want to, but I have no idea whether such a decision would be risky for our organization. How would managerial interruptions affect technical productivity? Any tips or previous experience with such a situation would help :) Thanks in advance!

    Read the article

  • Is now the right time to move to .NET 4?

    - by bconlon
    The reason I pose this question is that I'm looking at WPF development, so using the latest version seems sensible. However, this means rolling out the .NET 4 runtime to PCs on old versions of the framework. Windows XP is still the number one OS (estimated 40%+ market share). To run .NET 4 on XP requires Service Pack 3, and although it is good practice to move to the latest service packs, large companies are often slow to keep up due to the extensive testing involved. In fact, .NET 4 is not installed as standard with any Windows OS as yet - Windows 7 and 2008 Server R2 ship with 3.5 installed. This is not quite as big an issue as it was for .NET 3.5, as .NET 4 is significantly smaller: it doesn't include the older runtimes, whereas .NET 3.5 SP1 included .NET 3 and .NET 2 and was 250MB (although this could be reduced by doing a web install). The size is also reduced a bit if you target the .NET 4 Client Profile, which should be OK for many WPF applications, and I think this may be rolled out as part of Windows service packs soon. But still, if your application is only 4-5 MB and it needs 40-50 MB of framework, it is worth consideration before jumping in and using the new shiny features.

    Read the article

  • How to monitor program code execution? (file creation and modification by code lines, etc.)

    - by infant programmer
    My program triggers an XSL transformation. The code that carries out the transformation creates some dll and tmp files and deletes them pretty soon after the transformation is completed. It is almost impossible for me to trace the creation and deletion of these files manually, so I want to add some code to display "which code line has created/modified which tmp and dll files" in the console window. This is the relevant part of the code:

        string strXmlQueryTransformPath = @"input.xsl";
        string strXmlOutput = string.Empty;
        StringReader srXmlInput = null;
        StringWriter swXmlOutput = null;
        XslCompiledTransform xslTransform = null;
        XPathDocument xpathXmlOrig = null;
        XsltSettings xslSettings = null;
        MemoryStream objMemoryStream = null;

        objMemoryStream = new MemoryStream();
        xslTransform = new XslCompiledTransform(false);
        xpathXmlOrig = new XPathDocument("input.xml");
        xslSettings = new XsltSettings();
        xslSettings.EnableScript = true;
        xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver());
        xslTransform.Transform(xpathXmlOrig, null, objMemoryStream);
        objMemoryStream.Position = 0;
        StreamReader objStreamReader = new StreamReader(objMemoryStream);
        strXmlOutput = objStreamReader.ReadToEnd();
        // make use of Data in string "strXmlOutput"

    Google and MSDN searches couldn't help me much..
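
    One way to see those short-lived files as they appear and vanish is to watch the working directory for create/modify/delete events while the transform runs. The question's code is C#, where System.IO.FileSystemWatcher serves this purpose; purely to illustrate the pattern, here is a sketch using Java's java.nio.file.WatchService (the directory choice is an assumption):

        import java.nio.file.*;

        public class TempFileWatcher {
            public static void main(String[] args) throws Exception {
                // Watch the system temp directory; point this at wherever
                // the transform writes its dll/tmp files.
                Path dir = Paths.get(System.getProperty("java.io.tmpdir"));
                WatchService watcher = FileSystems.getDefault().newWatchService();
                dir.register(watcher,
                             StandardWatchEventKinds.ENTRY_CREATE,
                             StandardWatchEventKinds.ENTRY_MODIFY,
                             StandardWatchEventKinds.ENTRY_DELETE);
                while (true) {
                    WatchKey key = watcher.take(); // blocks until events arrive
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println(event.kind() + ": " + event.context());
                    }
                    if (!key.reset()) break; // directory no longer accessible
                }
            }
        }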

    Read the article

  • Advice: should I focus on PHP + MySQL, or split my time for more JS and CSS? [closed]

    - by fakaff
    I started learning web development about three months ago (in between working my regular job), and I'm finally starting to get some vague, distant notion of understanding. I find the server-side stuff the most interesting; though I've not gone anywhere near Apache quite yet, which I assume will be necessary at some point. As cool as toying around with visuals and UI is, programming and database stuff inspires me with new ideas and possibilities every minute (I've even bought, on a whim, a wonderfully dry bunch of books on database theory and relational algebra). And whatever CSS or JavaScript tutorial I'm doing, it often feels like a distraction from the PHP/MySQL stuff I'd rather be playing with. For someone like me who's just starting out, which is the more advisable course of action (in terms of being marketable as a programmer): to focus on PHP and SQL exclusively, and only diversify my skills once I master those; or to learn all three areas (PHP/MySQL, JavaScript, CSS and design) and only focus on PHP and databases once I'm fluent in them all?

    Read the article

  • In a web farm environment, should we base the system date/time on the web servers or the database server?

    - by leypascua
    Assuming there are a number of load-balanced web servers in a web farm, is it safe to use DateTime.Now in the application code for getting the system date/time, or should we leave this responsibility to the database server? Would there be a chance that the machine date/time settings on the servers in the web farm are out of sync? If date/time is the responsibility of the DBMS, how will this strategy work if we have load-balanced, replicated DBs?
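
    If the database is chosen as the single time source, each web server can simply ask it for the current time instead of trusting its own clock. A minimal sketch of that idea over JDBC (the helper class is hypothetical; Oracle would need "SELECT CURRENT_TIMESTAMP FROM DUAL"):

        import java.sql.*;

        public class DbClock {
            // Read the database server's clock so every web server in the
            // farm shares one time source, regardless of local clock drift.
            public static Timestamp now(Connection conn) throws SQLException {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT CURRENT_TIMESTAMP")) {
                    rs.next();
                    return rs.getTimestamp(1);
                }
            }
        }

    With replicated DBs the same caveat just moves: the replicas' clocks must themselves be kept in sync, which is also the usual answer (e.g. NTP) for the web servers.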

    Read the article

  • How to refresh a jQuery selector value after execution?

    - by Devyn
    Hi, I have something like this:

        $(document).ready(function() {
            var $clickable_pieces = $('.chess_piece').parent();
            $clickable_pieces.addClass('selectee'); // add selectee class
            var $selectee = $('.chess_square.selectee');
            // wait for click
            $selectee.bind('click', function() {
                $('.chess_square.selected').removeClass('selected');
                $(this).addClass('selected');
                // ...........
            });
        });

    I initially inject the 'selectee' class into every DIV which has a 'chess_piece' child, then I select the DIVs with that class via $('.chess_square.selectee').

        <div id="clickable">
            <div id="div1" class="chess_square"></div>
            <div id="div2" class="chess_square selectee">
                <div id="sub1" class="chess_piece queen"></div>
            </div>
            <div id="div3" class="chess_square"></div>
        </div>

    There are two types of DIV element: those with class 'chess_square selectee' and those with plain 'chess_square', which aren't meant to be clickable. I move the sub-DIV of the 'chess_square selectee' from DIV2 to DIV1 and add and remove classes so the markup ends up exactly like this (the meaning is that the queen piece has moved from div2 to div1):

        <div id="div1" class="chess_square selectee">
            <div id="sub1" class="chess_piece queen"></div>
        </div>
        <div id="div2" class="chess_square"></div>
        <div id="div3" class="chess_square"></div>

    However, the problem is that jQuery doesn't update var $selectee = $('.chess_square.selectee');. Even though I changed the class names, DIV1 is not clickable and DIV2 is still clickable. By the way, I've tried jQuery UI selectable, but it doesn't refresh either.

    Read the article

  • Are there any guarantees in the JLS about the execution order of static initialization blocks?

    - by Roman
    I wonder if it's reliable to use a construction like:

        private static final Map<String, String> engMessages;
        private static final Map<String, String> rusMessages;

        static {
            engMessages = new HashMap<String, String>() {{
                put("msgname", "value");
            }};
            rusMessages = new HashMap<String, String>() {{
                put("msgname", "????????");
            }};
        }

        private static Map<String, String> msgSource;

        static {
            msgSource = engMessages;
        }

        public static String msg(String msgName) {
            return msgSource.get(msgName);
        }

    Is there a possibility that I'll get a NullPointerException because the msgSource initialization block is executed before the block which initializes engMessages? (As for why I don't do the msgSource initialization at the end of the upper initialization block: just a matter of taste; I'll do so if the described construction is unreliable.)
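
    The JLS does guarantee this: static initializers and class variable initializers execute in the textual order in which they appear in the source (JLS §12.4.2), so the msgSource block above cannot run before engMessages is assigned. A small self-contained demo (class name made up):

        public class InitOrderDemo {
            static String first;
            static { first = "set in block 1"; }

            static String second;
            // Static initializers run in textual order, so "first" is
            // already assigned by the time this block executes.
            static { second = "block 2 sees: " + first; }

            public static void main(String[] args) {
                System.out.println(second); // prints "block 2 sees: set in block 1"
            }
        }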

    Read the article

  • Do hiring managers have a hard time accepting developers who have a business-lookalike personal app but are NOT entrepreneurs?

    - by shadesco
    Directly after graduating from university, I decided to build my own web app (Ease My Day) while waiting to get a job as a software engineer. The reasons for building this app: gain solid hands-on software experience before hitting the job scene; provide a solution to a common problem; not sit doing nothing while searching for jobs. The app is not an entrepreneurial tryout nor a business to be sold. Still, I noticed that in about 4 out of 5 interviews the app is confused with a business and I am asked the same questions: Why did you build the business? Why do you want to stop the app? Do you want to sell the app? This despite the fact that I didn't build a business nor make any income from this application. Do candidates who take initiative and like to craft their own apps on the side raise a red flag on the hiring manager's radar?

    Read the article

  • Is it better to cut and store all sprites needed from a spritesheet in memory, or cut them out just-in-time?

    - by xLite
    I'm not sure what's best practice here, as I have little experience with this. Essentially what I am asking is: is it better to take your single PNG with all your different sprites on it, cut out every sprite at startup and store them in memory, then quickly access the already-cut-out sprites from memory; or to keep only the single PNG with all the different sprites in memory, and when you need, for example, a tree, cut the tree out of the PNG and then use it as normal? I imagine the former is more CPU friendly but less memory friendly, and vice versa for the latter. I want to know what the norm is for game dev. This is a pixel-based game using 2D art. Each PNG is actually an avatar's sprite sheet, with each body part separated and later joined to form the avatar's full body.
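
    For the pre-cut approach, here is a minimal sketch in Java (the fixed-grid layout, class name and AWT image types are illustrative assumptions; the question doesn't name an engine):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.util.HashMap;
        import java.util.Map;
        import javax.imageio.ImageIO;

        public class SpriteCache {
            private final Map<String, BufferedImage> sprites = new HashMap<>();

            // Cut every cell out of the sheet once at startup; lookups are then
            // just a map access, at the cost of keeping every sprite in memory.
            public SpriteCache(File sheetFile, int cellW, int cellH) throws Exception {
                BufferedImage sheet = ImageIO.read(sheetFile);
                for (int y = 0; y < sheet.getHeight() / cellH; y++) {
                    for (int x = 0; x < sheet.getWidth() / cellW; x++) {
                        // getSubimage shares the sheet's pixel buffer, so this is
                        // cheap; copy pixels instead if the sheet should be freed.
                        sprites.put(x + "," + y,
                                sheet.getSubimage(x * cellW, y * cellH, cellW, cellH));
                    }
                }
            }

            public BufferedImage get(int x, int y) {
                return sprites.get(x + "," + y);
            }
        }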

    Read the article

  • SQL Server 2012 Service Pack 1 is available - this time for sure!

    - by AaronBertrand
    Last week I mentioned in passing that Service Pack 1 is now available, while I was blogging from the PASS Summit keynote. I wanted to put up an official post instead of having it appear as a footnote there (I also updated my April Fools' joke to point to the right place). Service Pack 1 Details Service Pack 1 is build #11.0.3000 and includes 13 fixes to public KB items and 35 other internal (VSTS) items. You can see the list of fixes in KB #2674319. You can also read about new features included...(read more)

    Read the article

  • Why do I have to add a PPA twice (once to add it to the list of repos, a second time to fix a BAD GPG error)?

    - by Luis Alvarado
    I notice the following: I add a PPA using add-apt-repository - for example the Wine PPA, Mozilla security, NVIDIA drivers, etc. When I go to the Update Manager and tell it to check for updates, it throws a PPA error. To solve the error I add the same PPA again. Why do I have to add the PPA again? (This can also be fixed by adding the received key alone with apt-key.) And why does this problem happen in the first place?

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments, and something like 2h, 3h, ... would be preferable. However, I cannot seem to change the time interval in the Customize Performance Chart dialog. I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above - I've already looked but did not find much) would be greatly appreciated.

    Read the article

  • Do you know of a C macro to compute Unix time and date?

    - by Alexis Wilke
    I'm wondering if someone knows of, or has, a C macro to compute a static Unix time from a hard-coded date and time, as in:

        time_t t = UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13);

    I'm looking into this because I want a numeric static timestamp. This will be done hundreds of times throughout the software, each time with a different date, and I want to make sure it is fast because it will run hundreds of times every second. Converting dates that many times would definitely slow things down (i.e. calling mktime() is slower than having a static number compiled in place, right?) [made an update to try to render this paragraph clearer, Nov 23, 2012]

    Update

    I want to clarify the question with more information about the process being used. As my server receives requests, it starts a new process for each request. That process is constantly updated with new plugins, and quite often such updates require a database update. Those must be run only once. To know whether an update is necessary, I want to use a Unix date (which is better than using a counter, because a counter is much more likely to break once in a while). The plugins will thus receive an update signal and have their on_update() function called. There I want to do something like this:

        void some_plugin::on_update(time_t last_update)
        {
            if(last_update < UNIX_TIMESTAMP(2010, 3, 22, 20, 9, 26))
            {
                ...run update...
            }
            if(last_update < UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13))
            {
                ...run update...
            }
            // as many tests as required...
        }

    As you can see, if I have to compute the Unix timestamp each time, this could represent thousands of calls per process, and if you receive 100 hits a second x 1000 calls, you have wasted 100,000 calls when the compiler could have computed those numbers once at compile time. Putting the value in a static variable is of no interest because this code runs once per process run. Note that the last_update variable changes depending on the website being hit (it comes from the database.)

    Code

    Okay, I got the code now:

        // helper (days in February)
        #define _SNAP_UNIX_TIMESTAMP_FDAY(year) \
            (((year) % 400) == 0 ? 29LL : \
                (((year) % 100) == 0 ? 28LL : \
                    (((year) % 4) == 0 ? 29LL : \
                        28LL)))

        // helper (days in the year; each line adds the length of the previous month)
        #define _SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) \
            ( \
                /* January */   static_cast<qint64>(day) \
                /* February */  + ((month) >=  2 ? 31LL : 0LL) \
                /* March */     + ((month) >=  3 ? _SNAP_UNIX_TIMESTAMP_FDAY(year) : 0LL) \
                /* April */     + ((month) >=  4 ? 31LL : 0LL) \
                /* May */       + ((month) >=  5 ? 30LL : 0LL) \
                /* June */      + ((month) >=  6 ? 31LL : 0LL) \
                /* July */      + ((month) >=  7 ? 30LL : 0LL) \
                /* August */    + ((month) >=  8 ? 31LL : 0LL) \
                /* September */ + ((month) >=  9 ? 31LL : 0LL) \
                /* October */   + ((month) >= 10 ? 30LL : 0LL) \
                /* November */  + ((month) >= 11 ? 31LL : 0LL) \
                /* December */  + ((month) >= 12 ? 30LL : 0LL) \
            )

        #define SNAP_UNIX_TIMESTAMP(year, month, day, hour, minute, second) \
            ( /* time */ static_cast<qint64>(second) \
                + static_cast<qint64>(minute) * 60LL \
                + static_cast<qint64>(hour) * 3600LL \
                + /* year day (month + day) */ (_SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) - 1) * 86400LL \
                + /* year */ (static_cast<qint64>(year) - 1970LL) * 31536000LL \
                + ((static_cast<qint64>(year) - 1969LL) / 4LL) * 86400LL \
                - ((static_cast<qint64>(year) - 1901LL) / 100LL) * 86400LL \
                + ((static_cast<qint64>(year) - 1601LL) / 400LL) * 86400LL )

    WARNING: Do not use these macros to dynamically compute a date. That is SLOWER than mktime(). This being said, if you have a hard-coded date, the compiler will compute the time_t value at compile time. Slower to compile, but faster to execute over and over again.

    Read the article

  • Is it possible to overcome the timeout issue for a function call in C#?

    - by infant programmer
    In my program I call a method:

        xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver());

    The problem I am facing is that sometimes this method doesn't finish within the expected time. Sometimes a timeout error is raised after a long wait, which in turn causes this part of the application to shut down. That is what I want to avoid. So if the call exceeds a certain time, say 10 seconds, I need to call the method again. Is it possible to add some code adjacent to this which can meet that requirement?
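
    The question is about C#, where one would typically run the call in a Task and wait on it with a timeout; purely to illustrate the pattern, here is a sketch in Java using an ExecutorService (class and method names are made up). One caveat applies in both languages: if the underlying call cannot be interrupted, cancelling only abandons it, and the worker thread may keep running in the background.

        import java.util.concurrent.*;

        public class TimeoutRetry {
            // Run "task" with a time limit, retrying up to maxAttempts times.
            public static <T> T callWithTimeout(Callable<T> task, long seconds,
                                                int maxAttempts) throws Exception {
                ExecutorService pool = Executors.newSingleThreadExecutor();
                try {
                    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                        Future<T> future = pool.submit(task);
                        try {
                            return future.get(seconds, TimeUnit.SECONDS);
                        } catch (TimeoutException e) {
                            future.cancel(true); // abandon this attempt and retry
                        }
                    }
                    throw new TimeoutException("all " + maxAttempts + " attempts timed out");
                } finally {
                    pool.shutdownNow();
                }
            }
        }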

    Read the article

  • Initialize array in amortized constant time -- what is this trick called?

    - by user946850
    There is a data structure that trades a little array-access performance for not having to iterate over the whole array when clearing it. You keep a generation counter with each entry, and also a global generation counter. The "clear" operation increases the global counter. On each access, you compare the local and global generation counters; if they differ, the entry is treated as cleared. This came up in an answer on Stack Overflow recently, but I don't remember if this trick has an official name. Does it? One use case is Dijkstra's algorithm, if only a tiny subset of the nodes has to be relaxed and if this has to be done repeatedly.
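
    A minimal sketch of the structure being described (the class is hypothetical; the scheme is sometimes described as version stamping or a timestamped array, though as the question notes there seems to be no single established name):

        public class ClearableArray<T> {
            private final Object[] values;
            private final int[] stamp;  // generation in which each slot was written
            private int generation = 1; // global generation counter

            public ClearableArray(int size) {
                values = new Object[size];
                stamp = new int[size];  // all zero, i.e. older than generation 1
            }

            @SuppressWarnings("unchecked")
            public T get(int i) {
                // A slot written in an older generation counts as cleared.
                return stamp[i] == generation ? (T) values[i] : null;
            }

            public void set(int i, T value) {
                values[i] = value;
                stamp[i] = generation;
            }

            // O(1) clear: invalidate every existing stamp at once.
            // Note: stale values are not freed, and after 2^31 clears the
            // counter would overflow; both are acceptable for typical uses.
            public void clear() {
                generation++;
            }
        }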

    Read the article

  • How do I generate a random time interval and add it to a MySQL datetime using PHP?

    - by KeenLearner
    I have many rows in a MySQL table with datetimes in the format: 2008-12-08 04:16:51. I'd like to generate a random time interval of anywhere between 30 seconds and 3 days and add it to the time above. a) How do I generate a random interval between 30 seconds and 3 days? b) How do I add this interval to the datetime format above? I imagine I need to loop over the rows, do the math in PHP, and then update each row... Any ideas?
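
    In PHP the usual building blocks would be rand(30, 259200) for the offset plus strtotime()/date() for the arithmetic, or MySQL's DATE_ADD(dt, INTERVAL n SECOND) to do it all inside the UPDATE statement. Purely to show the arithmetic, here is the same idea sketched in Java (class name made up):

        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;
        import java.util.concurrent.ThreadLocalRandom;

        public class RandomOffset {
            private static final DateTimeFormatter MYSQL =
                    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

            public static void main(String[] args) {
                // 3 days = 259200 seconds; pick a random offset in [30, 259200].
                long offset = ThreadLocalRandom.current().nextLong(30, 259_201);
                LocalDateTime base = LocalDateTime.parse("2008-12-08 04:16:51", MYSQL);
                System.out.println(base.plusSeconds(offset).format(MYSQL));
            }
        }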

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for finding the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video's author uses the vector AO (from A to the origin) as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a normal to this edge lying in the plane that the edge and AO form. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the negated position vector of the newest simplex point would always make a good direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (yet unfinished) implementation, I wanted to ask whether this method of forming the direction vector is usable or not.

    Read the article

  • What's the current wait time for reconsideration requests in Google Webmaster Tools?

    - by chrism2671
    We recently received an unnatural-links penalty on our site; a rogue SEO firm did us some serious damage, and we lost 40% of our traffic (hundreds of thousands of users) overnight. The effect on our business has been severe, and we're really hoping we're making things right. We submitted a reconsideration request, but I'm wondering how long I should forecast for an outcome, as it will have a knock-on effect on our business.

    Read the article

  • How to verify code that takes a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question: What is the best approach for coding in a slow compilation environment? To recap: I am stuck with a large software system for which the TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think that the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as coding for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to relearn these source-code verification and error-spotting skills. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler and unit-testing tools I've had until recently, so there should be a way to get back to the basics.

    Read the article
