Search Results

Search found 1631 results on 66 pages for 'optimize'.

  • UIWebView takes a long time to load a local file

    - by Knodel
    I have a UIWebView which loads .rtfd.zip files like this:

        -(void)viewWillAppear:(BOOL)animated {
            self.title = @"Title";
            if (rowPosledovatelnosti == 0) {
                [self loadFile:@"PosledovatelnostiObzie.rtfd.zip"];
            }
            if (rowPosledovatelnosti == 1) {
                [self loadFile:@"Arifmeticheskaya.rtfd.zip"];
            }
            if (rowPosledovatelnosti == 2) {
                [self loadFile:@"Svojstva_Arifmeticheskoj.rtfd.zip"];
            }
            if (rowPosledovatelnosti == 3) {
                [self loadFile:@"Geometricheskaya.rtfd.zip"];
            }
            if (rowPosledovatelnosti == 4) {
                [self loadFile:@"Predel_Posledovatelnosti.rtfd.zip"];
            }
            if (rowPosledovatelnosti == 5) {
                [self loadFile:@"Summa_Geometricheskoy.rtfd.zip"];
            }
        }

    But on the device it takes some time for the UIWebView to load the content. Is there any way to optimize the loading time?

    Read the article

  • Android: 2D. OpenGl or android.graphics.drawable?

    - by DroidIn.net
    I'm working with my friend on our first Android game. The basic idea is that every frame the whole surface is redrawn (one large bitmap), which is then sprinkled all over with a large number of particles, producing an effect of soapy bubbles; there's a pool of about 20 bitmaps that are picked at random to create the illusion that all the bubbles (between 200 and 300) are different. The math engine is in C (JNI), and currently all drawing is done using the android.graphics package, very similar to Lunar Lander (since that was the example I was using). It works, but the animation is somewhat jerky, and I can tell by the temperature of my phone that it is very busy. Will we benefit from switching to OpenGL? And as a bonus question: what would be a good way to optimize the drawing mechanism (Lunar Lander-like) we have now?
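
    For what it's worth before reaching for OpenGL: a common cause of jerky canvas animation is per-frame allocation triggering the garbage collector mid-animation. A minimal Java sketch of an allocation-free draw loop (the class and field names are hypothetical, not from the project):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Paint;
        import java.util.Random;

        // Sketch: everything is allocated once in the constructor and reused
        // every frame, so the draw loop itself never triggers the GC.
        final class BubbleRenderer {
            private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            private final Bitmap background;   // the single large background bitmap
            private final Bitmap[] pool;       // ~20 pre-decoded bubble bitmaps
            private final float[] x, y;        // particle positions (updated elsewhere,
                                               // e.g. by the JNI math engine)
            private final int[] sprite;        // which pool bitmap each particle uses

            BubbleRenderer(Bitmap background, Bitmap[] pool, int particleCount) {
                this.background = background;
                this.pool = pool;
                this.x = new float[particleCount];
                this.y = new float[particleCount];
                this.sprite = new int[particleCount];
                Random rng = new Random();
                for (int i = 0; i < particleCount; i++) {
                    sprite[i] = rng.nextInt(pool.length);
                }
            }

            // Called once per frame with the canvas from the SurfaceHolder.
            void drawFrame(Canvas canvas) {
                canvas.drawBitmap(background, 0f, 0f, null);
                for (int i = 0; i < x.length; i++) {
                    canvas.drawBitmap(pool[sprite[i]], x[i], y[i], paint);
                }
            }
        }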

    Read the article

  • Do MySQL stored procedures support data types like array, hash, etc.?

    - by Yang
    I am creating a MySQL function which takes in a varchar and converts it to an int based on a mapping table:

        select name, convert_to_code(country)
        from users
        where sex = 'male'

    Here, the convert_to_code() function takes in a country name (e.g. Japan, Finland...) and converts it to the country code, which is an integer (e.g. 1001, 2310...), based on a mapping table called country_mapping, like below:

        country_name (varchar) | country_code (int)
        Japan                  | 1001
        Finland                | 2310
        Canada                 | 8756

    Currently, the stored function needs to select country_code from country_mapping where country_name = country_name and return the query result. Is it possible to create a hash data structure in the stored procedure to optimize the process, so that it does not need to perform the query for each row matching the WHERE clause? Thanks in advance!
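
    MySQL stored routines have no array or hash types (temporary tables are the usual in-database workaround), so one option is to do the mapping in the application instead. A hedged Java sketch of that idea, reusing the question's table and column names (the JDBC connection details are placeholders): load country_mapping once into a HashMap, then convert names to codes in memory.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch: read the mapping table once, then convert names to codes
        // in memory instead of running one lookup query per matched row.
        public final class CountryCodes {
            static Map<String, Integer> load(Connection conn) throws Exception {
                Map<String, Integer> codes = new HashMap<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT country_name, country_code FROM country_mapping")) {
                    while (rs.next()) {
                        codes.put(rs.getString(1), rs.getInt(2));
                    }
                }
                return codes;
            }

            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/test", "user", "password")) {
                    Map<String, Integer> codes = load(conn);
                    System.out.println(codes.get("Japan")); // 1001, per the mapping table
                }
            }
        }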

    Read the article

  • Simple neon optimization in Android

    - by Peter
    In my Android application, I use a bunch of open source libraries such as libyuv, libvpx, libcrypto, libssl, etc. Some of them come with an Android.mk; for others, I hand-crafted the Android.mk. The code is built only for ARM for now. Here is my Application.mk:

        APP_ABI := armeabi-v7a
        APP_OPTIM := release
        APP_STL := gnustl_static
        APP_CPPFLAGS := -frtti

    I am looking for a way to generate binaries that are optimized for NEON. Browsing the net, I found the following setting that someone is using in his Android.mk:

        LOCAL_CFLAGS += -mfloat-abi=softfp -mfpu=neon -march=armv7

    I wonder, if I simply put this setting in Application.mk, will it automatically get applied across all the libraries? A step before each library is built is the following:

        include $(CLEAR_VARS)

    Is it better to include the LOCAL_CFLAGS directive after this line (instead of including it in Application.mk)? Finally, why doesn't ndk-build automatically optimize for NEON when it sees armeabi-v7a in Application.mk? Regards.

    Read the article

  • Does MVC replace the traditional manually created UI, BLL, DAL?

    - by used2could
    I'm used to creating the UI, BLL and DAL by hand (sometimes I've used LINQ-to-SQL or SubSonic for the DAL). I've done several small projects using MVC since its release. On these projects I've still continued to write a BLL and DAL by hand and then incorporate those into the MVC models/controllers. Looking to optimize my time on projects, this seems like overkill and a potential waste of time. My question is: would it be acceptable to roll a DAL such as SubSonic and directly use it in the models/controllers of my MVC web app? The models and controllers would then act as the BLL. I just see this as a major time saver, not having to worry about another tier. (Agree ? "Please state why" : "Make argument")
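
    The question is about ASP.NET MVC, but the trade-off is language-neutral. A minimal Java sketch of the shape being proposed, with all names hypothetical: the controller consumes the data-access layer directly, and the little business logic that exists lives inline rather than in a separate BLL tier.

        import java.util.List;

        // Hypothetical data-access interface, standing in for a generated DAL
        // such as SubSonic's ActiveRecord classes.
        interface ProductRepository {
            List<String> findByCategory(String category);
        }

        // The controller talks straight to the DAL; the thin business logic
        // (here, input validation) is inlined where a BLL would otherwise live.
        final class ProductController {
            private final ProductRepository products;

            ProductController(ProductRepository products) {
                this.products = products;
            }

            List<String> list(String category) {
                if (category == null || category.isEmpty()) {
                    throw new IllegalArgumentException("category is required");
                }
                return products.findByCategory(category); // no BLL in between
            }
        }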

    Read the article

  • "Anagram solver" based on statistics rather than a dictionary/table?

    - by James M.
    My problem is conceptually similar to solving anagrams, except I can't just use a dictionary lookup. I am trying to find plausible words rather than real words. I have created an N-gram model (for now, N=2) based on the letters in a bunch of text. Now, given a random sequence of letters, I would like to permute them into the most likely sequence according to the transition probabilities. I thought I would need the Viterbi algorithm when I started this, but as I look deeper, I see that the Viterbi algorithm optimizes a sequence of hidden random variables given the observed output, whereas I am trying to optimize the output sequence itself. Is there a well-known algorithm for this that I can read about? Or am I on the right track with Viterbi and I'm just not seeing how to apply it?
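
    One way to make the objective concrete: for short inputs, exhaustive search over permutations scored by the bigram model already works, and anything smarter only has to beat it. A hedged Java sketch with a toy hard-coded model (real transition probabilities would come from the N-gram counts):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Brute-force baseline: score every permutation by the sum of bigram
        // log-probabilities and keep the best. The tiny model is illustrative.
        public final class LikelyWord {
            private final Map<String, Double> logProb = new HashMap<>();

            void learn(String bigram, double p) {
                logProb.put(bigram, Math.log(p));
            }

            double score(String word) {
                double s = 0;
                for (int i = 0; i + 1 < word.length(); i++) {
                    // Unseen bigrams get a large penalty instead of -infinity.
                    s += logProb.getOrDefault(word.substring(i, i + 2), -20.0);
                }
                return s;
            }

            String best(String letters) {
                List<String> perms = new ArrayList<>();
                permute("", letters, perms);
                String best = null;
                double bestScore = Double.NEGATIVE_INFINITY;
                for (String p : perms) {
                    double s = score(p);
                    if (s > bestScore) { bestScore = s; best = p; }
                }
                return best;
            }

            private static void permute(String prefix, String rest, List<String> out) {
                if (rest.isEmpty()) { out.add(prefix); return; }
                for (int i = 0; i < rest.length(); i++) {
                    permute(prefix + rest.charAt(i),
                            rest.substring(0, i) + rest.substring(i + 1), out);
                }
            }

            public static void main(String[] args) {
                LikelyWord model = new LikelyWord();
                model.learn("th", 0.05); model.learn("he", 0.04); model.learn("et", 0.01);
                System.out.println(model.best("the")); // prints "the" under this toy model
            }
        }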

    Read the article

  • Premature optimization is the root of all evil, but can it ever be too late?

    - by polygenelubricants
    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" So what is that 3% like? Can the avoidance of premature optimization ever be taken too extreme that it does more harm than good? Even if it's rare, has there been a case of a real measurable software engineering disaster due to complete negligence to optimize early in the process? Bonus question: is software engineering pretty much the only field that has such a counter intuitive principle regarding doing something earlier rather than later before things potentially become too big a problem to fix? Personal question: how do you justify something as premature optimization and not just a case of you being lazy/ignorant/dumb?

    Read the article

  • Apache Prefork Configuration

    - by user1618606
    I'm a newbie at VPS configuration. I've installed Apache, PHP and MySQL, and now I need to know how to configure prefork to optimize Apache. The system configuration is:

        CPU Cores      2 x 2 GHz @ 4 GHz
        RAM Memory     2304 MB DDR3
        Burst Memory   3 GB DDR3
        Disk Space     30 GB SSD
        Bandwidth      3 TB
        SwitchPort     1 Gbps

    After Linux, MySQL, Apache and PHP, there are 250 MB of memory in use. I have no idea how to calculate the values. On some websites I saw settings like:

        KeepAlive On
        KeepAliveTimeout 1
        MaxKeepAliveRequests 100
        StartServers 15
        MinSpareServers 15
        MaxSpareServers 15
        MaxClients 20
        MaxRequestsPerChild 0

    or:

        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 0

    What should I do: prefork or worker? Where and how are these settings placed - in httpd.conf? Big hug, Claudio.
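
    A common back-of-the-envelope rule for prefork sizing, shown here as a sketch only: the per-child resident size (assumed ~30 MB below) is an assumption and must be measured on the real workload, e.g. with top or ps.

        MaxClients ≈ RAM available to Apache / average child size
                   ≈ (2304 MB − 250 MB in use − ~500 MB headroom for MySQL growth) / ~30 MB
                   ≈ 1550 MB / 30 MB ≈ 50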

    Read the article

  • How do I maximize a function in Mathematica which takes in a variable length list as input?

    - by Mike
    I have a time-series-like function which gives an output based on a list of real numbers (i.e. you can run the function on a list of 1, 2, 3, ... real numbers). The issue I'm encountering is how to maximize the function for a given list length (subject to a set of constraints). I could fix the function to a fixed number of real numbers (i.e. f[x_Real, y_Real] instead of f[x_List]), treat x and y as the first two elements of the list, and call Maximize on the function, but this is not really elegant. I want to be able to easily change the number of elements in the list. What is the best way to optimize a function like the one I described, which takes a list as an argument, for a fixed list length?

    Read the article

  • Faster way to clone.

    - by AngryHacker
    I am trying to optimize a piece of code that clones an object:

        #region ICloneable
        public object Clone()
        {
            MemoryStream buffer = new MemoryStream();
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(buffer, this);      // takes 3.2 seconds
            buffer.Position = 0;
            return formatter.Deserialize(buffer);   // takes 2.1 seconds
        }
        #endregion

    Pretty standard stuff. The problem is that the object is pretty beefy and cloning takes 5.4 seconds (according to ANTS Profiler - I am sure there is profiler overhead, but still). Is there a better and faster way to clone?
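
    The question's code is C#, but the usual faster alternative looks the same in any managed language: skip serialization and copy fields directly. A hedged Java sketch of the copy-constructor approach (the Order/LineItem types are hypothetical):

        import java.util.ArrayList;
        import java.util.List;

        // Deep clone via copy constructors: no serialization, no reflection,
        // just field-by-field copies. Verbose, but typically far faster than
        // a serialize/deserialize round trip.
        final class Order {
            private final String id;
            private final List<LineItem> items;

            Order(String id, List<LineItem> items) {
                this.id = id;
                this.items = items;
            }

            Order(Order other) {            // copy constructor
                this.id = other.id;         // immutable, safe to share
                this.items = new ArrayList<>(other.items.size());
                for (LineItem item : other.items) {
                    this.items.add(new LineItem(item)); // deep-copy each element
                }
            }
        }

        final class LineItem {
            private final String sku;
            private final int quantity;

            LineItem(String sku, int quantity) {
                this.sku = sku;
                this.quantity = quantity;
            }

            LineItem(LineItem other) {
                this(other.sku, other.quantity);
            }
        }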

    Read the article

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I ran some speed tests to compare MongoDB and CouchDB. Only inserts were performed during testing, and I found MongoDB to be 15x faster than CouchDB. I know that this is because of sockets vs. HTTP, but it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB and the MongoDB C# driver, and the latest version of the CouchDB installation package for Windows. Thanks!
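
    The standard CouchDB lever for insert throughput is batching: the _bulk_docs endpoint accepts many documents per HTTP request, amortizing the per-request overhead that makes the socket-vs-HTTP gap so large. A minimal Java sketch (host, database name and documents are placeholders):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        // Sketch: batch many inserts into one HTTP round trip using CouchDB's
        // _bulk_docs endpoint, instead of one POST per document.
        public final class BulkInsert {
            public static void main(String[] args) throws Exception {
                // Placeholder host and database name.
                URL url = new URL("http://localhost:5984/testdb/_bulk_docs");
                String body = "{\"docs\":[{\"value\":1},{\"value\":2},{\"value\":3}]}";

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body.getBytes(StandardCharsets.UTF_8));
                }
                System.out.println("HTTP " + conn.getResponseCode()); // 201 on success
            }
        }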

    Read the article

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing with the SQL Express 4 GB database size limit. If byte, real and float data types are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan

    Read the article

  • When is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) thanks to JIT optimizations. This is because the compiler can determine the scope of some variables, avoid some conditionals, and pull similar tricks at runtime. Could you give an example (or better, some examples) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++; it's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.
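
    The textbook example is devirtualization: a call site that only ever sees one concrete type at runtime. A sketch of the situation in Java (the speedup is the standard JIT argument, not a benchmark result):

        // A virtual call site that only ever sees one concrete type. The JIT
        // can observe this at runtime, devirtualize and inline the call (and
        // deoptimize later if another subclass shows up). An ahead-of-time
        // C++ compiler must keep the indirect call unless whole-program
        // analysis proves there is only one target.
        interface Shape {
            double area();
        }

        final class Square implements Shape {
            private final double side;
            Square(double side) { this.side = side; }
            public double area() { return side * side; }
        }

        public final class JitDemo {
            public static void main(String[] args) {
                Shape[] shapes = new Shape[1_000_000];
                for (int i = 0; i < shapes.length; i++) {
                    shapes[i] = new Square(i % 10);
                }
                double total = 0;
                // After warm-up, the hot loop compiles with area() inlined,
                // because the receiver is always a Square in this run.
                for (Shape s : shapes) {
                    total += s.area();
                }
                System.out.println(total);
            }
        }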

    Read the article

  • SQL query duration is longer for smaller dataset?

    - by entens
    I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results.

    Quick query (averages 4 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    Long query (averages 1 minute 20 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    I could simply increase the timeout on the SqlCommand, but that doesn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?

    Read the article

  • Improve performance of sorting files by extension

    - by DxCK
    With a given array of file names, the simplest way to sort it by file extension is like this:

        Array.Sort(fileNames,
            (x, y) => Path.GetExtension(x).CompareTo(Path.GetExtension(y)));

    The problem is that on a very long list (~800k) it takes very long to sort, while sorting by the whole file name is faster by a couple of seconds! In theory there is a way to optimize it: instead of using Path.GetExtension() and comparing the newly created extension-only strings, we can provide a Comparison that compares starting from the LastIndexOf('.') without creating new strings. Now, suppose I have found the LastIndexOf('.'); I want to reuse a native .NET StringComparer and apply it only to the part of the string after the LastIndexOf('.'). I haven't found a way to do that. Any ideas?
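
    The in-place idea, shown as a hedged Java sketch since the question's C# specifics remain open: find the last '.', then compare the two tails character by character without allocating substrings. (In .NET, the string.Compare overload that takes two string/index pairs and a StringComparison would play the same role.)

        import java.util.Arrays;
        import java.util.Comparator;

        // Sketch: compare file names by extension without allocating substrings.
        // The tails after the last '.' are compared in place, char by char.
        public final class ExtensionSort {
            static final Comparator<String> BY_EXTENSION = (a, b) -> {
                int i = a.lastIndexOf('.');
                int j = b.lastIndexOf('.');
                // Names without an extension sort first.
                if (i < 0) return (j < 0) ? 0 : -1;
                if (j < 0) return 1;
                int lenA = a.length() - i, lenB = b.length() - j;
                int n = Math.min(lenA, lenB);
                for (int k = 0; k < n; k++) {
                    int c = Character.compare(a.charAt(i + k), b.charAt(j + k));
                    if (c != 0) return c;
                }
                return Integer.compare(lenA, lenB);
            };

            public static void main(String[] args) {
                String[] files = { "b.txt", "a.zip", "c.gif", "readme" };
                Arrays.sort(files, BY_EXTENSION);
                System.out.println(Arrays.toString(files)); // [readme, c.gif, b.txt, a.zip]
            }
        }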

    Read the article

  • jQuery Optimizations

    - by aepheus
    I've just come to the end of a large development project. We were on a tight timeline, so a lot of optimization was "deferred". Now that we've met our deadline, we're going back and trying to optimize things. My question is this: what are some of the most important things you look for when optimizing jQuery web sites? Alternately, I'd love to hear of sites/lists that have particularly good advice for optimizing jQuery. I've already read a few articles; http://www.tvidesign.co.uk/blog/improve-your-jquery-25-excellent-tips.aspx was an especially good read.

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1 GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected bursts of requests? What are the metrics I should look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
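
    Besides request rate, memory is usually the binding constraint here. A back-of-the-envelope sketch, assuming roughly 100 MB resident per worker (an assumption; read the real per-worker size off ps or smem):

        workers ≈ RAM budget for unicorn / resident size per worker
                ≈ (1024 MB − ~300 MB for OS, database and friends) / ~100 MB
                ≈ 7

    So 4 workers would sit comfortably inside that bound while still covering the observed 1-3 requests per second.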

    Read the article

  • Is it possible to shrink rt.jar with ProGuard?

    - by PatlaDJ
    Is there a procedure by which you can optimize/shrink/select/obfuscate only the classes/methods/fields of rt.jar (provided by Sun) that are actually used by your app, using optimization software like ProGuard (or maybe something else)? You would then be able to minimize the download size of your application considerably and make it much more secure, right? Related questions: Do you know if Sun's "Jigsaw" project, which is expected to come out, is intended to handle this particular issue automatically? Has somebody managed to form an opinion about the Avian Java alternative? Please share it here. Thank you.

    Read the article

  • Rails/mysql SUM distinct records - optimization

    - by pepernik
    Hey. How would you optimize this SQL?

        SELECT SUM(tmp.cost) FROM (
            SELECT DISTINCT clients.id AS client, countries.credits_cost AS cost
            FROM countries
            INNER JOIN clients ON clients.country_id = countries.id
            INNER JOIN clients_groups ON clients_groups.client_id = clients.id
            WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
            GROUP BY clients.id
        ) AS tmp;

    I'm using this example as part of my Ruby on Rails project. Note that my nested SQL (tmp) can have more than 10 million records. You can split that into more SQLs if the performance is better. Should I add any indexes to make it quicker (I have one on the IDs)?

    Read the article

  • make: invoke command for multiple targets of multiple files?

    - by marvin2k
    Hi, I'm looking to optimize an existing Makefile. It's used to create multiple plots (using Octave) for every logfile in a given directory, with a script file for every plot which takes a logfile name as an argument. At the moment, I use one single rule for every kind of plot available, with a handwritten call to Octave giving the specific script file/logfile as an argument. It would be nice if every plot had "its" Octave script as a dependency (plus the logfile, of course), so only one plot is regenerated if its script is changed. Since I don't want to type that much, I wonder how I can simplify this by using only one general rule to build "a" plot. To make it clearer:

        Logfile:    "$(LOGNAME).log"
        Scriptfile: "plot$(PLOTNAME).m"
        creates:    "$(LOGNAME)_$(PLOTNAME).png"

    The first thing I had in mind:

        %1_%2.png: %1.log
                $(OCTAVE) --eval "plot$<2('$<1')"

    But this doesn't seem to be allowed. Could someone give me a hint?

    Read the article

  • Xap file contents change depending on whether built in Visual Studio or on the build server

    - by arch
    I'm using MEF with my Silverlight 4 app to dynamically load xap files. To optimize this process, I've removed various assemblies from my xaps, since I know they've already been loaded by the base xap. This reduces the size of my dynamically loaded xaps. I accomplished this by setting the "Copy Local" flag for each assembly reference to "false". This all seems to work fine when I build in Visual Studio 2010 - my xaps are much smaller. However, when the same projects are built by the build server, all the excluded references are once again in the xap file, hence tripling its size. I've read several blogs/articles describing similar experiences, but no resolution. Very frustrating - any help is appreciated.

    Read the article

  • How can I Query only __key__ on a Google Appengine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code for counting records. I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry, Article. How would I query just the Article keys, so I can reduce the query load but still get the Article count? My first thought was to use:

        q = db.GqlQuery("SELECT __key__ from Article where base = :1", i_base)

    but it turns out GqlQuery doesn't like that, because articles are actually stored in a table called Entry. Would it be possible to query the class attribute? Something like:

        q = db.GqlQuery("select __key__ from Entry where base = :1 and :2 in class", i_base, 'Article')

    Neither of these work. It turns out the answer is even easier, but I am going to finish this question because I looked everywhere for it:

        q = db.GqlQuery("select __key__ from Entry where base = :1 and class = :2", i_base, 'Article')

    Read the article

  • Moved Magento to a new server; images of existing products are not shown in frontend

    - by forrest gump
    I have moved a Magento website to a new server to optimize speed. All images appear to be replaced by image placeholders. I tried setting the permissions on media to 777; when I delete the media cache folder and refresh, a new cache folder is created automatically, but only with placeholder images inside. I flushed the image cache and reindexed in the Magento backend, but still no hope. What should I do now? Some things that might be helpful:

    - It's fine to create a new product with images.
    - After flushing the cache, only placeholder images are created.
    - I tried to use magento-cleanup.php; no hope.
    - I tried to remove the .htaccess in the media folder.
    - The images of the products are there in the media folder, just not in the cache folder. Magento only looks at the cache folder for images. :(

    Read the article

  • Apache rewrite_mod: RewriteRule path and query string

    - by 1ace
    I currently have a website with a standard web interface on index.php, and I made an iPhone-friendly version in iphone.php. Both pages handle the same arguments. It works fine when I manually go to .../iphone.php, but I'd like to rewrite anything on .../path/ and .../path/index.php to iphone.php if %{HTTP_USER_AGENT} contains mobile, and optionally add the query string (not sure if/when I'd need to add it). So far, this is what I have in my .../path/.htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} ^.+mobile.+$ [NC]
        RewriteRule index.php?(.*) iphone.php?$1 [L]
        RewriteRule index.php iphone.php [L]

    The problems are: it matches index.php in any subfolder, and it won't match .../path/?args... Special thanks to anyone who can correct/simplify/optimize anything =)

    Read the article

  • How do you design a database to allow fast multicolumn searching?

    - by Fletcher Moore
    I am creating a real estate search from RETS data, but this is a general question. When you have a variety of columns that you would like the user to be able to filter their search results by, how do you optimize this? For example, http://www.charlestonrealestateguide.com/listings.php has 16 or so optional filters. Granted, he only has up to 11,000 entries (I have the same data), but I don't imagine the search is performed with just a giant WHERE ... AND ... AND ... clause. Or is this typically accomplished with one giant multicolumn index? Newegg, Amazon, and countless others also have cool & fast filtering systems for large amounts of data. How do they do it? And is there a database-optimization reason for the tendency to provide ranges instead of empty inputs, or is that merely for user convenience?

    Read the article
