Search Results

Search found 20224 results on 809 pages for 'query optimization'.

  • Which social sign-in (Google, Twitter, Facebook, etc.) is most often used (if I could only choose one, which would statistically retain the most users)?

    - by David
    I am working with a startup which is about to do its launch in maybe 2-3 weeks. In order to see the primary features of the site, the user has to register, or sign in if they have already registered. We quickly decided we wanted to incorporate social plugins as alternatives to a conventional sign-up, just like Stack Exchange does. But seeing that we are strapped for time and fairly amateur developers, I'm trying to justify choosing just one or two social sign-ins to start with for the launch and then maybe adding more later. Based on my experience as a user, I'm guessing that Twitter and Google (in no particular order of importance) would probably be the most important social sign-ins for retaining as many users as possible, but I have absolutely no statistics to back that up other than my own anecdotal experience. This question hasn't been visibly asked on the internet, so I figured I'd hop on Stack Exchange and give it a punt. Thanks.

    Read the article

  • Are there jobs oriented towards optimisation programming or assembly?

    - by jokoon
    3D engine programmers have to care a little about execution speed, but what about the programmers at ATI and nVidia? How much do they need to optimize their driver applications? Are there jobs out there whose only purpose is execution speed and optimisation, or jobs where people program only in assembly? Please, no flame war about "premature optimisation is the root of all evil"; I just want to know if such jobs exist. Maybe in security? In kernel programming? Where? Not at all?

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • How to generate portal zones?

    - by Meow
    I'm developing a portal-based scene manager. Basically all it does is check the portals against the camera frustum and render their associated portal zones accordingly. Is there any way my editor can generate the portal zones automatically, with the user only having to set the portals themselves? For example, the Max Payne 1/2 engine ("Max-FX") only required setting the portal quads, unlike the C4 engine, where you also have to explicitly set the portal zones.
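
    For context, a minimal sketch of the kind of portal traversal described above; the Zone/Portal structures, the frustum object with an intersects() test, and the draw() call are all assumptions for illustration, not the asker's actual code:

        from dataclasses import dataclass, field

        @dataclass
        class Portal:
            quad: list                  # four corner points of the portal quad
            target: "Zone" = None       # the zone this portal leads into

        @dataclass
        class Zone:
            portals: list = field(default_factory=list)
            objects: list = field(default_factory=list)

        def draw(objects):
            """Placeholder for the engine's actual draw call."""

        def render_visible(zone, frustum, rendered=None):
            """Render the zone the camera is in, then recurse through visible portals."""
            rendered = rendered if rendered is not None else set()
            if id(zone) in rendered:
                return
            rendered.add(id(zone))
            draw(zone.objects)
            for portal in zone.portals:
                if portal.target and frustum.intersects(portal.quad):   # assumed test
                    render_visible(portal.target, frustum, rendered)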

    Read the article

  • Images Loading Very Slowly

    - by Vecta
    I'm currently working on optimizing my site to try to decrease load time, using Pingdom Tools. I seem to be having some difficulty with long load times on images. For example, the body background for my site is a 29 kB file but takes almost 500 ms to load, the majority of which is spent connecting to the server. This one seems to take the longest, but other images take a lot of time as well, with most of that time again spent connecting to the server. The numbers also fluctuate: I've seen the same image load in 500 ms one minute and in 1.5 seconds ten minutes later. My site is using the MODX CMS, but I'm not sure if that would affect this at all. Is it more likely that this is a server issue? Is there anything that I should check or do to help alleviate these inflated 'connect' times?

    Read the article

  • Grid-based collision - How many cells?

    - by Fibericon
    The game I'm creating is a bullet hell game, so there can be quite a few objects on the screen at any given time. It probably maxes out at about 40 enemies and 200 or so bullets. That being said, I'm splitting up the playing field into a grid for my collision checking. Right now, it's only 8 cells. How many would be optimal? I'm worried that if I use too many, I'll be wasting CPU power. My main concern is processing power, to make the game run smoothly. RAM is not a big concern for me.
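
    A rough sketch of the kind of uniform-grid broad phase being described, assuming each object exposes an axis-aligned bounding box and a cell_size chosen experimentally; all names are illustrative, not from the question:

        from collections import defaultdict
        from itertools import combinations

        def build_grid(objects, cell_size):
            """Bucket each object into every cell its bounding box overlaps."""
            grid = defaultdict(list)
            for obj in objects:
                x0, y0, x1, y1 = obj.bounds          # assumed (min_x, min_y, max_x, max_y)
                for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1):
                    for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1):
                        grid[(cx, cy)].append(obj)
            return grid

        def candidate_pairs(grid):
            """Only objects that share a cell need a narrow-phase collision test."""
            seen = set()
            for bucket in grid.values():
                for a, b in combinations(bucket, 2):
                    key = (min(id(a), id(b)), max(id(a), id(b)))
                    if key not in seen:
                        seen.add(key)
                        yield a, b

    With roughly 240 objects in play, a cell a few times the size of the largest object is usually a sensible starting point; profiling two or three cell counts around that value tends to settle the question faster than reasoning about it in the abstract.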

    Read the article

  • How can I get the best performance for playing games?

    - by Oli
    I've been playing a couple of Wine games today and decided to switch to metacity to see what the performance difference was like. If you've never done it before, you just run metacity --replace (but don't do that if you use Unity!). Anyway, surprise, surprise: it was like playing on a dedicated Windows gaming machine. Playing under metacity today was bliss: much higher framerates and just the fluidity you'd expect from a native game. I'm not sure I can go back. Switching to metacity is no hardship, but I wonder if there's anything else in the WM landscape that I should try out. I'm essentially looking for suggestions for the best way to play games: mix up WMs, dedicated X sessions, whatever, as long as it makes Wine games run faster.
    Small print:
    - One process per answer (e.g. new X session + Openbox).
    - We should probably land on a benchmark so we can show percentage improvement over a stock Compiz desktop. I'm open to suggestions in the comments.
    - If people could test a suggestion and submit how much it improves things for them in the comments, that would give others a good idea of whether it's worth the pain.

    Read the article

  • What are some efficient ways to set up my environment when working on a remote site?

    - by Prefix
    Hello fellow programmers, I am still a relatively new programmer and have recently gotten my first on-campus programming position. I am the sole dev responsible for 8 domains as well as 3 small PHP web apps. The campus has its web environment divided into staging and live servers: we develop on staging via SFTP and then push the updates to the live server through a web GUI. I currently use Sublime Text 2 and the Sublime SFTP plugin for all my dev work (it's my preferred editor). If I am just making an edit to a page, I'll open that individual file via the FTP browser. If I am working on the PHP web app projects, I have the app directory mapped to a local folder so that when I save locally the file is auto-uploaded through Sublime SFTP. I feel like this workflow is slow and sub-optimal. How can I improve my workflow for working with remote content? I'd love to set up a local environment on my machine, as that would eliminate the constant SFTP upload/download, but as I said there are many sites, and a local copy of the entire domain would be quite large and complex; not to mention that keeping it updated with whatever is latest on the staging server would be a nightmare. Does anyone know how I can improve my general web dev workflow from what I've described? I'd really like to cut out constantly editing over FTP, but I'm not sure where to start other than ripping the entire directory and dumping it into XAMPP.

    Read the article

  • The most efficient way to draw lines all day long with OpenGL

    - by nkint
    I'd like to put a computer screen that is running an OpenGL program in a room. It has to run all day long (but not at night). I'd like to draw lines that slowly fade into the background. The setting is simple: a uniform background color (say, black) and colored lines (say, white) that slowly fade out. By slowly I mean hours. Say the first line I draw has alpha 255 (fully visible); after one hour it is at 240, and after 10 hours at 105. One line could have 250 points, and there will be around 300 lines in one day. For now I have a prototype using a very rudimentary method like:

        glBegin( GL_LINE_STRIP );
        iterator = point_list.begin();
        for (++iterator, end = point_list.end(); iterator != end; ++iterator) {
            const Vec3D &v = *iterator;
            glVertex2f(v.x(), v.y());
        }
        glEnd();

    Is there a more efficient method?

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-Buffers, storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use a sphere-map algorithm to store the normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.
    First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication?
    Second, I want to store the normal in view space. All lighting in my engine is currently done in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.
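
    For reference, the unprojection the question describes boils down to something like the following numpy sketch (not the asker's shader; the inverse matrix, the depth value and the pixel's NDC coordinates are assumed inputs, and the depth convention depends on the projection setup):

        import numpy as np

        def reconstruct_position(ndc_x, ndc_y, depth, inv_view_proj):
            """Rebuild a pixel's position from its stored depth.

            ndc_x, ndc_y:  pixel position in normalized device coordinates [-1, 1]
            depth:         depth-buffer value remapped to the same [-1, 1] range
            inv_view_proj: inverse of (projection @ view) as a 4x4 numpy array
            """
            clip = np.array([ndc_x, ndc_y, depth, 1.0])
            pos = inv_view_proj @ clip
            return pos[:3] / pos[3]        # perspective divide back to world space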

    Read the article

  • Good resources for learning about graphics hardware

    - by Ken
    I'm looking for some good learning resources for graphics hardware (and the associated low-level software). Basically I want to learn more about what goes on underneath the OpenGL/DirectX API layers in terms of how things are implemented. I'm familiar with what happens in principle during the various stages of the rendering pipeline (viewing, projection, clipping, rasterization etc.). My goal is to be able to make better and more informed decisions about tradeoffs and potential optimisations when graphics/shader programming, with respect to the following kinds of issues:
    - batching
    - view culling
    - occlusions
    - draw order
    - avoiding state changes
    - triangles vs. point sprites
    - texture sampling
    - etc.
    Basically, whatever the graphics programmer needs to know about modern graphics hardware in order to become more effective. I'm not really looking for specific optimisation techniques; rather, I need more general knowledge so that I will naturally write more efficient code.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that uses a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640 MB of disk space. I'm thinking about speed optimizations, and I have to assume that when validating 10,000+ addresses, with each validation running up to 20 queries against the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed-optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
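
    As a rough illustration of what holding the table in memory could look like (a sketch only; the column layout and the choice of postal code as the lookup key are made up for the example):

        from collections import defaultdict

        def build_address_index(rows):
            """Index rows already fetched from MySQL so validation becomes a dict lookup.

            `rows` is assumed to be an iterable of tuples, e.g.
            (street_name, house_number, postal_code, city, ...); the real table
            has 43 columns and roughly 2.1 million rows.
            """
            by_postal_code = defaultdict(list)
            for row in rows:
                by_postal_code[row[2]].append(row)    # key on whatever you query most
            return by_postal_code

        # Each validation then scans index[postal_code] locally instead of making
        # up to 20 network round trips per address.

    Be aware that CPython's per-object overhead typically makes the in-memory footprint several times larger than the 640 MB the table occupies on disk, so it is worth measuring memory as well as speed before committing to this.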

    Read the article

  • I'm a premature optimizer

    - by Matthew Day
    I work at a small software/web development company. I have gotten into the habit of optimizing prematurely; I know it is evil and promotes bad code, but I have been working at this firm for a long while and have deemed it a necessary evil. It has never caused me an issue so far, but it might if I get partners or a successor. The point of this long-winded speech is: should I change my evil practices to 'save face' and to help out in the future?

    Read the article

  • OpenGL VBOs are slower than glDrawArrays.

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer and then the appropriate gl*Pointer functions with offsets into the buffer, then I use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, yet I'm getting about 15 FPS. This is where it gets weird: if I change it to not call glBindBuffer, change the gl*Pointer calls to be actual pointers into the array I have in system memory, and then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?

    Read the article

  • Good practices when optimizing HTML5/Javascript Game Development [closed]

    - by hustlerinc
    I'm just starting out as a game developer and have created a few crappy but playable clones of classic games like Pong and Bomberman. Being self-taught (bless the internet), I do this by just stuffing in code to make the games work. Now I feel the time has come to create something complete, and for this I need to know how a game is structured. I've searched on the web but there isn't that much to be found. The only "high-level" language I know is JavaScript, so reading a tutorial or article based on C++ doesn't help me that much. I'm looking for good resources that pedagogically cover the theory, and possibly examples (in JavaScript or pseudocode that is understandable for a beginner), of how the game pieces fit together, from the start screen to asset loading and running the game loop. I'm not looking for anything complicated like reading through a 4,000-line source code. All I want to learn is where, how and when the main parts of every game should be called. If you know any good resources to share, or maybe even have an answer for me, I would deeply appreciate it.
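
    The overall shape being asked about is roughly the following skeleton, written here in Python rather than JavaScript purely to show the structure; every function is a placeholder, not a prescription:

        import time

        def load_assets():
            """One-off setup: load images, sounds and level data, build the start screen."""
            return {"level": 1, "frame": 0}

        def handle_input(state):
            """Read keyboard/touch events; return False when the player quits."""
            state["frame"] += 1
            return state["frame"] < 600            # stub: stop after ~10 seconds

        def update(state, dt):
            """Advance the simulation: move entities, run collisions, tick timers by dt."""

        def render(state):
            """Draw the current frame."""

        def run_game():
            state = load_assets()
            previous = time.monotonic()
            while True:                            # the game loop
                now = time.monotonic()
                dt, previous = now - previous, now
                if not handle_input(state):        # input first...
                    break
                update(state, dt)                  # ...then simulation...
                render(state)                      # ...then drawing
                time.sleep(max(0.0, 1 / 60 - (time.monotonic() - now)))  # cap near 60 fps

        if __name__ == "__main__":
            run_game()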

    Read the article

  • How should I choose quadtree depth?

    - by Evpok
    I'm using a quadtree to prune collision detection pairs in a 2D world. How should I choose to what depth said quadtree is calculated? The world is made mostly of moving objects [1], so the cost of dispatching the objects between the quadtree cells matters. What is the relationship between the gain from less collision checking and the loss from more dispatching? How can I strike a balance that performs optimally?
    [1] To be completely explicit, they are autonomous self-replicating cells competing for food sources. This is an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
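
    One common rule of thumb, not from the question itself, is to stop subdividing once a leaf cell is only a few object-diameters across, which gives a starting depth to profile around; a sketch under that assumption:

        import math

        def suggested_quadtree_depth(world_size, typical_object_diameter, diameters_per_leaf=4):
            """Pick a depth so a leaf cell spans a small multiple of the object size.

            world_size:              side length of the (square) world
            typical_object_diameter: average diameter of the moving cells
            diameters_per_leaf:      how many object-diameters a leaf should span (tunable)
            """
            target_cell = diameters_per_leaf * typical_object_diameter
            return max(0, math.floor(math.log2(world_size / target_cell)))

        # e.g. a 1024-unit world with ~8-unit organisms suggests a depth around 5
        # (32x32-unit leaves); treat that as a starting point for profiling, since the
        # dispatch cost of moving objects is exactly what this formula cannot capture.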

    Read the article

  • How can I compress video files as much as possible?

    - by EmmyS
    I received 4 .mov files from a client that they want on their mobile website via SlideShowPro. Each original file was between 200 and 400 MB. I've gotten each one down to about 30 MB using Transmageddon as described here, but that's still really big for a mobile connection. Is there any way to shrink them even further? Maybe it's the settings; I used Output Format = MPEG4, Audio = AAC, Video = H264 (which is what is suggested by SlideShowPro).

    Read the article

  • How do I optimize SEO in a multiblog WordPress install?

    - by user35585
    We are about to launch two product pages plus a corporate website. The goal is to keep a blog on all of the sites, but the question is how to do it in a way that keeps everything unified without confusing Google's web crawlers. We considered the following options:
    - A single blog from which we retrieve two categories with custom CSS, so one blog is sub-split into two category-dependent blogs; this way we can get the feeds and point to it.
    - Two product blogs whose posts we retrieve into a bigger, corporate blog.
    - Three independent blogs.
    Although I was for the first option, so we only have to manage our content from the product pages, I would sincerely like to hear your opinion. We are afraid duplicate content or strange link games may make us lose PageRank. How would you do it?

    Read the article

  • I'm seeing a new format for title elements on a lot of major sites: a few tags, and only then the title of the site itself. Is this now optimal SEO?

    - by Tchalvak
    I'm seeing a lot of sites go to a format similar to the one now in use by Stack Exchange sites (i.e. short topic, maybe a longer topic, and only finally the site's name). Is this optimal SEO? It seems kind of weird, both because I've only begun to notice changes in this direction recently, and because it seems like it would make it harder to tell in search results which actual site you're visiting, even if the topic is one that matches. Still, sites like Facebook, StackOverflow, etc. probably aren't wrong, so I'm wondering if I should try to make my sites use that format going forwards...

    Read the article

  • FxCop / Code Analysis with VS2010 Ultimate

    - by Cuartico
    I've been getting some information about this, but I still can't find a proper answer. I was recently asked at my company to "run an FxCop analysis on that code and tell me the results". OK, I have VS2010 Ultimate, which has Code Analysis, but before making any comment I browsed around on the internet because I want to implement the best choice. So, let's say I'm going to use the same rules on both analyzers:
    - Should I recommend using one over the other?
    - Should I say "hey, that's kind of old, let's use Code Analysis!"?
    - Should I get the same results on different computers? (From what I understand, FxCop gives you some "points", and from what I've read it sometimes gives different points on different computers; I don't know whether the same applies to Code Analysis.)
    Thanks, any help would be appreciated.

    Read the article

  • Help w/ iPad 1 performance for tile-based DOM Javascript game

    - by butr0s
    I've made a 2D tile-based game with DOM/Javascript. For each level, the map data is loaded and parsed, then lots of tile elements are drawn onto a larger "map" element. The map is inside a container that hides overflow, so I can move the map element around by positioning it absolutely. It works a treat on desktop browsers and on my iPad 2. My problem is that performance is really bad on the iPad 1. The performance hit is directly related to all the tile elements in my map, because when I remove or reduce the number of tiles drawn, performance improves. Optimizing my collision detection loop has no effect. My first thought was to batch groups of tiles into containers, then hide/show them based on proximity to the player; however, this still causes a huge hiccup when the player moves and a new group of tiles is displayed (offscreen). Actually removing the out-of-sight elements from the DOM, then re-adding them as necessary, is no faster. Does anyone know of any tips that might speed up DOM performance here? My map is 1920 x 1920 pixels, so as far as I know it should be within the WebKit texture limit on iOS 5/iPad. The map is being moved with CSS3 transforms, and I've picked all the other obvious low-hanging fruit.

    Read the article

  • Why did I lose my page rank after a 301 redirect?

    - by rajesh.magar
    As we all know, Google treats sub-domains as completely separate domains, so we have to fight for both to get ranked in search results. One of my client's websites had example.com and blog.example.com. With the goal of keeping all the content in one place, we redirected blog.example.com to example.com/blog/. But after this we lost our PageRank, and we are still wondering where we went wrong, or whether it just takes some more time to come back. So what is the reason behind this?

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions for scaling the positional data correctly once it arrives at the vertex shader stage:
    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however.
    - Use (some power of two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division. Not sure how performant this is in shaders; not as intuitive to read the values, but possibly quite a bit faster than dividing by a non-PoT value.
    - Use a texture as a lookup table. I've heard that this is really fast.
    Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.
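
    For what it's worth, the CPU-side half of the power-of-two option could look roughly like this (a sketch, assuming heights stay within the int16 range at the chosen scale; the constants are illustrative):

        import numpy as np

        SCALE_SHIFT = 3                      # 8:1, so one short unit = 1/8 m = 12.5 cm
        SCALE = 1 << SCALE_SHIFT

        def quantize_heights(heights_m):
            """Convert float heights in metres to int16 eighths-of-a-metre for upload."""
            q = np.round(np.asarray(heights_m) * SCALE)
            assert np.all(np.abs(q) < 2 ** 15), "heights overflow int16 at this scale"
            return q.astype(np.int16)        # 2 bytes per component instead of 4

        def dequantize_heights(q):
            """What the vertex shader effectively does: scale back to metres."""
            return q.astype(np.float32) / SCALE

        # Round-trip error is at most 1/16 m (about 6 cm), far finer than whole-metre steps.

    In the shader itself, multiplying by the constant 1.0/8.0 is generally at least as cheap as a bitshift, since it is a single multiply, so the difference between the first two options may matter less than it first appears.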

    Read the article

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources on WP7. My current approach is:
    1. Load the newspaper information from an XML file.
    2. Go to the specified sections and fetch the URLs of the news items.
    3. Go to each URL and fetch the headline, image and publisher.
    4. Display everything using the MVVM architecture of Windows Phone.
    The whole thing takes place asynchronously, meaning that as soon as a URL from a section of a newspaper is fetched it is added to the queue, and the second stage, consisting of fetching the headline, image etc., starts; as soon as this is fetched even for one article, it is displayed. Later on, as more articles are fetched, they are added to the list. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that even fetching around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because many times the images are not available with the headlines, and are only found in the article itself.
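
    The two-stage pipeline described above, sketched in Python rather than the asker's WP7 stack purely to show the shape of the concurrency; fetch_section_urls and fetch_article are placeholders:

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def fetch_section_urls(section):
            """Stage 1 placeholder: return the article URLs found on one section page."""
            return []

        def fetch_article(url):
            """Stage 2 placeholder: return (headline, image, publisher) for one article."""
            return url, None, None

        def scrape(sections, workers=8):
            results = []
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # Stage 1: fetch all section pages in parallel.
                url_futures = [pool.submit(fetch_section_urls, s) for s in sections]
                article_futures = []
                for fut in as_completed(url_futures):
                    # Stage 2 starts as soon as any section's URLs arrive.
                    article_futures += [pool.submit(fetch_article, u) for u in fut.result()]
                for fut in as_completed(article_futures):
                    results.append(fut.result())   # display/append as each article completes
            return results

    With 9 sections and around 80 articles, wall-clock time tends to be dominated by the slowest individual requests once they overlap, so raising the number of concurrent connections (within whatever limit the platform's HTTP stack allows) is usually the first thing worth measuring.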

    Read the article
