Search Results

Search found 13104 results on 525 pages for 'non blocking'.


  • Online accounts advanced setting with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts had. How do I change the Google server to connect to? I read here: https://wiki.gnome.org/Empathy/FAQ

        I can't connect to my Google Talk account
        Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest workaround is to set "talk.google.com" in the "Server" field of the advanced section of the account.

    So I think this might fix my problem, or maybe just an option to shift the port it connects to. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.

    Read the article

  • Game Code Design for Rendering

    - by kuroutadori
    I first created a game on the iPhone, and I'm now porting it to Android. I wrote most of the code in C++, but the port wasn't straightforward. The Android way is to have two threads, one for rendering and one for updating, because some devices block when updating the hardware. My problem is that I'm coming from the iPhone: when I transition, say from the Menu to the Game, I would stop the Animation (rendering) and load up the next Manager (the Menu has a Manager and so has the Game). I could implement the same thing on Android, but I have noticed that game ports like Quake don't do this, as far as I can tell. I have learnt that I cannot just dynamically add another Renderer class to the tree, because I will probably get a dequeuing buffer error, which I believe to be a problem on the OpenGL ES side. So how is it done?
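    One common approach, sketched below under assumptions not stated in the post (a GLSurfaceView-based setup; the Manager interface and method names are illustrative): keep a single Renderer alive for the whole app and hand it the next Manager through a field the render thread picks up, so the old manager's GL resources are released and the new one's created on the thread that owns the GL context, and no second Renderer is ever attached.

        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;
        import android.opengl.GLSurfaceView;

        // "Manager" mirrors the Menu/Game managers from the post; names are made up.
        interface Manager {
            void load();           // create GL resources (runs on the render thread)
            void unload();         // release GL resources (runs on the render thread)
            void draw();
            void update(float dt); // driven by the separate update thread
        }

        class GameRenderer implements GLSurfaceView.Renderer {
            private Manager current;
            private volatile Manager pending; // written by the update/UI thread

            // Called from the update/UI thread to request a transition (Menu -> Game).
            void transitionTo(Manager next) { pending = next; }

            @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
            @Override public void onSurfaceChanged(GL10 gl, int width, int height) { }

            @Override public void onDrawFrame(GL10 gl) {
                Manager next = pending;
                if (next != null && next != current) {
                    // The swap happens on the GL thread, so the old manager's
                    // buffers/textures are deleted and the new one's are created
                    // on the thread that owns the GL context.
                    if (current != null) current.unload();
                    next.load();
                    current = next;
                }
                if (current != null) current.draw();
            }
        }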

    Read the article

  • still getting 403 on apt-get install: no proxy, urls seems valid

    - by Berry Tsakala
    I'm trying to install LibreOffice (or OpenOffice) on Ubuntu Server 12.10. The packages exist (verified with apt-cache search), the file /etc/apt/apt.conf.d/30proxy doesn't exist on my system, and the text 'proxy' isn't mentioned anywhere in /etc/apt/apt.conf.d/ (checked with grep). Other packages that I apt-get install are installed OK. The only thing I haven't done is replace the repository servers; I'm afraid it could break the dpkg system! Related questions:

        http://askubuntu.com/questions/304340/apt-get-403-forbidden?rq=1
        http://askubuntu.com/questions/303150/apt-get-403-forbidden-but-accessible-in-the-browser
        http://askubuntu.com/questions/409998/proxy-blocking-apt-get-allowing-wget-curl
        http://askubuntu.com/questions/367737/apt-get-upgrade-gives-403-forbidden-error?rq=1

    What else can I do to solve this 403 error and install LibreOffice/OpenOffice using apt?
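    Before touching the repository list, it may be worth ruling out a proxy picked up from somewhere other than apt.conf.d. A quick checklist, not a guaranteed fix; these are all standard apt/shell commands:

        # every place apt might read a proxy from
        apt-config dump | grep -i proxy
        grep -ri proxy /etc/apt/
        env | grep -i proxy

        # refresh the index, then retry the one failing package
        sudo apt-get update
        sudo apt-get install libreoffice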

    Read the article

  • Is my robots.txt working as it should?

    - by TigerBlood
    I want crawlers to have access to http://www.example.com/ but not to anything below the root. My robots.txt is as follows:

        User-agent: *
        Allow: /$
        Disallow: /

    My site is in Google search results, but I am not coming up in Bing, Yahoo, etc. I have had the same robots.txt since last year; I initially requested inclusion about a year ago and have resubmitted the URL to those search engines several times since. Is my robots.txt blocking those other crawlers? And if so, why not Google as well? Thanks in advance!
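    Added context, not from the post: Allow and the $ end-anchor are extensions to the original robots.txt protocol. The major engines document support for them, but not every crawler honors them, which could explain uneven results. The same directives restated with comments (same semantics, assuming an extension-aware crawler):

        User-agent: *
        Allow: /$      # "$" anchors the match: the bare root URL only
        Disallow: /    # everything else is blocked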

    Read the article

  • How to add a sound that an enemy AI can hear?

    - by Chris
    Given: a 2D top-down game. Tiles are stored in a 2D array, and every tile has a property, dampen (so bricks might be -50 dB, air might be -1). From this I want a sound generated at point x1, y1 to "ripple out". The image below kind of outlines it better. Obviously the end goal is that the AI enemy can "hear" the sound, but if a wall is blocking it, the sound doesn't travel as far. Red is the wall, which has a dampen of 50 dB. I think by the 3rd game tick I am confusing my maths. What would be the best way of implementing this?
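    One way to model the ripple is a flood fill from the sound source that subtracts each tile's dampen value from the remaining loudness and stops when the energy runs out; the enemy "hears" the sound if its tile ends up with a positive value. A rough Java sketch under those assumptions (the grid layout and dB numbers are placeholders taken from the post):

        import java.util.ArrayDeque;
        import java.util.Queue;

        class SoundPropagation {
            // remaining[y][x] = loudness left when the ripple reaches that tile; 0 = inaudible
            static double[][] propagate(double[][] dampen, int sx, int sy, double sourceDb) {
                int h = dampen.length, w = dampen[0].length;
                double[][] remaining = new double[h][w];
                Queue<int[]> frontier = new ArrayDeque<>();
                remaining[sy][sx] = sourceDb;
                frontier.add(new int[] { sx, sy });
                int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                while (!frontier.isEmpty()) {
                    int[] cur = frontier.poll();
                    for (int[] d : dirs) {
                        int nx = cur[0] + d[0], ny = cur[1] + d[1];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        // entering a tile costs that tile's dampen value (wall 50, air 1)
                        double left = remaining[cur[1]][cur[0]] - Math.abs(dampen[ny][nx]);
                        if (left > remaining[ny][nx]) { // found a louder path to this tile
                            remaining[ny][nx] = left;
                            frontier.add(new int[] { nx, ny });
                        }
                    }
                }
                return remaining;
            }
        }

    Walls then naturally shorten the ripple in their direction, because paths through them lose 50 dB per tile while paths through air lose 1.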

    Read the article

  • How to secure a robots.txt file?

    - by CompilingCyborg
    I would like user-agents to index my relative pages only, without accessing any directory on my server. As an initial thought, I had this version in mind:

        User-agent: *
        Disallow: */*

        Sitemap: http://www.mydomain.com/sitemap.xml

    My questions: Is it correct to block all directories like that, with Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial for robots.txt:

        #Add this if you want to stop Alexa from indexing your site.
        User-agent: ia_archiver
        Disallow: /
        #Add this to stop duggmirror
        User-agent: duggmirror
        Disallow: /
        #Add this to allow specific agents
        User-agent: Googlebot
        Disallow:
        #Add this to allow all agents while blocking specific directories
        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /*?*

    Read the article

  • Best way to handle realtime melee AI in authoritative network environment

    - by PrimeDerektive
    So I've been working on a multiplayer game for a bit; it's a co-op action RPG with real-time combat. If you've seen or played TERA, I'd say it's comparable to that, but not an MMO, heh. I'm currently handling the AI units authoritatively: the server calculates their pathing, movement, and pursue/attack logic, syncs the movement to the clients 15x per second, and syncs state changes as they happen. When I emulate 200ms ping, though, the client can perceive being out of range of an AI's attack but still take the hit, because on the server he hadn't moved that far yet. This also plays hell with my real-time blocking. I don't really want the clients to be allowed to say "that was out of range" or "I blocked that", but I'm not sure how else to handle it.
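    The usual middle ground is server-side lag compensation: the server keeps a short history of every unit's position and, when validating a hit or a block, rewinds to where things were roughly one round-trip ago, so it judges against the past the client actually saw without giving the client authority. A sketch of the history buffer (the 15 Hz sync rate comes from the post; everything else is illustrative):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Server-side history of one AI unit's position, for rewinding range/block checks.
        class PositionHistory {
            static final class Snapshot {
                final long timeMs; final float x, y;
                Snapshot(long t, float x, float y) { this.timeMs = t; this.x = x; this.y = y; }
            }

            private static final long KEEP_MS = 1000; // retain ~1 second of snapshots
            private final Deque<Snapshot> history = new ArrayDeque<>();

            // Call each server tick (e.g. the same 15x/sec movement sync).
            void record(long nowMs, float x, float y) {
                history.addLast(new Snapshot(nowMs, x, y));
                while (!history.isEmpty() && nowMs - history.peekFirst().timeMs > KEEP_MS)
                    history.removeFirst();
            }

            // Where this unit was when the lagged client rendered the frame it is
            // reacting to: rewind by that client's measured round-trip time.
            Snapshot rewind(long nowMs, long clientRttMs) {
                long target = nowMs - clientRttMs;
                Snapshot best = null;
                for (Snapshot s : history)
                    if (best == null || Math.abs(s.timeMs - target) < Math.abs(best.timeMs - target))
                        best = s;
                return best; // null if no history has been recorded yet
            }
        }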

    Read the article

  • What is So Unique About Node.js?

    - by Adrian Shum
    Recently there has been a lot of praise for Node.js. I am not a developer with much exposure to network applications. From my bare understanding of Node.js, its strength is: we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, can't I create just one thread using NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections with that thread, and provide an event-based architecture to implement the data-handling logic (it shouldn't be that difficult, by providing some callbacks etc.)? Given that the JVM is an even more mature VM than V8 (I expect it to run faster too), and an event-based handling architecture doesn't seem difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?
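    For comparison, the single-threaded event loop the question describes is indeed possible with the standard java.nio Selector API. A minimal non-blocking echo server sketch (illustrative, not tuned; the port number is arbitrary):

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class NioEchoServer {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(9000));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);
                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {                         // one thread, many connections
                    selector.select();                 // waits until some channel is ready
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {      // event: new connection
                            SocketChannel client = server.accept();
                            if (client == null) continue;
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) { // event: data available
                            SocketChannel client = (SocketChannel) key.channel();
                            buf.clear();
                            int n = client.read(buf);
                            if (n < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf);         // echo back (may be partial; fine for a sketch)
                        }
                    }
                }
            }
        }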

    Read the article

  • Installing VMware 9.0 on Ubuntu 13.10

    - by user212290
    After I installed VMware Workstation 9.0, when I try to open a VM, a dialog appears saying "Before you can run VMware, several modules must be compiled and loaded into the running kernel", with CANCEL and INSTALL buttons. When I click the INSTALL button, nothing happens. When I run:

        sudo apt-get install linux-headers-3.11.0-12-generic
        sudo /usr/bin/vmware-modconfig --icon=vmware-workstation --appname=VMware

    I get:

        cc1: some warnings being treated as errors
        make[2]: *** [/tmp/modconfig-T9k19t/vmci-only/linux/driver.o] Error 1
        make[2]: *** Waiting for unfinished jobs....
        make[1]: *** [_module_/tmp/modconfig-T9k19t/vmci-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.11.0-12-generic'
        make: *** [vmci.ko] Error 2
        make: Leaving directory `/tmp/modconfig-T9k19t/vmci-only'
        Failed to build vmci. Failed to execute the build command.
        Starting VMware services:
            Virtual machine monitor                  done
            Virtual machine communication interface  failed
            VM communication interface socket family done
            Blocking file system                     done
            Virtual ethernet                         failed
            VMware Authentication Daemon             done

    Read the article

  • Is there a class or id that will cause an ad to be blocked by most major adblockers?

    - by Moak
    Is there a general class or ID for an HTML element that a high majority of popular adblockers will block on a website they have no specific filters for? My intention is to have my advertisement blocked; avoiding automatic blocking is easy enough. I was thinking of maybe borrowing some IDs or classes from big advertisement companies that are already being fought off quite actively. Right now my HTML is:

        <ul id=partners>
          <li class=advertisment><a href=# class=sponsor><img class=banner></a></li>
        </ul>

    Will this work, or is there a more solid approach?
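    If the goal is to trip generic filters rather than site-specific ones, one guess (and it is only a guess: filter lists such as EasyList change constantly and should be checked directly) is to use the naming patterns those lists broadly target, i.e. classes and ids containing words like "ad", "ads", or "banner":

        <ul id="partners">
          <li class="ad-banner ads"><a href="#" class="sponsor"><img class="banner ad"></a></li>
        </ul>

    The easiest verification is empirical: load the page with one or two popular adblockers enabled and check whether the element is hidden.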

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to check my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master. Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET on that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it?
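    If the immediate goal is just to keep that crawler away, blocking by User-Agent is one option: the log line above shows the feedzirra library announces itself in its agent string. A sketch using standard Apache 2.2 directives (mod_setenvif and mod_authz_host; Apache 2.4 would use Require instead):

        # .htaccess
        SetEnvIfNoCase User-Agent "feedzirra" bad_bot
        Order Allow,Deny
        Allow from all
        Deny from env=bad_bot

    This doesn't explain the install.php redirect itself, which may be a separate WordPress issue worth investigating on its own.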

    Read the article

  • How best to take a user's signature online? (UK law orientated) [closed]

    - by Ben Griffiths
    Not sure if this is the best place to ask, but I can't seem to find any other SE site that would fit better (unless there's a law one?). I'm building an application that will replace an existing paper-based form, which would normally be signed by the person filling it in. Looking around, it's hard to find a good definitive resource explaining what I can and cannot accept as a signature. It looks like some UK government online forms accept just your name typed into a box, but I've also heard you should back it up with an email, so the process would be: type your name into a box, provide an email address, send out an email, then make them click a link within the email to complete the verification. Involving email seems very long-winded and leaves the system open to spam filters blocking emails, forgotten emails that just sit in inboxes, etc. So, does anyone have any knowledge in this department? Personally, I'd love to just get them to type their name into a box and be done with it!

    Read the article

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this:

        /index.php/Topic
        /index.php/AnotherTopic

    These were indexed in Google, and search results pointed to them. However, I've recently replatformed that site and reconfigured it so the above URLs become:

        /index.php?title=Topic
        /index.php?title=AnotherTopic

    The original URLs are returning 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google's Webmaster Tools is going slightly bananas at the spike in 404 errors in its crawl results. What would be the best approach to get Google to 'forget' the old schema and index the new one instead? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404 for the original URLs?
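    301s are the standard answer here: a permanent redirect tells Google the old URL has moved and transfers it to the new one, whereas a robots.txt block would only hide the 404s without reassigning anything. A mod_rewrite sketch for Apache, assuming the old path-info style URLs shown above (test before deploying; per-directory rewrites of path-info URLs can interact with AcceptPathInfo):

        # .htaccess at the document root
        RewriteEngine On
        RewriteRule ^index\.php/(.+)$ /index.php?title=$1 [R=301,L]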

    Read the article

  • Queuing RPC calls

    - by alfa64
    I'm designing a system which listens for JSON-RPC calls from clients, piles them up in a list, and when the list gets full stores them in a DB while continuing to receive calls. My original plan is to listen for the RPC calls in Perl with json-rpc and put them in the array. The clients do some long polling on another server to get responses as they appear. What is this blocking/non-blocking thing? Should I write a node.js script to listen for the calls? What do you think is good practice in this case? The objective is to handle as many calls as possible.
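    Whichever language ends up listening, the core pattern is a producer/consumer queue: the listener thread appends and never blocks on the database, while a separate writer thread drains the queue in batches. A Java sketch of that shape (the batch size and DB call are placeholders):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        class CallBuffer {
            private static final int BATCH_MAX = 1000;        // placeholder batch limit
            private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

            // Called by the RPC listener: enqueue and return, never touching the DB.
            void accept(String jsonRpcCall) throws InterruptedException {
                pending.put(jsonRpcCall);
            }

            // Run on a dedicated writer thread.
            void flushLoop() throws InterruptedException {
                List<String> batch = new ArrayList<>(BATCH_MAX);
                while (true) {
                    batch.add(pending.take());                       // wait for the first call
                    pending.drainTo(batch, BATCH_MAX - batch.size()); // grab whatever else queued up
                    storeBatch(batch);                               // one DB round trip per batch
                    batch.clear();
                }
            }

            private void storeBatch(List<String> batch) {
                /* placeholder: INSERT the batch in a single transaction */
            }
        }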

    Read the article

  • Error installing gPodder

    - by Ron Webb
    A few weeks ago (newbie alert!) I started using Xubuntu 12.04 with Xfce 4.8. I'm trying to install the gPodder Podcast Client (see https://launchpad.net/~thp/+archive/gpodder). I've added the PPA via terminal commands as instructed. When I click the Install button in the Ubuntu Software Centre, I get the following error:

        Package dependencies cannot be resolved

        This error could be caused by required additional software packages which are
        missing or not installable. Furthermore there could be a conflict between
        software packages which are not allowed to be installed at the same time.

        Details: The following packages have unmet dependencies:
        gpodder: Depends: python-webkit but it is not going to be installed

    What do I need to do? Just to make things more complicated: I'm not sure, but before I found the launchpad.net link, I think I may have tried to install gPodder from the default Ubuntu repositories (also unsuccessfully). There may be remnants of the previous attempt still installed, which may be blocking the new install. Where/how can I find them?
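    A few standard apt/dpkg checks that usually surface what is holding python-webkit back and whether leftovers exist (nothing here is gPodder-specific):

        # ask apt to explain the refusal directly
        sudo apt-get install python-webkit

        # look for half-installed leftovers from the earlier attempt
        dpkg -l | grep -Ei 'gpodder|webkit'

        # repair any broken/partially-installed state, then retry
        sudo apt-get -f install
        sudo apt-get update
        sudo apt-get install gpodder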

    Read the article

  • Internet keeps dropping in Ubuntu 12.04

    - by Kokuryuu
    I am having an issue with Ubuntu: I keep getting "server not found" errors when I try to go online. I have tried pinging my DNS server and Google, and it seems like no packets are dropped, but my Internet is still not working. The other PCs in the house have working Internet connections. I am sure there is no firewall active here, nothing on my router is blocking, and I checked the cables. I have no issues on the same PC in Windows; it is only in Ubuntu. Any ideas?

    Read the article

  • Thread count in Java game

    - by Taylor Hill
    I'm just curious as to what a reasonable number of threads is for a simple 2D mmo in Java. Is it reasonable to have two threads per connection, one for the input stream and one for the output stream? The reason I ask is because I use a blocking method on the input stream, and a workaround seems unnecessarily complex if I were to try to get around it without adding threads. This is mostly for my own edification; I don't expect to have 5 million people playing it ever, or even 5, but I'm wondering what a good scalable solution is, and if this is reasonable for a small server (<30 connections).

    Read the article

  • Does the Instant Preview in Google Webmaster Tools take robots.txt into account?

    - by rockyraw
    Is that the way to go if I want to see visually what Googlebot sees? I'm trying to check a folder which I have just blocked in my robots.txt. If I fetch the folder as Googlebot, it fetches OK, so that doesn't tell me anything about whether the block is working. I know there's a tool to check for blocking, but it is dependent on the input of the robots.txt. Therefore I've tried the Instant Preview, and I don't get a preview of what the bot sees ("pre-render"), so I think that means the robots.txt blocks it; however, I don't see that the bot tried beforehand to access my updated robots.txt, so I'm not sure how it knows that this folder is blocked? (It does preview another new folder that is not blocked.)

    Read the article

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific Z depth, pixels drawn are 1:1 with my source textures: 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 / the size of the texture, wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel for pixel the same as Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m) {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th-scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other. So the width of the screen in units is 32.0, which means the leftmost pixel is at -16.0 and the rightmost at 16.0. After messing around a bit, I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why. I don't know if the 16 is related to the screen size or just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
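    A short derivation of why multiplying m[0] by 16 works, based on the matrixPerspective code above (added context, not part of the original post). In that function, right - left = 2 * near * tan(FOV/2), so:

        m[0] = 2 * near / (right - left) = 1 / tan(FOV/2)

    A point at depth Z lands on the edge of the screen when its sideways offset equals Z * tan(FOV/2), so pixel unity happens when the visible half-width at depth Z equals the screen's half-width in world units, which is 16 here:

        Z_unity = halfWidthInUnits / tan(FOV/2) = 16 * m[0]

    Checking against the measured values: 16 / tan(22.5 deg) = 38.63 and 16 / tan(15 deg) = 59.71, matching the empirically found 38.5 and 59.5. The 16 is therefore not a lucky guess; it is exactly the half-width of the screen in world units, which is why the sprite stays 1:1 as the FOV varies.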

    Read the article

  • Release Notes 12/12/2012

    This past week the CodePlex team worked on several fixes to improve the stability of our TFS infrastructure, including applying TFS 2012 Update 1. We apologize for the recent downtime. We are not completely out of the woods, but we appreciate your patience as we work through the issues. Additional bug fixes:

    - Fixed several issues with character encoding within file paths.
    - Fixed issue where the number of pull requests and forks were disappearing after selecting either link.
    - Fixed issue blocking license changes when special characters exist in the copyright holder field.

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • jQuery scrolling images for e-commerce site, what to do about users who disable JS

    - by Livingston Storm
    As the title suggests, I am developing an e-commerce site and I intend to have two jQuery plugins on the default page, one for scrolling images and the other for the navigation menu. Should I be concerned about making the site work for users who disable JS? Because with it disabled, my site would be almost impossible to use, with the scrolling images blocking the main content. Plus the CMS I am using, Big Commerce, uses a bit of JS for the product pages, which would also look ridiculous with JS disabled. Anyone have experience with this?
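    The usual safety net is progressive enhancement: serve the slides as plain markup that the jQuery plugin upgrades into a carousel, plus a <noscript> fallback for anything that genuinely requires JS. A sketch with illustrative markup (the plugin itself is not shown):

        <div id="slideshow">
          <img src="/images/featured-1.jpg" alt="Featured product 1">
          <img src="/images/featured-2.jpg" alt="Featured product 2">
          <!-- with JS enabled, the jQuery plugin turns this into a scroller;
               without it, visitors simply see stacked static images -->
        </div>
        <noscript>
          <p>This site works best with JavaScript enabled.
             You can still <a href="/products">browse all products here</a>.</p>
        </noscript>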

    Read the article

  • Video Bug in Ubuntu 12.10 MSI CX623 laptop

    - by user104731
    http://www.youtube.com/watch?v=5xyJkS8XjgA&feature=youtu.be I managed to take a video of what happens on my Ubuntu 12.10 laptop when I play with the volume, hover with the mouse over the slider, and press Ctrl + Alt + D (show desktop / minimize all). The real bugs are that the panel becomes transparent along its lower edge, exactly above the volume slider (and not only when that slider appears, but also when the brightness slider appears and when my touchpad-blocking applet appears), and that it happens when a maximized window, several maximized windows, or a non-maximized window touching the panel is present while I use Ctrl+Alt+D (show desktop / hide all normal windows). Is there a way to solve this bug? PS: The other bugs are from RecordMyDesktop. I didn't have the mentioned bugs in Ubuntu 12.04, but I like the graphics more in 12.10.

    Read the article

  • Alt text vs CSS sprites (SEO vs speed)

    - by leeoniya
    I'm reworking our site to reduce HTTP requests and blocking requests by concatenating JS and CSS, gzipping, loading all JS via LABjs, and using CSS sprites for images that were previously loaded individually via <img> tags. Progress has been great so far: a 5x page-load performance improvement. However, we're in the top 5 organic search rankings in Google for many targeted keywords and phrases, and I'm afraid eliminating so many img tags with alt attributes could hurt our SEO. Does anyone have experience with alt tag manipulation/removal and its effect on SEO positions? Is previous rank "sticky"?
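    One common compromise keeps crawlable text in the markup while the sprite handles the visuals: the link text is moved off-screen rather than removed, so nothing depends on alt attributes. A sketch with illustrative class names (whether an engine treats off-screen text kindly is ultimately its call, especially if the text does not match what the image shows):

        <a href="/widgets" class="sprite sprite-widgets">
          <span class="offscreen">Blue Widgets</span>
        </a>

        /* CSS */
        .sprite { display: block; background: url(/img/sprites.png) no-repeat; }
        .sprite-widgets { width: 200px; height: 60px; background-position: 0 -120px; }
        .offscreen { position: absolute; left: -9999px; } /* text stays in the DOM for crawlers */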

    Read the article
