Search Results

Search found 55 results on 3 pages for 'nbolton'.

Page 1/3 | 1 2 3  | Next Page >

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far:

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, this just renders a few white dots; I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites anywhere other than on the 0 z-axis, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
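    On the znear/zfar point, a minimal sketch of how the clip volume might be widened through libgdx's fixed-function GL10 interface (my own assumption -- the bounds and depth range below are illustrative, not values from the question):

        void setupProjection() {
            Gdx.gl10.glMatrixMode(GL10.GL_PROJECTION);
            Gdx.gl10.glLoadIdentity();
            // left, right, bottom, top, near, far -- a generous depth range so
            // sprites drawn at z != 0 are not clipped away
            Gdx.gl10.glOrthof(-1f, 1f, -1f, 1f, -10f, 10f);
            Gdx.gl10.glMatrixMode(GL10.GL_MODELVIEW);
            Gdx.gl10.glLoadIdentity();
        }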

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space) - so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem. If you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android).

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: How do I solve the transparency overlap problem?
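    One common fix -- a hedged sketch, not necessarily the answer the poster went on to post -- is to keep the depth test but also enable the fixed-function alpha test, so fully (or mostly) transparent fragments are discarded before they can write to the depth buffer:

        void create() {
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
            // Discard fragments whose alpha is at or below the threshold so
            // they never occlude sprites behind them (0.5f is illustrative).
            Gdx.gl.glEnable(GL10.GL_ALPHA_TEST);
            Gdx.gl10.glAlphaFunc(GL10.GL_GREATER, 0.5f);
        }

    This suits the hard-edged cut-out transparency typical of isometric tiles; genuinely semi-transparent pixels still need back-to-front sorting.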

    Read the article

  • How do I draw a point sprite using OpenGL ES on Android?

    - by nbolton
    Edit: I'm using the GL enum, which is incorrect since it's not part of OpenGL ES (see my answer). I should have used GL10, GL11 or GL20 instead.

    Here are a few snippets of what I have so far:

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, this just renders a few white dots; I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites anywhere other than on the 0 z-axis, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
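    Based on that edit, a hedged sketch of the same method using the OpenGL ES constants from GL10/GL11 rather than the desktop GL enum (the names match the ones used in the later libgdx snippet on this page where the point sprite does render):

        void renderSprite() {
            tiles.bind(); // binds the texture to GL_TEXTURE_2D
            Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
            Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
            Gdx.gl11.glTexEnvi(GL11.GL_POINT_SPRITE_OES,
                GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE);
            Gdx.gl10.glPointSize(32f); // illustrative point size
            renderer.begin(GL10.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }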

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body), the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code:

        private RevoluteJoint joinBodyParts(
                Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) {
            RevoluteJointDef jointDef = new RevoluteJointDef();
            jointDef.initialize(a, b, a.getWorldPoint(anchor));
            jointDef.enableLimit = true;
            jointDef.lowerAngle = lowerAngle;
            jointDef.upperAngle = upperAngle;
            return (RevoluteJoint)world.createJoint(jointDef);
        }

        private Body createRectangleBodyPart(
                float x, float y, float width, float height) {
            PolygonShape shape = new PolygonShape();
            shape.setAsBox(width, height);
            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.y = y;
            bodyDef.position.x = x;
            Body body = world.createBody(bodyDef);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = shape;
            fixtureDef.density = 10;
            fixtureDef.filter.groupIndex = -1;
            fixtureDef.filter.categoryBits = FILTER_BOY;
            fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL;
            body.createFixture(fixtureDef);
            shape.dispose();
            return body;
        }

    I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so:

        torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f);
        Body head = createRoundBodyPart(x, y + 7.4f, 1);
        Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);
        Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);

        joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle);
        leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
        rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
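    A hedged sketch of one adjustment that often tightens up Box2D joints (an assumption, not a confirmed fix for this rag doll): joints are solved iteratively, so dense bodies combined with a coarse or variable time step can show visible separation on impact. A small fixed step with more velocity/position iterations is the usual first thing to try; the step and iteration counts below are illustrative.

        // Illustrative values -- not taken from the question. The Box2D manual
        // suggests 8 velocity and 3 position iterations as a starting point.
        private static final float TIME_STEP = 1 / 60f;
        private static final int VELOCITY_ITERATIONS = 10;
        private static final int POSITION_ITERATIONS = 10;

        private void updatePhysics() {
            world.step(TIME_STEP, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
        }

    Lowering fixtureDef.density can also reduce the momentum the joint solver has to cancel out.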

    Read the article

  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote:

        create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.internal("data/tiles2.png"),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
            Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
            Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
            Gdx.gl11.glTexEnvi(
                GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL11.GL_TRUE);
            Gdx.gl10.glPointSize(s);
            tiles.bind();
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            renderer.begin(GL10.GL_POINTS);
            // render 3 point sprites at various 3d points
            renderer.vertex(-.1f, 0, -.1f);
            renderer.vertex(0, 0, 0);
            renderer.vertex(.1f, 0, .1f);
            // ... more vertices here if needed (one for each sprite) ...
            renderer.end();
        }
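    For what it's worth, a hedged sketch of the usual alternative (my assumption, not a confirmed answer): GL_COORD_REPLACE always generates coordinates spanning the whole bound texture, so where only one tile of an atlas is wanted, drawing a textured quad with explicit source coordinates -- for example via libgdx's SpriteBatch, which an earlier question on this page says already works -- sidesteps the limitation. The tile size, batch, and positions below are illustrative.

        // Hedged sketch: draw a single tile from the atlas as a quad rather
        // than a point sprite. Tile size and destination are illustrative.
        void renderTile(SpriteBatch batch, int col, int row) {
            int tileSize = 64; // assumed tile size in pixels
            batch.begin();
            batch.draw(tiles,
                100, 100, tileSize, tileSize,                       // destination x, y, w, h
                col * tileSize, row * tileSize, tileSize, tileSize, // source region in the atlas
                false, false);                                      // no flipping
            batch.end();
        }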

    Read the article

  • Are there advantages of using hard coded URLs for localization?

    - by nbolton
    On the Synergy website, localization is detected (and can be overridden) but uses the same URL for all languages. Some websites, however, like Wikipedia, have language-specific subdomains. What are the advantages of having either subdomains or subdirectories (i.e. a specific URL) for each language localization? Also, should it automatically redirect the user to the specific subdomain/subdir based on the language that the browser requests? I suspect that there are advantages, which I'm guessing are:
      - When the website appears in search results for non-English languages, the translated page description will be shown (assuming there is a translation provided by the website).
      - When a user shares a page (e.g. through Twitter), it will show in a specific language. Perhaps this is a disadvantage though?
    Am I correct? If so, are there more advantages?

    Read the article

  • How do I solve MSSQL 2008 install error, "The MOF compiler could not connect with the WMI server"?

    - by nbolton
    Possibly related to: SQL Server 2008 Install fails error reading etwcls.mof After manually removing MSSQL 2008 from my system (uninstall failed to remove two instances), I receive the following error when trying to re-install: The MOF compiler could not connect with the WMI server. This is either because of a semantic error such as an incompatibility with the existing WMI repository or an actual error such as the failure of the WMI server to start. It seems that mofcomp is failing with one of the .mof files, but I'm not sure which, or why. Digging through the connect article gave some indications, but no solution. I've run winmgmt /salvagerepository, which returns "WMI repository is consistent". Currently, I'm unable to install MSSQL 2008. Please help!

    Read the article

  • Why does only "network" appear in Startup Disks on my Mac?

    - by nbolton
    I have a Linux dual-boot setup on my Mac (running Leopard). When I open System Preferences > Startup Disk, I only see "Network Startup" and no HDD or BOOTCAMP as expected. So now, annoyingly, because "Network Startup" is the only option, it tries to start using the network (the flashing globe) for a short while rather than booting directly into Mac OS X. Is there a way to either fix Startup Disk or manually hack this?

    Read the article

  • Can a wifi AP act as a client, and a server at the same time?

    - by nbolton
    I feel this is SF worthy (as opposed to SU) as I go into a bit of detail on gateways/routing. Here's my ideal setup (if possible) -- there is a wifi network (let's call it bob's) which I want access to, but I have a few other computers on my network which I want to keep behind a firewall. So I was thinking of buying a wireless access point so that I could set it up to connect to bob's network from the AP, and then from my server, connect to the AP via ethernet. So that's the first bit. The second part is that I want to have my own private wifi network off the back of this; can I then tell the AP to serve a new network called foobar? When I say private network, I mean that my server is actually a Debian Linux install with routing configured (and I also do some QoS stuff, etc.). So ideally, I'd like all the clients on the private network to be behind the server in terms of routing. However, if the private clients connect to the server via wifi, then aren't they exposed to the "public" network? That is, if someone is savvy enough to scan for my IP range. Also, to do routing I'd need to connect two ethernet cables between the server and the AP (because you can't do routing/QoS on virtual devices) -- which isn't a problem really; but I'm not sure whether the AP will allow me to separate the public and private LANs. Or, as well as the AP, am I better off getting a wifi-to-ethernet adapter for the server? I could use a wifi USB adapter, but this can be tricky to set up on headless Linux; plus the signal strength is a bit lousy. If this question is a bit vague/spurious in places, please comment and I will explain in more detail.

    Read the article

  • Why do I get error, Invalid command 'PythonHandler'?

    - by nbolton
    I'm trying to deploy a Django application, but I've hit a brick wall. On Debian (latest), I've run these commands so far:

        apt-get install apache2 apache2-doc apache2-mpm-prefork apache2-utils libexpat1 ssl-cert libapache2-mod-python python-django

    I've tried adding the module manually in the Apache 2 config files, but to be honest I'm totally lost. It's totally different to Apache version 1, which I used years ago.

        Syntax error on line 7 of /etc/apache2/sites-enabled/000-default:
        Invalid command 'PythonHandler', perhaps misspelled or defined by a module not included in the server configuration

    I've added the following to my sites-available/default file, between the <VirtualHost> tags:

        <Location "/">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE hellodjango1.settings
            PythonDebug Off
        </Location>

    Here are the tutorials I've used so far, without much luck:
      - Django | How to use Django with Apache and mod_python | Django Documentation
      - How To Install Django On Debian Etch (Apache2/mod_python)
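    The error message points at a module that isn't loaded. As a hedged sketch (standard Debian paths assumed, not taken from the question): libapache2-mod-python ships /etc/apache2/mods-available/python.load, which a2enmod python symlinks into mods-enabled before Apache is reloaded. The .load file contains roughly this directive:

        # /etc/apache2/mods-available/python.load (assumed standard Debian layout)
        LoadModule python_module /usr/lib/apache2/modules/mod_python.so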

    Read the article

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install, I used /var/cache/bind/ but only because I was guided to do so; however I just spotted a pid file in there for this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea. It seems that many admins use the working directory so they don't have to type the full path when declaring a new zone. For example:

        file "db.foobar.com";

    instead of:

        file "/etc/bind/zones/db.foobar.com";

    The former is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones:

        options {
            // directory "/var/cache/bind";
            directory "/etc/bind/zones";
        }

    ... but something tells me this isn't good practice, since the pid file would be created there, I assume (unless it's just in /var/cache/bind by coincidence). I took a look at the manpage but it didn't seem to say what the directory option was for; any ideas exactly what it was designed for?
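    For what it's worth, a hedged sketch of one common arrangement (an assumption, not something the manpage prescribes): leave directory pointing at /var/cache/bind so the files BIND writes at runtime (such as the pid file mentioned above) land there, and reference zone files by full path so the configuration stays under /etc:

        options {
            directory "/var/cache/bind";    # working directory for runtime files
        };

        zone "foobar.com" {
            type master;
            file "/etc/bind/zones/db.foobar.com";    # zone data kept with the config
        };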

    Read the article

  • Should websites live in /var/ or /usr/ according to recommended usage?

    - by nbolton
    According to a guide on the Linux directory structure, /usr/ is for application files, and /var/ is for files that change (I assume this means "files that belong to the applications"). Is this correct? If this is the case then I'm a little torn between using either. A website is an application (if it's dynamic, so to speak), but in other cases it is just a collection of files used by Apache. The default www dir lives in /var/www/, so should we follow suit by using /var/websites/ (or something similar), or choose /usr/websites/ since they could be applications? This is a very trivial question, but it's bugging me nonetheless. For our case, I'm leaning toward /usr/web or something like that, since our websites are all applications. Update: This is for our company websites; it's not a shared hosting server, so we don't need to worry about separating them in /home/ or anything like that.

    Read the article

  • What is the most economic hardware that will run Ubuntu? [closed]

    - by nbolton
    I'm looking to buy the cheapest hardware I can find that will run the latest version of Ubuntu desktop on some sort of usable level (e.g. use of web browser, flash, etc). I guess small form factor would be pretty convenient, so I was looking at Acer Aspire Revo for example, but I'm not sure whether or not it's overpriced. I'd rather pay less for the same thing minus brand name if it's available. Any ideas? On further investigation, it seems I'm looking for a nettop.

    Read the article

  • Is there a HP DeskJet 5940 maintenance tool for Windows 7 x64?

    - by nbolton
    According to HP, the printer is not yet supported on Windows 7, so this could well be the answer. The printer is supported by Microsoft's drivers, but as far as I can tell, this is just a basic driver with no frills. I need to clean and align my ink cartridges because the quality right now is terrible. I've tried installing the Vista x64 drivers, but the setup detects that I'm not running Vista and does not allow me to continue. I was hoping there is a hack for this, or something similar. If not, I'll just plug it into my Vista laptop.

    Read the article

  • Is ODBC on Windows 2003 slower than on Windows 7?

    - by nbolton
    I am seeing some MSSQL 2005 performance issues, and I am trying to diagnose the cause. I am using SQL Profiler to gather query execution times. Both the client (using ODBC) and the SQL server are running on Windows 2003. I am also using a Windows 7 client with a different Windows 2003 server to compare results.

        Windows 7 client / Windows 2003 server:
          SQL Management Studio: 393ms
          Through ODBC: 215ms

        Windows 2003 client:
          SQL Management Studio: approx 155ms
          Through ODBC: 3145ms

    ... in both cases, I'm running SQL Management Studio on the client. To me, these figures suggest there's something wrong with the ODBC client on the Windows 2003 server. On Windows 7, I see that the ODBC "SQL Server" driver is version 6.01.7600.16385, but on Windows 2003 it is 2000.86.3959.00 (by default). Could this be the problem? Is it possible to update an ODBC driver?

    Read the article

  • Why am I unable to turn off recursion in ISC BIND?

    - by nbolton
    Here's my named.conf.options file:

        options {
            directory "/var/cache/bind";

            dnssec-enable yes;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };

            # disable recursion
            recursion no;
        };

    I've tried adding allow-recursion { "none"; } before recursion but this also has no effect; I'm testing it by using nslookup on Windows, and using google.com. as the query (and it returns an IP, so I assume recursion is on). This issue occurs on two servers with similar setups.

    Read the article

  • Is there a SIP provider in the UK which provides the P-Asserted-Identity header?

    - by nbolton
    In the US, Flowroute (a low-cost SIP trunking provider) provides P-Asserted-Identity in the SIP INVITE request header (example screenshots). It also allows you to set the caller ID for outgoing calls, for example by using the following in extensions.conf for Asterisk:

        exten => id,n,Set(CALLERID(all)=123)

    However, in the UK, I've tried a couple of SIP providers and none of them let me do either of those things (see the P-Asserted-Identity header or set the caller ID). Is this because of some sort of restriction in the UK phone networks, or is it only available to really expensive SIP trunking providers?

    Read the article

  • What can cause kernel out_of_memory error?

    - by nbolton
    I'm running Debian GNU/Linux 5.0 and I'm experiencing intermittent out_of_memory errors coming from the kernel. The server stops responding to all but pings, and I have to reboot the server.

        # uname -a
        Linux xxx 2.6.18-164.9.1.el5xen #1 SMP Tue Dec 15 21:31:37 EST 2009 x86_64 GNU/Linux

    This seems to be the important bit from /var/log/messages:

        Dec 28 20:16:25 slarti kernel: Call Trace:
        Dec 28 20:16:25 slarti kernel: [<ffffffff802bedff>] out_of_memory+0x8b/0x203
        Dec 28 20:16:25 slarti kernel: [<ffffffff8020f825>] __alloc_pages+0x245/0x2ce
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021377f>] __do_page_cache_readahead+0xc6/0x1ab
        Dec 28 20:16:25 slarti kernel: [<ffffffff80214015>] filemap_nopage+0x14c/0x360
        Dec 28 20:16:25 slarti kernel: [<ffffffff80208ebc>] __handle_mm_fault+0x443/0x1337
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026766a>] do_page_fault+0xf7b/0x12e0
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026ef17>] monotonic_clock+0x35/0x7b
        Dec 28 20:16:25 slarti kernel: [<ffffffff80262da3>] thread_return+0x6c/0x113
        Dec 28 20:16:25 slarti kernel: [<ffffffff8021afef>] remove_vma+0x4c/0x53
        Dec 28 20:16:25 slarti kernel: [<ffffffff80264901>] _spin_lock_irqsave+0x9/0x14
        Dec 28 20:16:25 slarti kernel: [<ffffffff8026082b>] error_exit+0x0/0x6e

    Full snippet here: http://pastebin.com/a7eWf7VZ

    I thought that perhaps the server was actually running out of memory (it has 1GB physical memory), but my Cacti memory graph looks OK to me. But strangely, the load graph goes through the roof shortly before the kernel crashes. What logs can I look at for more info?

    Update: Maybe noteworthy - the CPU percentage and network traffic graphs were both normal at the time of the crash. The only abnormality was the average load graph.

    Read the article

  • Is there a USB ethernet (wired) adapter that is really compatible with Windows 7 64-bit?

    - by nbolton
    I've checked the Windows 7 compatibility site, and it lists a fair few USB ethernet (wired, not wireless) adapters that should work with Windows 7 64-bit. However, whenever I Google for the model number and Windows 7 64-bit, there are many forum posts claiming that the devices actually don't work with 64-bit (but do work with 32-bit). I've also found this with the LUPO USB ethernet adapter; it works with 32-bit Win7, but not 64-bit (no drivers available). So is there anyone out there who is 100% certain, and has actually used successfully, a USB ethernet adapter that works with 64-bit Win7?

    Read the article

1 2 3  | Next Page >