Search Results

Search found 1783 results on 72 pages for 'computation theory'.


  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering

    Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well-known theory. However, I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need to. Could this be fast enough for real-time effects?

    Rendered physics

    I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw in a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method.

    This could be extended in a few ways:

      - For larger (non-bullet) objects you render a larger portion of the screen.
      - If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well.
      - You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
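
    As a rough illustration of the color-ID hit detection described above (one color per object, read back a single pixel), here is a minimal Python sketch of the ID-to-color packing; the GL render and read-back themselves are left out, and the object ID used is made up:

        def id_to_color(object_id):
            """Pack an object ID into an RGB triple (supports up to 2**24 - 1 objects)."""
            assert 0 <= object_id < (1 << 24)
            return (object_id >> 16) & 0xFF, (object_id >> 8) & 0xFF, object_id & 0xFF

        def color_to_id(rgb):
            """Recover the object ID from the pixel read back from the 1x1 render."""
            r, g, b = rgb
            return (r << 16) | (g << 8) | b

        if __name__ == "__main__":
            bullet_hit_pixel = id_to_color(4242)   # pretend this came from the pixel read-back
            print(color_to_id(bullet_hit_pixel))   # -> 4242, the object the bullet hit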


  • How to be successful at BDD specification workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident about our scenario coverage (she had a feeling that we could have missed other important stuff) but, more importantly, also felt that this workshop was a waste of time, as she could have figured out all these scenarios by herself, and in a shorter period of time.

    So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/definition-of-done vision.

    But in practice, we still think it is more cost-effective to first have the BA work on her own on the examples and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterwards to double-check. We don't think that simple brainstorming/deliberate discovery is enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of work. What we do is review what she wrote and see whether we have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she might have missed).

    This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we had done... sure, we could have spent more time on it, but honestly most people are exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?


  • 2d, Top-down map with different levels

    - by Ktash
    So, I'm creating a 2D, top-down, sprite-based (tiled) game, and right now I'm working on maps (well, a map editor at the moment, but it will be creating my maps, so basically the same thing).

    The scenario

    I'm thinking about efficiency and creating a map in pieces. In each piece, I plan on having 'layers'. Basically, I plan on rendering it down to a 'below hero' level and an 'above hero' level, with the hero rendered in between, obviously. There will likely also be an 'on level with hero' layer, but I'm not quite there yet. I'm not even worrying about events or interaction yet - just looking to get a hero on the screen. Now, for movement, I obviously need to know what tiles can be moved to and in what direction. My plan at the moment is each tile getting 8 bits (4 'can enter in direction' bits, 4 'can leave in direction' bits). This will allow me to limit movement and even allow one-way directional movement.

    The dilemma

    This works great for a lot of scenarios. It allows me to store a map in essentially 3 layers and a string, and gives me flexibility going forward. However, I can't create maps that themselves have layers. A good example is a bridge where the user can go under or over the bridge without invalid moves being allowed. I can't create a platform and allow movement underneath. These are things I would like to be able to include in my game.

    My idea

    In theory, I could allow multiple hero layers and then allow multiple sets of 'below' and 'above' layers (or sandwich layers). But this complicates my system, and makes movement between maps potentially tricky (if the hero is on the third layer at the edge of a map, but that corresponds to the second layer on the other map, how can I allow or disallow movement?).

    My question

    Is there a better way to manage multiple maps with multiple levels like this, where a user's level may be 'connected' to different levels on different maps? Or even... am I doing this the hard way? Is there a more standard way to handle top-down 2D tiled maps that I am just not aware of?

    Things to note or that might be helpful:

      - This will be done in JavaScript (transferred around in JSON).
      - State will need to be transferred quickly, so a map ID and x/y/direction should be enough to get me a boolean 'can move' value.
      - Maps will not be standard-sized (though they will be a certain number of tiles).
      - I'm making an editor tool so that I can have others help, so something that I can create in a tool would be helpful.
      - 'Teleportation' locations will likely need to exist to get into building maps and to and from different map sets (which will not necessarily be connected), but these have not been created yet (I'm lumping them in with events at the moment).
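
    As a toy illustration of the per-tile "4 enter bits + 4 leave bits" scheme described above - sketched in Python for brevity (the bit logic ports directly to the JavaScript the poster plans to use), with an assumed bit layout and made-up example tiles:

        # Direction bit positions (assumed layout: low nibble = "can enter moving in
        # this direction", high nibble = "can leave moving in this direction").
        NORTH, EAST, SOUTH, WEST = 0, 1, 2, 3

        def can_enter(tile_bits, direction):
            """True if the hero may enter this tile while moving in `direction`."""
            return bool(tile_bits & (1 << direction))

        def can_leave(tile_bits, direction):
            """True if the hero may leave this tile while moving in `direction`."""
            return bool(tile_bits & (1 << (direction + 4)))

        def can_move(src_bits, dst_bits, direction):
            """A move succeeds only if the source allows leaving and the destination allows entering."""
            return can_leave(src_bits, direction) and can_enter(dst_bits, direction)

        # Example tiles: one fully open, one a one-way ledge usable only going SOUTH.
        OPEN = 0xFF
        LEDGE = (1 << SOUTH) | (1 << (SOUTH + 4))

        print(can_move(OPEN, LEDGE, SOUTH))  # True
        print(can_move(LEDGE, OPEN, NORTH))  # False - can't leave the ledge northwards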


  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0, 0) with width and height 200, and it is partly off-screen.

    This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512 and render this texture fullscreen, with a lightmask (size 200x200) placed at (0, 0) and at (texW/2, texH/2). It seems like the coordinate system doesn't start at (0, 0): that light overlaps the screen edge, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse).

    So, how is the coordinate system of this offscreen-drawn texture defined? Thanks.
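
    One thing worth checking with a 1024x512 texture overlaid on a differently sized screen is the UV range of the fullscreen quad: if the texture coordinates run 0..1, the texture is stretched non-uniformly, which would turn a circular lightcone into an ellipse. A pure-arithmetic sketch of the unstretched mapping (the 800x480 screen size is an assumption, not taken from the post):

        TEX_W, TEX_H = 1024, 512       # size of the FBO texture (from the post)
        SCREEN_W, SCREEN_H = 800, 480  # assumed device resolution

        def uv_for_screen_pixel(x, y):
            """Texture coordinates that map screen pixel (x, y) to texel (x, y), 1:1.

            Using (0, 0)..(SCREEN_W/TEX_W, SCREEN_H/TEX_H) as the fullscreen quad's
            UV range keeps the overlay unstretched; using the full 0..1 range instead
            stretches the whole 1024x512 texture onto the screen.
            """
            return x / TEX_W, y / TEX_H

        print(uv_for_screen_pixel(SCREEN_W, SCREEN_H))  # UVs for the quad's far corner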


  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on Software Engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional and agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I had learnt on my own and see great value in.

    The course has now just ended and I am very disappointed. I now know one of the reasons why so few people (at least in my region) do Test Driven Development, or apply even basic testing methodologies: the topic was too academic! Yes, you might be able to list 4 different types of black-box test approaches vs. white-box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like.

    Now, what worries me is the following. It took us 6 months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject - in fact they now have a very complex view of what a test is, so complex that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth, and know what it is made of - but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right.

    Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding - I am just disappointed that this does not seem to be happening at the institution that I am currently studying at ;-(

    Please, if you happen to be a lecturer or teacher reading this post - a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!
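
    For readers in the position the post describes, a unit test really can be tiny; a minimal sketch using Python's standard unittest module (the function under test is invented for the example):

        import unittest

        def add_vat(net_price, rate=0.2):
            """Toy function under test: add value-added tax to a net price."""
            return round(net_price * (1 + rate), 2)

        class AddVatTest(unittest.TestCase):
            def test_default_rate(self):
                self.assertEqual(add_vat(100), 120.0)

            def test_zero_price(self):
                self.assertEqual(add_vat(0), 0.0)

        if __name__ == "__main__":
            unittest.main()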


  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst case (XML data contract serialization) and best case (in-memory MSMQ) for performance.

    The nature of the message means that our serialized data ends up at about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing. Performance also suffers when memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior.

    I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queue an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need higher throughput[1]. The concept worked very well in theory, but performance was not up to the task.

    The usage pattern has one service acting as a router, which does all reads. The other services attach information based on their 3rd-party hook and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for a while until the 3rd parties respond appropriately. The services account for this and have appropriate sleeping behaviors; we utilize the priority field of the message for this reason.

    So, my question is: what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question.

    [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While "1000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.
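
    For reference, a toy sketch of the pattern the poster tried with RavenDB - persist the large body in a store and put only an identifier on the queue (often called a "claim check") - using in-process Python stand-ins for the document store and the queue; all names are invented:

        import queue
        import uuid

        document_store = {}          # stand-in for RavenDB / any document store
        work_queue = queue.Queue()   # stand-in for MSMQ; holds only small identifiers

        def enqueue_message(payload):
            """Persist the ~12-15 KB body once, queue only its key."""
            key = str(uuid.uuid4())
            document_store[key] = payload
            work_queue.put(key)
            return key

        def process_next():
            key = work_queue.get()
            payload = document_store[key]   # each service re-reads the body by key
            # ... do the IO-related work on `payload` here ...
            work_queue.task_done()
            return len(payload)

        enqueue_message(b"x" * 14_000)
        print(process_next())  # -> 14000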


  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud.

    I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough.

    Requirements:

      - Each server runs Linux (CentOS 6.3)
      - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
      - Some of the servers are capable of serving static files and Python-driven REST APIs
      - Some of the servers host a Cassandra database cluster
      - One or more of the servers are Redis database servers
      - One or more of the servers are PostgreSQL servers

    Questions:

      - What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with the servers hosting Redis, which need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
      - Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation?
      - What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this?
      - What other things are we missing that we should be concerned about?

    Thanks so much!


  • Poor performance of single-processor 32-bit Windows XP compared to SMP in VBA+Excel

    - by Adam Ryczkowski
    Welcome! On many computers I have experienced poor performance of 32-bit guests running on a 64-bit Linux host (I used only the Debian family). At last I managed to collect benchmark data.

    I made the benchmark by running a custom VBA macro (which we use in our company) that generates a 284-page Word document full of Excel pie charts, tables and comments. The macro is run as a single task (excluding the standard services) on a set of identically configured Windows XP 32-bit systems. I measured the time (in seconds) needed to perform the test.

    The computer (my notebook, an Asus P53E) supports both VT-d extensions and native Windows XP. It has a 2-core processor, each core hyperthreaded, so in total we have 4 mostly independent execution units. I use the latest VirtualBox 4.2 and VMWare Workstation 9.0 for Linux, installed together on the same host (running Mint 13 Maya) but never run simultaneously. The results (in column Time) are accurate to within about ±10%.

    Here are the results (sorry for the format, but I couldn't find a better solution for tables in SO):

        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+
        | Host software | # processor | Windows kernel                                       | IO APIC | VT-x/AMD-V | 2D Video Accel | Time |
        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 1          | 0              | 1139 |
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 1          | 1              | 1050 |
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 0          | 1              | 1644 |
        | VirtualBox    | 4           | ACPI Multiprocessor PC                               | 1       | 1          | 1              | 6809 |
        | VMWare        | 1           | ACPI Uniprocessor PC                                 |         | 1          | 1              | 1175 |
        | VMWare        | 4           | ACPI Multiprocessor PC                               |         | 1          | 1              | 3412 |
        | Native        | 4           | ACPI Multiprocessor PC                               |         |            |                | 1693 |
        | Native        | 1           | Advanced Configuration and Power Interface (ACPI) PC |         |            |                | 1170 |
        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+

    Here are the striking conclusions:

      - Although I've read in the VirtualBox fora about abysmal performance with a 32-bit guest on a 64-bit host, VMWare also has problems compared to a native run, while still being twice as fast(!) as VBox.
      - Although VBA is inherently single-threaded, the Excel calculations, which take much more than half of the total computation time, supposedly aren't. So one would expect some speed gain when running on 2+ cores ("+" for hyperthreading). What we see is a speed loss - and quite a big one too.
      - For VirtualBox, the VT-d extension isn't a big deal.

    Can anyone shed some light on why the single-threaded Windows kernel is so much faster than the SMP one?
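
    To put numbers on those conclusions, the ratios implied by the Time column (simple arithmetic on values copied from the table above):

        times = {
            ("VirtualBox", 1): 1050,   # best of the three single-CPU VirtualBox runs
            ("VirtualBox", 4): 6809,
            ("VMWare", 1):     1175,
            ("VMWare", 4):     3412,
            ("Native", 1):     1170,
            ("Native", 4):     1693,
        }

        print(times[("VirtualBox", 4)] / times[("VMWare", 4)])  # ~2.0: VMWare twice as fast as VBox with 4 CPUs
        print(times[("VMWare", 4)] / times[("VMWare", 1)])      # ~2.9: the 4-CPU VMWare guest ~3x slower than 1-CPU
        print(times[("Native", 4)] / times[("Native", 1)])      # ~1.45: even the native SMP kernel is slower here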


  • Have I bricked my Sun V20z?

    - by David Mackintosh
    I have a small pile of Sun V20z computers. I was trying to update the SP and BIOS firmwares in order to bring them all up to the same standard -- mostly to get the updated (ie actually useful) SP functionality, and figured that I would just do the BIOS while I was at it. For three of the four computers, it worked perfectly. However, after the BIOS update, the fourth system won't boot. I did this:

        batch05-mgmt $ sp get mounts
        Local   Remote
        /mnt    10.16.0.8:/export/v20z
        batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp
        This command may take several minutes. Please be patient.
        Bios started
        Bios Flash Transmit Started
        Bios Flash Transmit Complete
        Bios Flash update Progress: 7
        Bios Flash update Progress: 6
        Bios Flash update Progress: 5
        Bios Flash update Progress: 4
        Bios Flash update Progress: 3
        Bios Flash update Progress: 2
        Bios Flash update Progress: 1
        Bios Flash update complete
        batch05-mgmt $ platform set power state on
        This command may take several minutes. Please be patient.

    After an hour of waiting, it still won't start. The chassis powers on, but beyond the fans spinning up and the hardware POST of the drives, nothing appears to happen. So if I try to re-flash the BIOS (on the theory that maybe something went wrong):

        batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp
        This command may take several minutes. Please be patient.
        Bios started
        Error. The operation timed out.

    Have I bricked it?


  • Why does my Intel HDA onboard sound card not have a "Mix" device / channel?

    - by Hanno Fietz
    I want to be able to record what my sound card outputs on the speakers / headphones. This question is all over the interwebs again and again, and there seem to be two outcomes:

      - in your selection of audio input devices, there's a device called "Stereo Mix", or similar, which is the "loopback" device for audio. Choose that in your recording tool and you're done.
      - there's no such device and only speculative posts about why that may be.

    Now, I'm using ALSA and an Intel HDA chipset on my mainboard under Kubuntu Karmic. I have some 5-10 output channels and "Mic", "Front Mic" and "Line" for input. All of those are available in KMix, Audacity and other software. No "loopback" / "Mix" / whatever.

    Do I have to

      - get some driver / kernel module
      - set up ALSA in some way
      - set up my system configuration in some way
      - use a software solution (such as JACK)?

    I had a look at JACK, and found it rather hard to understand, it's either an expert tool or just clumsy, I couldn't say. At least, I wasn't able to figure out how to achieve what I wanted. One of my problems seems to be that I don't understand where and how the mixing happens. Are there sound cards which just aren't able to do it? Why does the sound card matter at all, since I could in theory grab the data stream at some point before it goes to the hardware, right?


  • Tasks not appearing in Mac Outlook 2011

    - by Tama
    My current workplace uses Macs and my old workplaces used Windows. In my old workplaces I heavily used Outlook's Task functionality to manage my workload. I understand that the Task functionality in Outlook 2011 for Mac is heavily limited, so I was very pleased to find this useful "how-to" on making the most of Tasks.

    My problem is that my tasks don't appear in the Task folder, or anywhere else for that matter. Even if I search for the title of a task I've recently added, I still can't find it.

    After some Googling I found this forum thread that suggests it may be a problem with the Outlook database, which points to a Microsoft KB article. So I went through all of the recommended steps on rebuilding/adding a new identity using the "Microsoft Database Utility" - the theory being that if I create a new identity I can test task creation using a "blank slate" identity. When I change the default identity to my newly created identity using the Microsoft Database Utility (you have to restart the computer), task creation still doesn't work.

    Any ideas appreciated - I really miss the task functionality in Outlook 2010 for Windows.


  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors with Volume Shadow Copy services on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mountpoint in the C:\WINDOWS\Temp\ folders, which I believe is used by VSS to mount a writeable image file. To summarize:

      - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
      - The Virtual Server log reports errors during the Post Backup phase
      - VSS reports errors backing up a mount point of unknown origins
      - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks.

    Here is the writer state:

        Writer name: 'Microsoft Virtual Server 2005 Writer'
        Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
        Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
        State: [11] Failed
        Last error: Retryable error

    From the Virtual Server log:

        Virtual Server - Vss Writer - Event ID: 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log:

        VSS - None - Event ID: 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log:

        Ntfs - Disk - Event ID: 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs or to turn off online backups.

    Has anyone seen this kind of thing before, or can anyone point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace what application is creating that mount point.


  • Setting up Synergy (multi-computer shared input devices), new to Mac

    - by pedalpete
    I've just gotten a Mac and the first thing I'm trying to do in setting it up for dev is to set up Synergy. I've run into countless issues and am making progress, but that "Macs just work" theory is getting tired really fast. I'd prefer running the Mac as the server, as it's a desktop - why use my PC (laptop) as the server?

    So I downloaded Synergy and tried to edit the synergy.conf file - first with TextMate, then with AppleScript. Neither of these applications will open the file, even after changing TextMate to use plain-text format and switching off a few other things to make it into a plain text editor. No good. Uh, isn't that what these script/text editors are for?

    Then I found SynergyKM, which is a package to run Synergy on OS X. It took LOTS of fiddling, but I think I finally kinda got it figured out. I've got my PC and Mac both running Synergy. However, my PC will only connect to Synergy using the IP address. No problem - however, my Mac won't connect to itself as a Synergy server using the IP address. If I use the IP address as a screen in the server configuration file, I get an "Error: unknown screen name mymac".

    I know this shouldn't be super complicated, and this is possibly not the place to find answers about Synergy, but I couldn't find a better place, and there has been some talk of Synergy here. Any chance somebody can help with this?


  • Unable to connect to APNS with java-apns

    - by Mac
    I've got a Java program running on a firewalled server that is intended to send push notifications to my iPhone app by using java-apns. Problem is, whenever I try to send a notification the library fails to connect to the APNS server. From the stack trace, it seems that when creating the required SSL connection, the connection is being refused at some point (a java.net.ConnectException with a detail message of "connection refused" is being thrown when the library calls SSLSocketFactory's createSocket method).

    It would not surprise me at all if the firewall is blocking the connection, but unfortunately as I do not manage the server I am unable to verify that that is indeed the case. The fact that the program works fine from my (non-firewalled) desktop seems to support the theory.

    My question is, does anyone know of any method by which I can find the root cause of the problem, and/or can anyone tell me what I should tell the server admin to change to get things to work (if it is indeed the firewall that's the problem)? For reference, the server is a Linux box and I'm using version 0.1.2 of java-apns.
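
    One way to narrow down whether the firewall is to blame is to test the raw TCP connection from the affected server to the gateway used by the legacy binary APNS protocol (which this era of java-apns speaks): gateway.push.apple.com on port 2195, or gateway.sandbox.push.apple.com for development builds. Apple has since retired that protocol, but the same kind of check applies to whichever host/port the library reports refusing. A hedged Python sketch of that check:

        import socket

        HOSTS = [("gateway.push.apple.com", 2195),
                 ("gateway.sandbox.push.apple.com", 2195)]

        for host, port in HOSTS:
            try:
                # If the firewall blocks the outbound connection this raises
                # ConnectionRefusedError or times out, mirroring the
                # ConnectException seen from java-apns.
                with socket.create_connection((host, port), timeout=5):
                    print(f"{host}:{port} reachable - the firewall is probably not the issue")
            except OSError as exc:
                print(f"{host}:{port} NOT reachable: {exc}")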


  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL, which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1

    This works pretty well, but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2

    If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it will use the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/ - so it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem."

    I have had no such luck finding a force-SSL config option with the Redirect directive, and so have been unable to test this theory.


  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to authoritatively be able to identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports:

        /home

    And the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid result of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this structure member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem  blocks     quota     limit  grace   files       quota       limit  grace
        otherserver:/vol/home0/a/alice
                             12  52428800  52428800              4  4294967295  4294967295
        fileserver:/home/l/lars
                        9353032   9728000  10240000         124018           0           0

    ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check to see if it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
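
    For completeness, this is roughly how the fsid in question can be read from Python (os.statvfs exposes f_fsid from Python 3.7 onwards); on the poster's systems it is exactly this field that comes back as 0:

        import os

        def fs_identity(path):
            """Return the (f_fsid, st_dev) pair for the filesystem backing `path`."""
            vfs = os.statvfs(path)              # f_fsid requires Python >= 3.7
            return vfs.f_fsid, os.stat(path).st_dev

        # If f_fsid were populated, two mounts of the same export should agree on it,
        # even though each NFS mount gets its own st_dev. Paths are the example
        # mountpoints from the question.
        print(fs_identity("/home/lars"))
        print(fs_identity("/home/bob"))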


  • Router slowing my connection?

    - by Roberto
    I have a Linksys WRT54G and I pay for a 12Mbps connection. I've been testing my connection using speedtest.net for many days and always get 8Mbps. I called support and they told me to bypass the router and test. I did, and got 16Mbps (much more than I pay for), so I thought "this guy just changed my speed so he could blame my router" - and he blamed it. But to my surprise, every time I bypass the router I get 16Mbps, and when I use the router I get 8Mbps.

    Is this guy trolling me somehow (configuring the VOIP-modem-stuff to different profiles depending on the MAC address connecting to it) or is my router a POS? How can I find out? I don't know what the thing the router connects to is; it's a kind of VOIP adapter. The link is this one, but unfortunately I don't think you'll understand it because it's in Portuguese. I know they can remotely connect to it - that's the origin of my conspiracy theory :)

    I just tested wired to the router and got 10Mbps (and still 8Mbps on wifi and 16Mbps without the router) O_o I'm 5cm away from my router, so no obstacles to interfere, right?

    ------ UPDATE -------

    It's a WRT54G V8, and I'm using firmware v8.00.7 (will install 8.00.8 tomorrow, but I saw that it's only a minor fix for a UPnP denial-of-service security vulnerability).

    Results:

      - IPerf LAN-LAN: 80Mbps
      - IPerf LAN-WLAN: 19Mbps (therefore we can ignore wireless issues/settings)

    I wasn't able to make the (W)LAN-WAN NAT-enabled test with IPerf; I get a connection refused error. I'm not sure if I did it right: I ran it in server mode, configured the router to forward that port to my IP and tried to connect to my internet IP, which I got from this site. I don't think there is a way to disable NAT using this firmware.

    Question: Let's suppose it's an underpowered hardware issue. Is it right to assume that custom firmware could resolve the issue, because it is possibly better implemented and would make better use of the router's resources? I couldn't find any references pointing to wired performance improvements with the use of custom firmware.


  • Using VLANs/subnetting to separate management from services?

    - by YouAreTheHat
    Background: I recently purchased a server and a managed switch for my home in the hopes of getting more experience and some fun toys to play with. The devices and appliances I either have or plan to have cover a broad spectrum: router, DD-WRT AP, Dell switch, OpenLDAP server, FreeRADIUS server, OpenVPN gateway, home PCs, gaming consoles, etc. I intend to segment my network with VLANs and associated subnets (e.g., VID10 is populated by devices on 192.168.10.0/24). The idea is to secure the more sensitive appliances by forcing traffic through my router/FW.

    Setup: After thinking and planning for some time, I have tentatively decided on 4 VLANs: one for the WAN connection, one for servers, one for home/personal devices, and one for management. In theory, the home VLAN will have limited access to the servers, and the management VLAN will be totally isolated for security.

    Question: Since I want to restrict access to management interfaces, but some appliances have to be accessible to other devices, is it possible/wise to have only management (SSH, HTTP, RDP) available on one VLAN/IP and only services (LDAP, DHCP, RADIUS, VPN) available on another? Is this a thing that is done? Does it gain me the security I think it does, or hurt me in some way?


  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got into Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything.

    What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough yet.

    So, three questions:

      - How do I do this?
      - Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)?
      - Should I virtualize the Server 2008 R2 rig instead, and if so, in what? (Core 2 Duo E6600 with 5 GB of usable RAM)


  • Default browser hangs

    - by Craig Hinrichs
    Intermittent hangs would occur when I used Internet Explorer to open a new main page or new tab to a site I knew would be up. The browser would open, say "Waiting for site example.com" and do nothing more. If I closed the window and reopened it, it would immediately connect. Over time I would have to close and reopen the window to get to the page. This would happen with any page, including Google. I got sick of it and started using Chrome.

    I recently upgraded my anti-virus and am now experiencing the same issue with Chrome. I use AVG for my antivirus. Empirically it seems that if I don't make Chrome my default browser, I don't experience the issue. I tested this theory for over two hours yesterday.

    Possible issues I have found this could be, but have not confirmed yet:

      - MTU settings are not correct.
      - I am infected but my antivirus has not caught it (unlikely but possible).
      - ??

    I would like to think this is related to my antivirus, but I am unsure how to verify that. I don't like the idea of killing my antivirus if #2 is a possibility. I am looking for tips on how I can troubleshoot the possible issues.


  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of my Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications - phpMyAdmin, my own custom-built apps, etc. - leading me to believe that the problem is not specific to the running code.

    Here are some details:

      - Debian Lenny
      - Apache 2.2.9
      - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often and I've been using eAccelerator for months on this install without any problems up until now.

    Has anyone ever come across anything like this? Why would PHP report undefined core functions?


  • What characteristic of networking/TCP causes linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment, but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a 1-minute period, the response from the server is instantaneous. However, if we slowly increase client activity, the latency in server response increases with a linear relationship (each packet will take more time to reach the client with more activity).

    For those wondering, this is NOT app-related, since our logs show that our server is running and responding to requests in under 100ms as desired. The delay starts once the server processes the request, creates the TCP packet and sends it to the client (and not the other way around).

    Architecture

    This new environment runs with a Virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture.

    Theory

    Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
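
    If the buffering theory is worth testing, one cheap experiment is to rule out Nagle-style coalescing on the sockets the websocket server writes to. The post doesn't say what the server stack is, so this is only an illustrative Python sketch of the relevant socket option:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

        # Disable Nagle's algorithm so small websocket frames are sent immediately
        # instead of being coalesced while earlier data is still unacknowledged.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

        print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # -> 1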


  • Why would Windows use slower network interface despite route metrics?

    - by tim11g
    On my previous notebook, the Dell/Broadcom wireless adapter had an option to automatically disable wireless when a wired network is connected, so I never dealt with multiple active interfaces. My current system has an Intel wireless adapter, and they apparently haven't figured out how to turn it off when there is a wired connection. Unless I explicitly remember to disable wireless when docked, the connection is active.

    That shouldn't be a problem (in theory), since the route metric will cause traffic to go over the fastest network (as indicated by the lowest metric in the routing table). Apparently not - I'm running a backup and seeing the throughput at 25Mbps or so (which is consistent with 802.11g) when a perfectly good Gigabit Ethernet interface is also connected.

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.104     10
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.109     25
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306

    Windows has correctly identified the Ethernet interface (.104) and assigned it the lower (preferred) metric. So the Ethernet interface should be used exclusively, right?

    Why is the Ethernet connection not being used? What other factors are involved? (This is with Windows 7, if it makes a difference.)


  • Dual boot NT4 and Windows 98

    - by ItFinallyWorks
    I am trying to dual boot NT4 and Windows 98 SE (don't laugh - old computer). I have seen Microsoft's instructions for doing this, but they limit Windows 98 to a FAT16 partition (NT4's NTLDR doesn't understand FAT32) and therefore only 2GB of disk space. I really need it to have more than that.

    I started with Windows 98 (on the 1st partition), repartitioned the disk, then added NT4 on the 2nd partition. NT4 took over the bootloader (as expected), so NT4 boots, but Windows 98 doesn't. Right now I am working in VMware so I can use nonpersistent hard drives (IDE, like the real computer) to recover from errors easily.

    I've tried using XP's NTLDR, using the instructions here: http://www.nu2.nu/fixnt4/ , but I got weird errors from NT4 and it never really worked. If XP's NTLDR would work, that should be able to boot both OSes.

    I've also tried using GRUB. In theory that should work. In fact, when booting from Super Grub Disk, it does. But as soon as I install GRUB to disk, Windows 98 boots, but NT4 blue-screens at boot with a 0x0000007B INACCESSIBLE_BOOT_DEVICE error (that can be a lot of things; see MS KB 822051). The incantation I'm using for GRUB 1 is:

        rootnoverify (hd0,1)
        makeactive
        chainloader +1
        boot

    So, anybody have some suggestions?


  • SQL Server 2008 cluster hangs when a heavy load is run

    - by Billy OT
    We have a SQL Server 2008 active/active cluster running on a Windows 2008 R2 OS: 14GB RAM, 4x CPU. We have set a ceiling of 12GB for SQL Server.

    We're running an agent job which loads 3 million records into a database. During this load the job fails and the cluster seems to attempt to fail over to the other node, but unsuccessfully, i.e. the cluster address is no longer accessible. We have to manually fail the cluster node back. During the load, viewing Task Manager, we can see that memory usage hits a max of 12.5GB and CPU at times hits 100% on all 4 CPUs, but for the most part fluctuates at an average of about 60%.

    I suppose my question is: will a cluster try to fail over if memory or CPU take a heavy hit? Or am I barking up the wrong tree? Also, any ideas why it wouldn't fully fail over? We've crawled through logs, of which there are a lot, and can't find anything useful. We've also tried recreating the issue, but it ran successfully at a later time. Also, 3 million rows doesn't seem like a lot, but in terms of resources should 14GB RAM and 4x CPU not be sufficient?

    Further information on this: we ran the load again today and corrupted the database! We received the error message: LogWriter: Operating system error 170. It looks like, under the heavy load, the SQL cluster attempted to fail over and in doing so migrated a LUN (or drive), which meant the disk was no longer reachable. (This is just our theory.) The database is now 'suspect' and requires restoration. The 170 error above also indicates that on failing over to the other node, the SQL service could not start as it was already in use, therefore it couldn't fail over fully?? But I'm wondering why it would need to fail over in the first place? My assumptions could be completely wrong on this, so any ideas would be appreciated.

