Search Results

Search found 31717 results on 1269 pages for 'response write'.


  • Filtering content from response body HTML (mod_security or other WAFs)

    - by Bingo Star
    We have Apache on Linux with mod_security as the Web App Firewall (WAF) layer. To prevent content injections, we have some rules that basically disable a page containing some text patterns from showing up at all. For example, if an HTML page on webserver has slur words (because some webmaster may have copied/pasted text without proofreading) the Apache server throws a 406 error. Our requirement now is a little different: we would like to show the page as regular 200, but if such a pattern is matched, we want to strip out the offending content. Not block the entire page. If we had a server side technology we could easily code for this, but sadly this is for a website with 1000s of static html pages. Another solution might have been to do a cronjob of find/replace strings and run them on folders en-masse, maybe, but we don't have access to the file system in this case (different department). We do have control over WAF or Apache rules if any. Any pointers or creative ideas?
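
    A possible direction, not taken from the question: Apache's stock mod_substitute output filter can rewrite response bodies on the fly, so the offending strings could be masked at the Apache/WAF layer without touching the static files. This is only a sketch; the patterns and replacement text below are made up, and the module path depends on the distribution.

        # Hypothetical sketch: mask patterns in outgoing HTML with mod_substitute (Apache 2.2.7+)
        LoadModule substitute_module modules/mod_substitute.so

        <Location "/">
            AddOutputFilterByType SUBSTITUTE text/html
            # i = case-insensitive, n = treat the pattern as a fixed string rather than a regex
            Substitute "s/badword1/[removed]/ni"
            Substitute "s/badword2/[removed]/ni"
        </Location>

    The same patterns already used in the blocking rules could be reused here, with the 406-style deny rules relaxed for the matched pages.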

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but can at least show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!
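
    For reference, a minimal query of the kind that might go into the Query pane; the table and column names below are assumed from the AdventureWorks tabular sample and are not taken from the article:

        EVALUATE
        ADDCOLUMNS (
            VALUES ( 'Product'[Color] ),    -- one row per product color
            "Sales Amount", CALCULATE ( SUM ( 'Internet Sales'[Sales Amount] ) )    -- total sales for that color
        )
        ORDER BY 'Product'[Color]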

    Read the article

  • Python response parse [migrated]

    - by Pavel Shevelyov
    When I send some data to a host:

        r = urllib2.Request(url, data = data, headers = headers)
        page = urllib2.urlopen(r)
        print page.read()

    I get something like this:

        [{"command":"settings","settings":{"basePath":"\/","ajaxPageState":{"theme":"spsr","theme_token":"kRHUhchUVpxAMYL8Y8IoyYIcX0cPrUstziAi8gSmMYk","css":[]},"ajax":{"edit-submit":{"callback":"spsr_calculator_form_ajax","wrapper":"calculator_form","method":"replaceWith","event":"mousedown","keypress":true,"url":"\/ru\/system\/ajax","submit":{"_triggering_element_name":"submit"}}}},"merge":true},{"command":"insert","method":null,"selector":null,"data":"\u003cdiv id=\"calculator_form\"\u003e\u003cform action=\"\/ru\/service\/calculator\" method=\"post\" id=\"spsr-calculator-form\" accept-charset=\"UTF-8\"\u003e\u003cdiv\u003e\u003cinput id=\"edit-from-ship-region-id\" type=\"hidden\" name=\"from_ship_region_id\" value=\"\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_build_id\" value=\"form-0RK_WFli4b2kUDTxpoqsGPp14B_0yf6Fz9x7UK-T3w8\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_id\" value=\"spsr_calculator_form\" \/\u003e\n\u003c\/div\u003e\n\u003cdiv class=\"bg_p\"\u003e \n\u0421\u0435\u0439\u0447\u0430\u0441 \u0412\u044b... bla bla bla

    but I want to have something like this:

        <html><h1>bla bla bla</h1></html>

    How can I do it?
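
    The body shown above is a Drupal-style AJAX response: a JSON array of commands, and the markup sits in the "data" field of the "insert" command. A minimal sketch of pulling that HTML out, assuming the response structure shown above (the variable names are mine):

        import json
        import urllib2

        r = urllib2.Request(url, data=data, headers=headers)
        body = urllib2.urlopen(r).read()

        commands = json.loads(body)          # list of command objects
        for command in commands:
            if command.get("command") == "insert":
                html = command["data"]       # the HTML fragment, already unescaped by json.loads
                print html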

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for fully ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: CloudTran,data grid,M,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't work out what criteria an ISA server uses to decide whether a cached file is still valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide for how long to cache this response?

    Read the article

  • Invalid Html Response and JS Errors when you open your Application in Visual Studio 2013

    - by imran_ku07
    I was working on an application which uses Telerik controls. The application was working fine for a while. Suddenly, the application stopped working: a lot of my application pages became very, very ugly. I found JavaScript errors in every browser's console. When I checked the page view source, the generated HTML was messy and invalid. This was only happening on my local machine. If someone else on my network accessed my application pages, he would get the correct HTML and no JavaScript errors. My mind was blown because the same page was generating invalid HTML (and JavaScript errors) when I accessed it using a local browser, but generated correct HTML (and no JavaScript errors) when someone else accessed my application page remotely. Then I realized that the only change I had made recently was opening my application in Visual Studio 2013 RTM, which I installed a few days ago. I closed Visual Studio 2013, and everything worked like a charm. Then I became 100% sure that this was happening due to the new Visual Studio 2013 feature called Browser Link. I just opened the application again and added this in web.config, and everything became fine. Happy coding :)

        <add key="vs:EnableBrowserLink" value="false" />
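
    For anyone unsure where that key goes: it belongs under appSettings in the site's web.config, roughly like this. Only the single add element comes from the article; the surrounding elements are the standard web.config skeleton.

        <configuration>
          <appSettings>
            <!-- Turns off Visual Studio 2013 Browser Link script injection for this site -->
            <add key="vs:EnableBrowserLink" value="false" />
          </appSettings>
        </configuration>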

    Read the article

  • Big problem: Complete computer crash (no response when booted)

    - by Feratile
    I need your help: I installed Ubuntu 12.04 on my computer (I removed the complete existing system and installed Ubuntu instead). It said that the bootloader installation failed, so I chose the only partition that existed again. Installation successful. Restart your computer. I did this. Then the screen went black when it rebooted (only a flickering cursor appears). No ISOs will boot either. The BIOS Boot Device Priority says: 1st Boot Device [SATA:PM-SAMSUNG HM]. That is all I can say, please help me!

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for fully ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected]. WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Duplicate ping response when running Ubuntu as virtual machine (VMWare)

    - by Stonerain
    I have the following setup:

        My router - 192.168.0.1
        My host computer (Windows 7) - 192.168.0.3

    And Ubuntu is running as a virtual machine on the host. The VMWare network setting is Bridged mode. I've modified the Ubuntu network settings in /etc/network/interfaces and set the following config:

        iface eth0 inet static
        address 192.168.0.220
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

    Internet works correctly, I can install packages. But it gets weird if I try to ping something. I get this:

        PING belpak.by (193.232.248.80) 56(84) bytes of data.
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=250 time=17.0 ms
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=249 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=248 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=247 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=246 time=17.0 ms (DUP!)
        ^CFrom 192.168.0.1 icmp_seq=2 Time to live exceeded

        --- belpak.by ping statistics ---
        2 packets transmitted, 1 received, +4 duplicates, +6 errors, 50% packet loss, time 999ms
        rtt min/avg/max/mdev = 17.023/17.041/17.048/0.117 ms

    I think even more interesting are the results of pinging the router itself:

        stonerain@ubuntu:~$ ping 192.168.0.1 -c 1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        From 192.168.0.3: icmp_seq=1 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=6.64 ms

        --- 192.168.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 6.644/6.644/6.644/0.000 ms

    But if I set -c 2:

        ...
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=253 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        From 192.168.0.3: icmp_seq=2 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=2 ttl=254 time=7.87 ms

        --- 192.168.0.1 ping statistics ---
        2 packets transmitted, 2 received, +256 duplicates, 0% packet loss, time 1002ms
        rtt min/avg/max/mdev = 6.666/10.141/13.556/2.410 ms

    Pinging the host machine, on the other hand, works absolutely correctly: no DUPs, no errors. What seems to be the problem and how can I fix it? Thank you.

    Read the article

  • Running Ubuntu Server from a USB key / thumb drive (being mindful of flash's write limitations)

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP Proliant Microserver with Ubuntu Server and a ZFS RAID-Z array for data. I settled on this configuration after trying and regretfully rejecting FreeNAS because the Logitech Media Server (LMS) software isn't available on the AMD64 flavour of this platform and because I think Debian/Ubuntu server is a better future-proof platform. I considered Open Media Vault, but concluded that it isn't quite yet ready for my purposes. That said, FreeNAS does include the option to run itself off a 2GB+ flash device like USB key or thumb drive. Apparently FreeNAS is mindful of the write limitations of flash devices and so creates virtual disks for running the OS, writing only the required configuration information back to flash. This would give me an extra data drive slot. Q: Can Ubuntu Server be configured sensibly to run off a flash device such as a USB key/thumb drive? If so, how? The write limitations of flash should be accounted for.
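
    Not from the question, but one common way to limit writes if the OS does run from a USB key is to mount with noatime and keep the chattiest directories in RAM. A sketch of the /etc/fstab changes; the root UUID and the tmpfs sizes are placeholders to adjust:

        # root filesystem on the USB key: skip access-time updates
        UUID=replace-with-your-root-uuid  /         ext4   noatime,errors=remount-ro  0  1

        # keep high-churn paths in RAM so they never touch the flash
        tmpfs   /tmp        tmpfs   defaults,noatime,mode=1777,size=100m   0  0
        tmpfs   /var/tmp    tmpfs   defaults,noatime,size=50m              0  0
        tmpfs   /var/log    tmpfs   defaults,noatime,size=50m              0  0

    The trade-off is that anything in tmpfs, including logs, disappears on reboot, so this buys flash lifetime at the cost of persistence.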

    Read the article

  • Can't write to disk

    - by nofacts
    I have seen questions similar to this, but the answers are either beyond me or the situation doesn't quite match mine. Would appreciate some direction. I recently installed Ubuntu 12.04 LTS. The OS is on a disk formatted as ext4. I added another disk to the system and formatted it as W95 FAT 32 (LBA) (0x0c). I did this because I am moving from Windows to Linux, will be needing to go back and forth with data for a while, and might need to move the disk to a Windows machine. There may have been a better format to use, but if so I didn't know any better. I was able to transfer data from an external drive to this FAT32 drive with no problem. Now, though, when I try to create a new folder or write a file to the disk I get a message that the disk is read-only. If I go to the properties, permissions for the disk, for Folder Access it says 'create and delete files'. If I try to change File Access underneath to 'read and write', either nothing happens or I get a message it can't be done. Thank you for any help.
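
    Not part of the question, but for what it's worth: FAT32 has no Unix ownership, so write access is decided entirely by the mount options. A hedged /etc/fstab line that gives a desktop user full access; the device name, mount point, and uid/gid of 1000 are assumptions to adjust for your system:

        # mount the FAT32 data disk owned by the first desktop user (uid/gid 1000)
        /dev/sdb1   /media/data   vfat   uid=1000,gid=1000,umask=022,utf8   0   0

    If the filesystem itself was left in a dirty state, running dosfsck on the unmounted device may also be worth a try.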

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write SSDs for the ZFSSA have been updated. Much faster now for the same price. Sweet. The new write-flash SSDs have a new part number of 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased in IOPS from 6,000 to 11,000, and in throughput from 200MB/s to 350MB/s. Also, you can now add six SAS HBAs (up from 4) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 Petabytes. Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules: You can have 6 of any one kind of card (like six 10GigE cards), but you only really get 8 slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you are going to have two different speeds of drives, in other words you want to mix 15K speed and 7,200 speed drives in the same system, I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.

    Read the article

  • SAML Request / Response decoding.

    - by Shawn Cicoria
    When you’re working with Web SSO integration, sometimes it’s helpful to be able to decode the tokens that get passed around via the browser from the various participants in the trust (RP, STS, etc.). With SAML tokens, sometimes they’re simply base64-encoded when they’re in the POST body; other times they’re part of the query string, in which case they end up being deflated, base64-encoded, then URL-encoded. I always end up putting together some simple tool that does this for me, so this is an effort to make it more permanent. It’s a simple WinForms application that uses NetFx 4.0. Download
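
    A minimal Python sketch of the same decoding steps (not the WinForms tool from the post): redirect-binding tokens are URL-decoded, base64-decoded, then inflated with raw DEFLATE, while POST-binding tokens are usually just base64-encoded XML.

        import base64
        import urllib.parse
        import zlib

        def decode_redirect_token(encoded):
            """SAMLRequest/SAMLResponse taken from a redirect query string."""
            raw = base64.b64decode(urllib.parse.unquote(encoded))
            # wbits=-15 means raw DEFLATE (no zlib header), as the SAML bindings use
            return zlib.decompress(raw, -15).decode("utf-8")

        def decode_post_token(encoded):
            """SAMLResponse taken from a POST body is plain base64-encoded XML."""
            return base64.b64decode(encoded).decode("utf-8")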

    Read the article

  • Collision rectangle response

    - by dotty
    I'm having difficulties getting a moveable rectangle to collide with more than one rectangle. I'm using SFML and it has a handy function called Intersect() which takes 2 rectangles and returns the intersections. I have a vector full of rectangles which I want my moveable rectangle to collide with. I'm looping through this using the following code (p is the moveable rectangle). IsCollidingWith returns a bool but also uses SFML's Intersect to work out the intersections.

        for (unsigned i = 0; i != testRects.size(); i++) {
            if (p.IsCollidingWith(testRects[i])) {
                p.Collide(testRects[i]);
            }
        }

    and the actual Collide() code:

        void gameObj::collide( gameObj collidingObject ){
            printf("%f %f\n", this->colliderResult.width, this->colliderResult.height);
            if (this->colliderResult.width < this->colliderResult.height) {
                // collided on X
                if (this->getCollider().left < collidingObject.getCollider().left ) {
                    this->move( -this->colliderResult.width , 0);
                }else {
                    this->move( this->colliderResult.width, 0 );
                }
            }
            if(this->colliderResult.width > this->colliderResult.height){
                if (this->getCollider().top < collidingObject.getCollider().top ) {
                    this->move( 0, -this->colliderResult.height);
                }else {
                    this->move( 0, this->colliderResult.height );
                }
            }
        }

    and the IsCollidingWith() code is:

        bool gameObj::isCollidingWith( gameObj testObject ){
            if (this->getCollider().intersects( testObject.getCollider(), this->colliderResult )) {
                return true;
            }else {
                return false;
            }
        }

    This works fine when there's only 1 Rect in the scene. However, when there's more than one Rect it causes issues when working out 2 collisions at once. Any idea how to deal with this correctly? I have uploaded a video to youtube to show my problem. The console on the far-right shows the width and height of the intersections. You can see on the console that it's trying to calculate 2 collisions at once; I think this is where the problem is being caused. The youtube video is at http://www.youtube.com/watch?v=fA2gflOMcAk. Also, this image seems to illustrate the problem nicely. Can someone please help, I've been stuck on this all weekend!
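
    One common way to handle the two-rectangle case, sketched here in Python rather than SFML/C++ and not taken from the question: resolve one overlap at a time along the axis of least penetration and re-test the remaining rectangles after each push-out, instead of applying two corrections computed from the same starting position. The intersects and overlap helpers below are assumed, not SFML calls.

        def resolve_collisions(player, rects, max_passes=4):
            """Push `player` out of overlapping rects one at a time.

            Re-testing after each correction avoids applying two stale
            corrections computed from the same starting position.
            """
            for _ in range(max_passes):
                # `intersects` and `overlap` are hypothetical helpers on the objects
                hit = next((r for r in rects if player.intersects(r)), None)
                if hit is None:
                    return                      # no overlaps left
                ox, oy = player.overlap(hit)    # penetration depth on each axis
                if ox < oy:
                    # smaller push on X: resolve horizontally
                    player.x += -ox if player.x < hit.x else ox
                else:
                    # smaller push on Y: resolve vertically
                    player.y += -oy if player.y < hit.y else oy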

    Read the article

  • Response: Agile's Second Chasm

    - by Malcolm Anderson
    William Pietri over at Agile Focus has written an interesting article entitled "Agile’s Second Chasm (and how we fell in)", in which he talks about how agile development has fallen into a common trap where large companies are now spending a lot of money hiring agile (Scrum) consultants just so that they can say they are agile, but all the while avoiding any change that is required by Scrum. It echoes the question that I've been asking for a while, "Can a Fortune 500 company actually do agile development?" I'm starting to think that the answer is "usually not." William asks three questions at the end of his article that I will answer here. 1) Have I seen agile development brought in and then preemptively customized (read: made into ScrummerFall)? Yes. Scrum is hard and disruptive. It's a spotlight on company dysfunction. In a low-trust environment like most Fortune 500 companies, Scrum will be subverted by anyone who has ever seen "transparency" translate into someone being laid off. 2) If I had to do it all over again, would I change anything? No, this is a natural progression, but the agile principles are powerful enough that the companies that don't adopt them will no longer be competitive and will start to fail. 3) Is this situation solvable? I think it is. I think that one of the issues is that you often see companies implementing Scrum but avoiding the agile engineering practices. I believe that you cannot do one without the other. Scrum keeps the ship sailing in smooth deep waters. The agile engineering practices keep the engine running smoothly and cleanly. If you implement agile engineering practices without Scrum, you run the risk of ending up with a great running piece of software that is useful to no one. On the other hand, if you implement cargo-cult Scrum without the agile engineering practices, you end up (especially in a Fortune 500 company) being steered in the right direction, but with your development practices coming to a dead halt because you have code that cannot keep up with the changes in requirements. If you are trying to do Scrum, make sure that you hire some agile engineering coaches, or else you may find your development engines grinding to a dead halt in the middle of the open ocean.

    Read the article

  • Tuning WebServer Response -

    - by Vedran Wex Maricevic
    I have this same question on StackOverflow and I was advised to ask it here, hoping for more information. Here is the question: I am in a rather unfavorable situation. I have an AspDotNetStorefront e-commerce application and a search addon called VibeTrib. I don't have source code for either of those. The store that runs on Storefront and VibeTrib has close to 250k products. Also we have lots of filters. I spoke to VibeTrib reps, and they want extra money so they could optimize the queries that they use. The money they require is not a big deal, but the problem is I don't trust them anymore. What we got is much different than what is being advertised. To cut the long story short: I am running the store on Amazon AWS now, and regardless of what DB (MsSQL 2012) server I set up (I tried 32GB RAM monster instances) it is slow. The Ajax search uses full-text search and displays search keywords relatively fast, but once the search is performed (to display all results) it is still slow!!! Is there something I could do to accelerate the speed on my own end? I do have full control over the EC2 instance (web server is Windows Server 2012 and IIS 8). Can I set IIS to step in for the search and cache some of it? I was hoping to cache at least some of the most common words. My best bet is IIS 8 :) Is there any help in my case? Thanks

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile-based) platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. There are 2 forms of collision between player entities as far as I can tell: either the two players (human controlled) are hitting each other side-to-side (i.e. pushing against one another), or one player has jumped on the head of the other player (naturally, if I wanted to expand this to player vs enemy, the effects would be different, but the types of collisions would be identical, just the reaction should be a little different). In my code I believe I've got the side-to-side code working: if two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is, if the two players are currently pushing up against each other, and one player jumps, then at one point as they're jumping, the height-difference threshold that counts as a "land on head" is passed and then it registers as a jump. As a life-long player of 2D Mario Bros style games, this feels incorrect to me, but I can't quite figure out how to solve it. My code (it's really Objective-C but I've put it in pseudo C-style code just to be simpler for non-ObjC readers):

        void checkCollisions() {
            // For each entity in the scene, compare it with all other entities (but not with one it's already compared against)
            for (int i = 0; i < _allGameObjects.count(); i++) {
                // GameObject is an Entity
                GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i);

                // Don't check against yourself or any previous entity
                for (int j = i+1; j < _allGameObjects.count(); j++) {
                    GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j);

                    // Get the collision bounds for both entities, then see if they intersect
                    // CGRect is a C-struct with an origin Point (x, y) and a Size (w, h)
                    CGRect firstRect = firstGameObject.collisionBounds();
                    CGRect secondRect = secondGameObject.collisionBounds();

                    // Collision of any sort
                    if (CGRectIntersectsRect(firstRect, secondRect)) {
                        ////////////////////////////////
                        // Check for jumping first (???)
                        ////////////////////////////////
                        if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) {
                            // the top entity could be pretty far down/in to the bottom entity....
                            firstGameObject.didLandOnEntity(secondGameObject);
                        } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) {
                            // second entity was actually on top....
                            secondGameObject.didLandOnEntity(firstGameObject);
                        } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) {
                            // Hit from the RIGHT
                            CGRect intersection = CGRectIntersection(firstRect, secondRect);
                            // The NUDGE just offsets either object back to the left or right
                            // After the nudging, they are exactly pressing against each other with no intersection
                            firstGameObject.nudgeToRightOfIntersection(intersection);
                            secondGameObject.nudgeToLeftOfIntersection(intersection);
                        } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) {
                            // hit from the LEFT
                            CGRect intersection = CGRectIntersection(firstRect, secondRect);
                            secondGameObject.nudgeToRightOfIntersection(intersection);
                            firstGameObject.nudgeToLeftOfIntersection(intersection);
                        }
                    }
                }
            }
        }

    I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could happen from an angle, instead of only if the falling player had been at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new for me). Thanks in advance!
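
    One technique that usually fixes the jump-next-to-someone case, sketched in Python and not taken from the question: classify the contact using where the falling entity was on the previous frame, rather than a height-difference threshold on the current frame. A landing then only counts if the jumper's feet were at or above the other entity's head before this frame's movement.

        def classify_contact(first, second):
            """Decide between 'landed on head' and 'side push' for two overlapping boxes.

            Assumes a y-up coordinate system, as in the question's use of origin.y.
            `top`, `bottom` and `prev_bottom` are hypothetical helpers: the current
            top/bottom edges and the bottom edge before this frame's movement.
            """
            if first.prev_bottom >= second.top and first.bottom <= second.top:
                return "first landed on second"   # first's feet crossed second's head this frame
            if second.prev_bottom >= first.top and second.bottom <= first.top:
                return "second landed on first"
            return "side push"                    # overlapping, but nobody crossed a head boundary

    Entities that were already overlapping horizontally never cross the head boundary mid-jump, so the side-to-side push keeps applying until one of them actually falls onto the other from above.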

    Read the article

  • Web programming, standard way to deal with a response that takes time to complete

    - by wobbily_col
    With normal form submission I use the pattern Post / Redirect / Get, when processing the forms. I have a database application built with Django. I want to allow the users to select a number of items from the database, then launch a computationally intensive task based on those items. I expect the task to take between 10 minutes and 2 hours to complete. Is there a standard approach to dealing with requests like this (i.e. that don't return immediately)? Ideally there would be some way to display the progress.
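
    The question is open-ended, but the pattern most Django projects reach for is a task queue plus polling: the POST enqueues the job and redirects immediately, and the page then polls a status endpoint for progress. A minimal sketch using Celery; Celery is my suggestion rather than something named in the question, and names like process_items and do_expensive_work are made up.

        # tasks.py
        from celery import shared_task

        @shared_task(bind=True)
        def process_items(self, item_ids):
            # long-running work; report progress so the UI can draw a bar
            total = len(item_ids)
            for i, item_id in enumerate(item_ids, start=1):
                do_expensive_work(item_id)                      # hypothetical helper
                self.update_state(state="PROGRESS",
                                  meta={"done": i, "total": total})
            return {"done": total, "total": total}

        # views.py
        from celery.result import AsyncResult
        from django.http import JsonResponse
        from django.shortcuts import redirect

        def start_job(request):
            item_ids = request.POST.getlist("items")
            result = process_items.delay(item_ids)              # returns immediately
            return redirect("job_status", task_id=result.id)    # Post/Redirect/Get still applies

        def job_status(request, task_id):
            result = AsyncResult(task_id)
            info = result.info if isinstance(result.info, dict) else {}
            return JsonResponse({"state": result.state, "info": info})

    In practice the redirect target would be a normal HTML status page whose JavaScript polls job_status every few seconds to display the progress.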

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see if the issue is network latency, or server performance. Has anyone encountered any discussion of ping time and perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmmm, ping is over 200, it might be network problems". Ideally, this lives in a script on the user's machine so that we can see the latency that they are seeing... (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.
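
    Not an answer from the article, but the kind of quick first check the question describes can be scripted with nothing more than ping and curl: compare the raw network round-trip time with the time the server takes to start returning a page. The host, URL, and 200 ms threshold below are placeholders.

        #!/bin/bash
        # Rough rule-of-thumb check: is it the network or the server?
        HOST="app.example.com"
        URL="https://app.example.com/"

        # average ICMP round-trip time in ms over 5 pings
        avg_rtt=$(ping -c 5 -q "$HOST" | awk -F'/' 'END {print $5}')

        # time to first byte for one page request, converted to ms
        ttfb=$(curl -s -o /dev/null -w '%{time_starttransfer}' "$URL" | awk '{print $1 * 1000}')

        echo "avg ping rtt:        ${avg_rtt} ms"
        echo "time to first byte:  ${ttfb} ms"

        # crude threshold from the question's rule of thumb
        if awk -v r="$avg_rtt" 'BEGIN {exit !(r > 200)}'; then
            echo "Hmmmm, ping is over 200 ms - suspect the network first."
        else
            echo "Ping looks fine - look at the server/application side."
        fi

    A high ping with a proportionally high time to first byte points at the network; a low ping with a slow first byte points at the server or application.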

    Read the article

  • SharePoint 2010 slow page response time suddenly !

    - by H(at)Ni
    Hello, one of my customers faced a problem where their SharePoint portal suddenly started loading extremely slowly, much slower than usual. After some basic troubleshooting I did not find anything suspicious in the ULS logs, IIS logs or even the event logs. After that, I came to the part that I like most, which is capturing a memory dump of the IIS process and analyzing the running threads. I searched for common mistakes like looping over a large list or calling a remote web service, but couldn't find any. After a deep analysis of the memory dump (which was done by an Escalation Engineer for SharePoint), it turned out that the farm root certificate was missing, so the server was trying to validate it over the internet every time a user requested a page. This was the resolution: http://support.microsoft.com/kb/2625048 Cheers,

    Read the article
