Search Results

Search found 21563 results on 863 pages for 'game testing'.

Page 527/863

  • Switched to new router and now experiencing lag?

    - by Mr_CryptoPrime
    I switched from a Dynex 802.11b/g router to a Netgear 802.11b/g/n just yesterday. My router is downstairs because the phone line upstairs doesn't work... but my PS3 is still upstairs (SOCOM: Confrontation is the game I'm having issues with). I have done everything I can to make sure the connection is solid and have checked the signal status; it has been as high as 80% and usually lingers at about 60%. I thought about upgrading my bandwidth from 1.5 Mbps to 7 Mbps, but I am guessing something is wrong if it worked fine before? Now the game seems more laggy and my voice chat is choppy. Others seem to receive my voice data fine, because I can hear my own feedback clearly from other players (if you are in close proximity to another player and speak, and their volume is loud enough, sometimes you can hear yourself). I wonder if setting up port forwarding or a DMZ would fix it, but I am not sure and don't know quite how to do it. Has anyone else ever experienced this when switching routers? What did you do to fix it? Thanks!

    Read the article

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for, when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error. DEBUG output created by Wget 1.14 on linux-gnu. <SSL negotiation info stripped out> ---request begin--- GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1 User-Agent: Wget/1.14 (linux-gnu) Accept: */* Host: testing.thesurveylab.net Connection: Keep-Alive ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.0 500 Internal Server Error Date: Wed, 12 Dec 2012 14:53:07 GMT Server: Apache/2.2.3 (CentOS) Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Strict-Transport-Security: max-age=8640000;includeSubdomains X-UA-Compatible: IE=Edge,chrome=1 Content-Length: 5 Connection: close Content-Type: text/html; charset=UTF-8 ---response end--- 500 Internal Server Error Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1 Closed 3/SSL 0x0000000001f33430 2012-12-12 15:53:07 ERROR 500: Internal Server Error.
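
    One way to narrow this down is to make wget's request look more like the browser's and compare what comes back. A rough sketch (the User-Agent string and extra headers are arbitrary examples, and <some-token> stays whatever the real token is):

    <code>
    wget --server-response \
         --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0" \
         --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
         --header="Accept-Language: en-US,en;q=0.5" \
         "https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>"
    </code>

    If the 500 disappears with browser-like headers, the application is branching on something in the request; if it stays, the problem is more likely session or cookie state the browser already has.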

    Read the article

  • Router as primary DNS server, Server as alternate? (or vice versa)

    - by Jakobud
    We have a very small business network, with a typical cable modem hooked into a DD-WRT router. We also run a basic CentOS server that does a variety of things, including acting as the primary DNS server for the office. The reason we need an internal DNS server is that we do a lot of internal web development and use the DNS server to add/remove various local network URLs for internal website testing (like www.testsite.com.local). It's very important for us to be able to add/remove URL aliases in the DNS easily. The problem with this setup is that if we ever need to restart the CentOS server or take it offline for upgrades or whatever, then Internet access for all computers on the network is lost. That's because each computer relies on that DNS server to resolve names, I guess? The router is online all the time and very rarely has to be restarted. It would be nice if we could set up the router to be the primary DNS server but still run DNS on the server, so we could still add our local testing website URLs to the DNS server on CentOS, but also be able to take down the CentOS server without losing Internet access on the network. How would this be set up? Would I simply need to add both the router and server IP addresses to each computer's IP settings? Should the router be the primary DNS server and the server the secondary, or vice versa? Can one of the two serve as a fallback for the other? What (if anything) needs to be configured on the router and the server for them to recognize that the other DNS server exists on the network? Does anyone have any newb-friendly resources for setting up something like this?
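
    For illustration, here is one way the client side could look if both servers are handed out (the addresses are made up; on Windows clients the same two entries would go in the adapter's preferred/alternate DNS fields, and the router's DHCP server can hand both out automatically):

    <code>
    # /etc/resolv.conf on a client machine - a sketch, not a full answer
    nameserver 192.168.1.10   # CentOS server: asked first, knows the *.local test names
    nameserver 192.168.1.1    # DD-WRT router: only consulted if the first stops answering
    </code>

    Note that the second entry is a timeout fallback rather than a strict "try the other one for names the first doesn't know" mechanism, which is why the CentOS box should stay first in the list.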

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: Reduce storage utilization Reduce bandwidth needed to sync the library with cloud storage Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media) zpool list reports 1.00 dedupe. If I copy all of the files (make exact duplicates of the three) dedupe climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames) but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code) but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job are appreciated.
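
    For reference, a rough sketch of how dedup effectiveness can be checked without copying data around (the pool name "tank" is a placeholder); the relevant detail is that dedup only matches whole, recordsize-aligned blocks, so even a small difference in header length shifts every block after it:

    <code>
    zfs get recordsize,dedup tank   # dedup matches whole records, not arbitrary byte ranges
    zdb -S tank                     # simulate the dedup table for data already in the pool
    zpool list tank                 # the DEDUP column shows the ratio zpool is reporting
    </code>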

    Read the article

  • How do I find and kill a php loop (process)?

    - by Hoytman
    I have a PHP script that I have been developing which calls an external API within a loop. It is being tested on a VPS running LAMP on Debian. I noticed this morning that the API was not responding to my script. When I called the provider, they told me that my server had been calling the API thousands of times per hour for the past 10 hours. I am assuming that a PHP script (which I had been working on the day before and testing on my VPS) entered an infinite loop during one of the executions and never came out (I have been testing it from the command line, not over the web). I have attempted to stop and start Apache, but the API support staff say that the calls are still coming in from my server address. How can I find and stop the process? Also, is there a possibility that the Apache stop/start solved the problem, but the API is still trying to sort through past calls? Please forgive me for not using my local test environment correctly. Edit: I do not know the process name; I need to discover the name (or PID) based on behavior.
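
    Since the name isn't known, a sketch of finding it by behavior: a PHP process started from the command line ten hours ago stands out by its elapsed time (<pid> below is whatever the first commands report):

    <code>
    ps -eo pid,etime,args | grep [p]hp   # elapsed time makes a 10-hour-old loop obvious
    pgrep -fl php                        # or match on the script name if part of it is known
    kill <pid>                           # SIGTERM first
    kill -9 <pid>                        # SIGKILL only if it refuses to die
    </code>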

    Read the article

  • Debian - Problems Unmounting External Hard Drives

    - by user331981
    I recently installed Debian Testing on a new laptop and I just noticed that I am having some issues with unmounting external hard drives. I am using Mate Desktop 1.8.1. With the 1st drive, if I right click on the drive and select “safely remove”: the drive unmounts, spins down, then immediately spins back up and remounts. Unable to unmount. With the 2nd drive, if I right click on the drive and select “safely remove”: the drive unmounts but does not spin down. With the 3rd drive, if I right click on the drive and select “safely remove”: the drive unmounts but does not spin down, immediately spins back up but does not remount, and after 20 seconds it spins down and stays that way. Behavior is the same on both USB 2.0 and USB 3.0 ports. On my last laptop, on which I also used Debian Testing + Mate desktop, the safe removal of drives worked out of the box and I never had an issue with it. The drives would unmount, spin down and stay that way; to remount a drive, one needed to unplug the device and plug it back in. I am unsure how to troubleshoot this issue and I am not sure if it is merely a matter of installing a “missing” package or editing a config file. Thank you in advance.
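
    For comparison, this is roughly what I'd expect "safely remove" to be doing underneath on a udisks2 system (device names are assumed; the second command is the one that actually detaches the device so it stays spun down until it is replugged):

    <code>
    udisksctl unmount -b /dev/sdb1     # unmount the filesystem
    udisksctl power-off -b /dev/sdb    # detach and power off the whole device
    </code>

    If these two commands behave correctly from a terminal but the desktop's "safely remove" does not, that would point at the Mate/Caja side rather than at udisks itself.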

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated as they take a couple of days to calculate. That table is in the format of [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of these updates into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids: { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] } Updates are then issued in the format of UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6); UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10); I'm pretty sure I'm doing this correctly in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue though: when I was testing with updating ~13 million records, this only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or what could be slowing me down with this larger record set? As far as server specs go they're pretty good: heaps of memory/CPU, and the whole DB should fit into memory with plenty of room to grow.
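
    For clarity, a sketch of the grouping and batching strategy described above (db, precalculated_updates and the batch size of 5000 are placeholders, not part of the original script):

    <code>
    # Group pre-calculated updates by value, then issue bounded IN () batches.
    updates_by_value = Hash.new { |h, k| h[k] = [] }
    precalculated_updates.each { |product_id, value| updates_by_value[value] << product_id }

    updates_by_value.each do |value, ids|
      ids.sort.each_slice(5000) do |batch|   # keep each IN () list to a bounded size
        sql = "UPDATE product SET update_value = #{value.to_i} " +
              "WHERE product_id IN (#{batch.join(',')})"
        db.query(sql)
      end
    end
    </code>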

    Read the article

  • Too many connections to Sql Server 2008

    - by Luis Forero
    I have an application in C# on .NET Framework 4.0. Like many apps, this one connects to a database to get information. In my case this database is SQL Server 2008 Express, and the database is on my machine. In my data layer I'm using Enterprise Library 5.0. When I publish my app on my local machine (Classic app pool, Windows Professional, IIS 7.5) the application works fine. I'm using this query to check the number of connections my application is creating while I'm testing it: SELECT db_name(dbid) as DatabaseName, count(dbid) as NoOfConnections, loginame as LoginName FROM sys.sysprocesses WHERE dbid > 0 AND db_name(dbid) = 'MyDataBase' GROUP BY dbid, loginame When I start testing, the number of connections starts growing, but at some point it tops out at 26. I think that's OK, because the app works. When I publish the app to TestMachine1 (XP Mode virtual machine, Windows XP Professional, IIS 5.1) it works fine; the behavior is the same, the number of connections to the database climbs to 24 or 26 and then stays there no matter what I do in the application. The problem: when I publish to TestMachine2 (Classic app pool, Windows Server 2008 R2, IIS 7.5) and start to test the application, the number of connections to the database starts to grow, but this time it grows very rapidly and doesn't stop at 24 or 26; it keeps growing until it reaches 100, and the application stops working at that point. I have checked for any differences in the publications, especially between Windows Professional and Windows Server, and they seem to have the same parameters and configurations. Any clues why this could be happening? Any suggestions?
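
    For reference, a minimal sketch of the ADO.NET disposal pattern that keeps pooled connections from piling up (the connection string and query are placeholders); whether the Enterprise Library data-access calls are being disposed the same way on the failing server is the first thing to verify:

    <code>
    // requires: using System.Data.SqlClient;
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT 1", connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
    }   // Dispose() returns the connection to the pool even when an exception is thrown
    </code>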

    Read the article

  • IE8 page reload hangs

    - by Rod
    When a 7mb HTM file is first opened (by clicking on the file icon or using Open With), both IE8 and Firefox display this browser file quickly. After the file is closed, Firefox will reopen this file quickly, but IE8 appears to hang during the reopen. Clearing the IE cache does not help. However, IE will reopen the file quickly again only if the File/Open/Browse feature of the menu bar is used (clicking on the file icon can be used only once between computer reboots). Testing suggests that the problem relates to the number of HTML hyperlinks pointing to another part of the file. There are many hyperlinks, but they are not a problem during the first load of the document (between computer reboots). What needs to be fixed to avoid use of the workaround? Using Windows XP SP3 Update 6/23/12 - Controlled testing shows that the number of hyperlinks is not the problem. The way this large file is opened is the difference: 1) from the IE menu bar, File/Open/Browse is consistent and fast (but not as fast as FF). 2) clicking on the file name in the folder (even when IE is the default program for this file type) causes a much delayed load of the file. Creating a smaller file demonstrates the delayed load, but verifies that the load eventually occurs.

    Read the article

  • Sinatra and XML POST request

    - by user292815
    I don't know is it my mistake or no. So i have that code: <code> post '/singin/get_token' do content_type :xml puts request.body.read puts xmlRequest xmlRequest = REXML::Document.new(request.body.read) ... </code> And when i post something like that: <code> <?xml version="1.0" encoding="utf-16"?><request xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><username>adsfasdf</username></request> </code> I receive that in my console: <code> 127.0.0.1 - - [12/Mar/2010 21:18:20] "POST /singin/get_token HTTP/1.1" 500 105872 0.1339 Iconv::InvalidCharacter - ">": /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run'Iconv::InvalidCharacter: ">" /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `conv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/encodings/ICONV.rb:7:in `decode_iconv' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:58:in `encoding=' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:46:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:164:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/source.rb:17:in `create_from' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:146:in `stream=' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/baseparser.rb:123:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:9:in `initialize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `new' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:228:in `build' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize' zaiaku-game-server.rb:70:in `new' zaiaku-game-server.rb:70:in `block in <main>' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:811:in `block in route' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:488:in `route_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:477:in `block (2 levels) in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:474:in `block in route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `each' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:453:in `route!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:569:in `dispatch!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `block in call!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `instance_eval' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `block in invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:536:in `invoke' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:388:in `call!' 
/Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:377:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `block in call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:973:in `synchronize' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:928:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:in `block in pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `catch' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `pre_process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in `process' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:in `receive_data' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:in `start' /Users/andoriyu/.gem/ruby/1.9.1/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/base.rb:896:in `run!' /Users/andoriyu/.homebrew/Cellar/ruby/1.9.1-p378/lib/ruby/gems/1.9.1/gems/sinatra-0.9.6/lib/sinatra/main.rb:35:in `block in <top (required)>' !! Unexpected error while processing request: incompatible character encodings: ASCII-8BIT and UTF-8 <code>
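
    Two things worth ruling out, sketched below: request.body.read is called twice in the handler above (the second read returns an empty string), and the payload declares encoding="utf-16" while the bytes REXML receives look like UTF-8, which would explain the Iconv failure. The sub() line is only a diagnostic hack, not a proper fix:

    <code>
    # requires: require 'rexml/document'
    post '/singin/get_token' do
      content_type :xml
      raw = request.body.read                 # read once; a second read returns ""
      puts raw
      # assumption: the client actually sends UTF-8 bytes despite declaring utf-16
      raw = raw.sub('encoding="utf-16"', 'encoding="utf-8"')
      xml_request = REXML::Document.new(raw)
      # ...
    end
    </code>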

    Read the article

  • Create a class that inherits DrawableGameComponent in XNA as a class with custom functions

    - by user3675013
    using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; namespace TileEngine { class Renderer : DrawableGameComponent { public Renderer(Game game) : base(game) { } SpriteBatch spriteBatch ; protected override void LoadContent() { base.LoadContent(); } public override void Draw(GameTime gameTime) { base.Draw(gameTime); } public override void Update(GameTime gameTime) { base.Update(gameTime); } public override void Initialize() { base.Initialize(); } public RenderTarget2D new_texture(int width, int height) { Texture2D TEX = new Texture2D(GraphicsDevice, width, height); //create the texture to render to RenderTarget2D Mine = new RenderTarget2D(GraphicsDevice, width, height); GraphicsDevice.SetRenderTarget(Mine); //set the render device to the reference provided //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. return Mine; //I'm hoping that this returns the same instance and not a copy. } public void draw_texture(int width, int height, RenderTarget2D Mine) { GraphicsDevice.SetRenderTarget(null); //Set the renderer to render to the backbuffer again Rectangle drawrect = new Rectangle(0, 0, width, height); //Set the rendering size to what we want spriteBatch.Begin(); //This uses spritebatch to draw the texture directly to the screen spriteBatch.Draw(Mine, drawrect, Color.White); //This uses the color white spriteBatch.End(); //ends the spritebatch //Call base.draw after this since it doesn't seem to recognize inside the function //maybe base.draw can be used with spritebatch. Idk. We'll see if the order of operation //works out. Wish I could call base.draw here. } } } I solved a previous issue where I wasn't allowed to access GraphicsDevice outside the main Default 'main' class Ie "Game" or "Game1" etc. Now I have a new issue. FYi no one told me that it would be possible to use GraphicsDevice References to cause it to not be null by using the drawable class. (hopefully after this last bug is solved it doesn't still return null) Anyways at present the problem is that I can't seem to get it to initialize as an instance in my main program. Ie Renderer tileClipping; and I'm unable to use it such as it is to be noted i haven't even gotten to testing these two steps below but before it compiled but when those functions of this class were called it complained that it can't render to a null device. Which meant that the device wasn't being initialized. I had no idea why. It took me hours to google this. I finally figured out the words I needed.. which were "do my rendering in XNA in a seperate class" now I haven't used the addcomponent function because I don't want it to only run these functions automatically and I want to be able to call the custom ones. In a nutshell what I want is: *access to rendering targets and graphics device OUTSIDE default class *passing of Rendertarget2D (which contain textures and textures should automatically be passed with a rendering target? ) *the device should be passed to this function as well OR the device should be passed to this function as a byproduct of passing the rendertarget (which is automatically associated with the render device it was given originally) *I'm assuming I'm dealing with abstracted pointers here so when I pass a class object or instance, I should be recieving the SAME object , I referenced, and not a copy that has only the lifespan of the function running. 
*the purpose for all these options: I want to initialize new 2d textures on the fly to customize tileclipping and even the X , y Offsets of where a WHOLE texture will be rendered, and the X and Y offsets of where tiles will be rendered ON that surface. This is why. And I'll be doing region based lighting effects per tile or even per 8X8 pixel spaces.. we'll see I'll also be doing sprite rotations on the whole texture then copying it again to a circular masked texture, and then doing a second copy for only solid tiles for masked rotated collisions on sprites. I'll be checking the masked pixels for my collision, and using raycasting possibly to check for collisions on those areas. The sprite will stay in the center, when this rotation happens. Here is a detailed diagram: http://i.stack.imgur.com/INf9K.gif I'll be using texture2D for steps 4-6 I suppose for steps 1 as well. Now ontop of that, the clipping size (IE the sqaure rendered) will be able to be shrunk or increased, on a per frame basis Therefore I can't use the same static size for my main texture2d and I can't use just the backbuffer Or we get the annoying flicker. Also I will have multiple instances of the renderer class so that I can freely pass textures around as if they are playing cards (in a sense) layering them ontop of eachother, cropping them how i want and such. and then using spritebatch to simply draw them at the locations I want. Hopefully this makes sense, and yes I will be planning on using alpha blending but only after all tiles have been drawn.. The masked collision is important and Yes I am avoiding using math on the tile rendering and instead resorting to image manipulation in video memory which is WHY I need this to work the way I'm intending it to work and not in the default way that XNA seems to handle graphics. Thanks to anyone willing to help. I hate the code form offered, because then I have to rely on static presence of an update function. What if I want to kill that update function or that object, but have it in memory, but just have it temporarily inactive? I'm making the assumption here the update function of one of these gamecomponents is automatic ? Anyways this is as detailed as I can make this post hopefully someone can help me solve the issue. Instead of tell me "derrr don't do it this wayyy" which is what a few people told me (but they don't understand the actual goal I have in mind) I'm trying to create basically a library where I can copy images freely no matter the size, i just have to specify the size in the function then as long as a reference to that object exists it should be kept alive? right? :/ anyways.. Anything else? I Don't know. I understand object oriented coding but I don't understand this XNA It's beggining to feel impossible to do anything custom in it without putting ALL my rendering code into the draw function of the main class tileClipping.new_texture(GraphicsDevice, width, height) tileClipping.Draw_texture(...)
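
    A sketch of the wiring I'd try (not a drop-in fix; it assumes the Renderer also creates its SpriteBatch in LoadContent): adding the component to Game.Components is what makes XNA call its Initialize/LoadContent and gives it a valid GraphicsDevice, while keeping a typed field means the custom methods can still be called directly, and the component can be made temporarily inactive later via its Enabled/Visible properties instead of being destroyed:

    <code>
    // Inside the main Game class (names from the question; sizes are arbitrary here)
    Renderer tileClipping;

    protected override void Initialize()
    {
        tileClipping = new Renderer(this);
        Components.Add(tileClipping);   // XNA now initializes it and supplies GraphicsDevice
        base.Initialize();
    }

    protected override void Draw(GameTime gameTime)
    {
        RenderTarget2D scratch = tileClipping.new_texture(256, 256);
        // ... draw the tile layer into 'scratch' here ...
        tileClipping.draw_texture(256, 256, scratch);
        base.Draw(gameTime);
    }
    </code>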

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background: tl;dr at the end League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where the League is situated and then the magic happens: @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1" This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand, (split for convenience) /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \ /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1 Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exactly same thing happens if I just call /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends Next I tried seeing what actually happens when Spectator mode is initiated so I ran $ ps -axf | grep -i lol which showed UID PID PPID C STIME TTY TIME CMD 503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient) 503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app 503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher 503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393 503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1 Of Interest: there is (LolClient) which I cannot kill by it's PID. UserKernel updateandrun lol_launcher LoLLauncher.app is launched first. LoLLauncher is launched by the UserKernel (as we can see from the PPID) The very long command (PID: 24927) is how Spectator mode is launched, and is also launched by UserKernel. Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server. I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than mine own). Next I ran dtruss -a -b 100m -f -p $PID on everything I could think of: the LolClient, the LolLauncher and the UserKernel and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. 
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to be equal. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is whether there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting, I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(

    Read the article

  • TemplateBinding with Converter - what is wrong?

    - by MartyIX
    I'm creating a game desk. I wanted to specify field size (one field is a square) as a attached property and with this data set value of ViewPort which would draw 2x2 matrix (and tile mode would do the rest of game desk). I'm quite at loss what is wrong because the binding doesn't work. Testing line in XAML for the behaviour I would like to have: <DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None"> The game desk is based on this sample of DrawingPaint: http://msdn.microsoft.com/en-us/library/aa970904.aspx (an image is here) XAML: <Window x:Class="Sokoban.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:Sokoban" Title="Window1" Height="559" Width="419"> <Window.Resources> <local:FieldSizeToRectConverter x:Key="fieldSizeConverter" /> <Style x:Key="GameDesk" TargetType="{x:Type Rectangle}"> <Setter Property="local:GameDeskProperties.FieldSize" Value="50" /> <Setter Property="Fill"> <Setter.Value> <!--<DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None">--> <DrawingBrush Viewport="{TemplateBinding local:GameDeskProperties.FieldSize, Converter={StaticResource fieldSizeConverter}}" ViewportUnits="Absolute" TileMode="None"> <DrawingBrush.Drawing> <DrawingGroup> <GeometryDrawing Brush="CornflowerBlue"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,100,100" /> </GeometryDrawing.Geometry> </GeometryDrawing> <GeometryDrawing Brush="Azure"> <GeometryDrawing.Geometry> <GeometryGroup> <RectangleGeometry Rect="0,0,50,50" /> <RectangleGeometry Rect="50,50,50,50" /> </GeometryGroup> </GeometryDrawing.Geometry> </GeometryDrawing> </DrawingGroup> </DrawingBrush.Drawing> </DrawingBrush> </Setter.Value> </Setter> </Style> </Window.Resources> <StackPanel> <Rectangle Style="{StaticResource GameDesk}" Width="300" Height="150" /> </StackPanel> </Window> Converter and property definition: using System; using System.Collections.Generic; using System.Text; using System.Windows.Controls; using System.Windows; using System.Diagnostics; using System.Windows.Data; namespace Sokoban { public class GameDeskProperties : Panel { public static readonly DependencyProperty FieldSizeProperty; static GameDeskProperties() { PropertyChangedCallback fieldSizeChanged = new PropertyChangedCallback(OnFieldSizeChanged); PropertyMetadata fieldSizeMetadata = new PropertyMetadata(50, fieldSizeChanged); FieldSizeProperty = DependencyProperty.RegisterAttached("FieldSize", typeof(int), typeof(GameDeskProperties), fieldSizeMetadata); } public static int GetFieldSize(DependencyObject target) { return (int)target.GetValue(FieldSizeProperty); } public static void SetFieldSize(DependencyObject target, int value) { target.SetValue(FieldSizeProperty, value); } static void OnFieldSizeChanged(DependencyObject target, DependencyPropertyChangedEventArgs e) { Debug.WriteLine("FieldSize just changed: " + e.NewValue); } } [ValueConversion(/* sourceType */ typeof(int), /* targetType */ typeof(Rect))] public class FieldSizeToRectConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { Debug.Assert(targetType == typeof(int)); int fieldSize = int.Parse(value.ToString()); return new Rect(0, 0, 2 * fieldSize, 2 * fieldSize); } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { // should not be called in our example throw new 
NotImplementedException(); } } }

    Read the article

  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an Open GL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows: Renderer Utilization: 95% to 99% Tiler Utilization: ~27% I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are: How can I determine with more granularity where the bottleneck is? That is my biggest problem, I just don't know what is taking all the time. If it is fillrate, is there anything I do to improve it besides just drawing less? I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (drawing order, not everything fits on one 1024x1024 texture, etc). Every frame I do 10 image binds. This seem pretty reasonable, but I could be mistaken. I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry. I can try to be more precise if needed. Each image is 2 triangles and I try to batch things were possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons? Are there certain things to watch out for that could reduce performance? Eg, should I avoid glTranslate, glColor4g, etc? I'm using glScissor in a 3 places per frame. Each use consists of 2 glScissor calls, one to set it up, and one to reset it to what it was. I don't know if there is much of a performance impact here. If I used PVRTC would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues. Here is a rough idea of what I'm drawing, in this order: 1) Switch to perspective matrix. 2) Draw a full screen background image 3) Draw a full screen image with translucency (this one has a scrolling texture). 4) Draw a few sprites. 5) Switch to ortho matrix. 6) Draw a few sprites. 7) Switch to perspective matrix. 8) Draw sprites and some other textured geometry. 9) Switch to ortho matrix. 10) Draw a few sprites (eg, game HUD). Steps 1-6 draw a bunch of background stuff. 8 draws most of the game content. 10 draws the HUD. As you can see, there are many layers, some of them full screen and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if reqested. Thanks in advance for any performance tips or general advice on my problem!
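
    Since the renderer (fragment) side is the one that's pegged, one cheap fill-rate saving to illustrate: blending is per-pixel work, and the opaque full-screen layers don't need it. A sketch, where the two draw calls are placeholders for steps 2 and 3-10 in the list above:

    <code>
    glDisable(GL_BLEND);                               /* opaque background: no blend cost */
    drawFullScreenBackground();                        /* placeholder for step 2 */

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawTranslucentLayers();                           /* placeholder for steps 3 onward */
    </code>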

    Read the article

  • Inherited variables are not reading correctly when using bitwise comparisons

    - by Shawn B
    Hey, I have a few classes set up for a game, with XMapObject as the base, and XEntity, XEnviron, and XItem inheriting it. MapObjects have a number of flags, one of them being MAPOBJECT_SOLID. My problem is that XEntity is the only class that correctly detects MAPOBJECT_SOLID. Both Items are Environs are always considered solid by the game, regardless of the flag's state. What is important is that Environs and Item should almost never be solid. Here are the relevent code samples: XMapObject: class XMapObject : public XObject { public: Uint8 MapObjectType,Location[2],MapObjectFlags; XMapObject *NextMapObject,*PrevMapObject; XMapObject(); void CreateMapObject(Uint8 MapObjectType); void SpawnMapObject(Uint8 MapObjectLocation[2]); void RemoveMapObject(); void DeleteMapObject(); void MapObjectSetLocation(Uint8 Y,Uint8 X); void MapObjectMapLink(); void MapObjectMapUnlink(); }; XEntity: class XEntity : public XMapObject { public: Uint8 Health,EntityFlags; float Speed,Time; XEntity *NextEntity,*PrevEntity; XItem *IventoryList; XEntity(); void CreateEntity(Uint8 EntityType,Uint8 EntityLocation[2]); void DeleteEntity(); void EntityLink(); void EntityUnlink(); Uint8 MoveEntity(Uint8 YOffset,Uint8 XOffset); }; XEnviron: class XEnviron : public XMapObject { public: Uint8 Effect,TimeOut; void CreateEnviron(Uint8 Type,Uint8 Y,Uint8 X,Uint8 TimeOut); }; XItem: class XItem : public XMapObject { public: void CreateItem(Uint8 Type,Uint8 Y,Uint8 X); }; And lastly, the entity move code. Only entities are capable of moving themselves. Uint8 XEntity::MoveEntity(Uint8 YOffset,Uint8 XOffset) { Uint8 NewY = Location[0] + YOffset, NewX = Location[1] + XOffset; if((NewY >= 0 && NewY < MAPY) && (NewX >= 0 && NewX < MAPX)) { XTile *Tile = GetTile(NewY,NewX); if(Tile->MapList != NULL) { XMapObject *MapObject = Tile->MapList; while(MapObject != NULL) { if(MapObject->MapObjectFlags & MAPOBJECT_SOLID) { printf("solid\n"); return 0; } MapObject = MapObject->NextMapObject; } } if(Tile->Flags & TILE_SOLID && EntityFlags & ENTITY_CLIPPING) { return 0; } this->MapObjectSetLocation(NewY,NewX); return 1; } return 0; } What is wierd, is that the bitwise operator always returns true when the MapObject is an Environ or an Item, but it works correctly for Entities. For debug I am using the printf "Solid", and also a printf containing the value of the flag for both Environs and Items. Any help is greatly appreciated, as this is a major bug for the small game I am working on.
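
    One thing worth checking, since the constructor bodies aren't shown in the question: MapObjectFlags is a plain Uint8, so if it is never zeroed for Environs and Items, the & test reads leftover garbage and will frequently come out "solid". A sketch of the convention being assumed here:

    <code>
    // Hypothetical base constructor body: zero the flags so nothing is solid by accident.
    XMapObject::XMapObject()
    {
        MapObjectFlags = 0;
        MapObjectType  = 0;
        NextMapObject  = PrevMapObject = NULL;
    }

    // Setting, clearing and testing a flag:
    //   object->MapObjectFlags |=  MAPOBJECT_SOLID;   // make solid
    //   object->MapObjectFlags &= ~MAPOBJECT_SOLID;   // make passable
    //   if (object->MapObjectFlags & MAPOBJECT_SOLID) { /* blocked */ }
    </code>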

    Read the article

  • JRuby wrong element type class java.lang.String(array contains char) related to JAVA_HOME

    - by Daryl
    I am on Ubuntu x64 bit running: java version "1.6.0_18" OpenJDK Runtime Environment (IcedTea6 1.8) (6b18-1.8-0ubuntu1) OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode) and jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2010-02-11 6586) (OpenJDK 64-Bit Server VM 1.6.0_18) [amd64-java] I have this code running on my Windows 7 computer at home. I recently copied over my whole folder over to Ubuntu, installed java, jruby, and associated gems but I get this error when I run my main file: jruby run.rb test =================Processing FREDERICKSBURG_1.1======================= ERROR IN TESTING wrong element type class java.lang.String(array contains char) /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java' /home/daryl/Desktop/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split' /home/daryl/Desktop/work/Code/geografikos/lib/models/page.rb:103:in `sentences' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each' /home/daryl/Desktop/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process' /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:111:in `generate_all' /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `each' /home/daryl/Desktop/work/Code/geografikos/lib/statistics.rb:105:in `generate_all' run.rb:56 The focus of the error is: ERROR IN TESTING wrong element type class java.lang.String(array contains char) Everything works fine on my windows machine. I figured I was getting this error because I did not have JAVA_HOME set however I added this to bashrc as: export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk and have confirmed: echo $JAVA_HOME /usr/lib/jvm/java-1.6.0-openjdk I can produce a similar error by removing my JAVA_HOME variable on windows: =================Processing FREDERICKSBURG_1.3======================= ERROR IN TESTING cannot convert instance of class org.jruby.RubyString to char C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `to_java' C:/work/Code/geografikos/lib/sentence_splitter/splitter.rb:21:in `split' C:/work/Code/geografikos/lib/models/page.rb:103:in `sentences' C:/work/Code/geografikos/lib/extractor/lingpipe_svm.rb:34:in `extract' C:/work/Code/geografikos/lib/extractor/geo_controller.rb:9:in `process' C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `each' C:/work/Code/geografikos/lib/extractor/geo_controller.rb:8:in `process' C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `each' C:/work/Code/geografikos/lib/extractor/geo_controller.rb:6:in `process' C:/work/Code/geografikos/lib/statistics.rb:111:in `generate_all' C:/work/Code/geografikos/lib/statistics.rb:105:in `each' C:/work/Code/geografikos/lib/statistics.rb:105:in `generate_all' run.rb:56 It is obviously not exactly the same but I have a feeling this has to do with the java path. You can probably derive from the error that I am just trying to convert a ruby variable to java using to_java. This works fine on my windows machine and I have confirmed the gems are the same but I don't think this has to do with gems. I lied. I changed my JAVA_HOME back on my windows machine and this error still occurs. 
So now the code doesn't run on either machine. I recently installed git on my windows machine and added the code to a repository. But I haven't really done anything with it. All it said was it will convert all LF to CRLF...That shouldn't change anything though should it? Any ideas on why I am now getting these errors? I haven't changed anything on my windows machine in months except for installing git.
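
    For what it's worth, a sketch of making the target Java type explicit instead of relying on to_java's guess; both traces complain about char versus String element types, which is exactly the kind of thing an explicit type pins down (the strings here are placeholders):

    <code>
    text = "some sentence text"
    jstring    = text.to_java(java.lang.String)   # a single java.lang.String
    jstr_array = ["a", "b"].to_java(:string)      # a Java String[] rather than char[]/Object[]
    </code>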

    Read the article

  • Custom event loop and UIKit controls. What extra magic Apple's event loop does?

    - by tequilatango
    Does anyone know or have good links that explain what iPhone's event loop does under the hood? We are using a custom event loop in our OpenGL-based iPhone game framework. It calls our game rendering system, calls presentRenderbuffer and pumps events using CFRunLoopRunInMode. See the code below for details. It works well when we are not using UIKit controls (as a proof, try Facetap, our first released game). However, when using UIKit controls, everything almost works, but not quite. Specifically, scrolling of UIKit controls doesn't work properly. For example, let's consider following scenario. We show UIImagePickerController on top of our own view. UIImagePickerController covers our custom view We also pause our own rendering, but keep on using the custom event loop. As said, everything works, except scrolling. Picking photos works. Drilling down to photo albums works and transition animations are smooth. When trying to scroll photo album view, the view follows your finger. Problem: when scrolling, scrolling stops immediately after you lift your finger. Normally, it continues smoothly based on the speed of your movement, but not when we are using the custom event loop. It seems that iPhone's event loop is doing some magic related to UIKit scrolling that we haven't implemented ourselves. Now, we can get UIKit controls to work just fine and dandy together with our own system by using Apple's event loop and calling our own rendering via NSTimer callbacks. However, I'd still like to understand, what is possibly happening inside iPhone's event loop that is not implemented in our custom event loop. - (void)customEventLoop { OBJC_METHOD; float excess = 0.0f; while(isRunning) { animationInterval = 1.0f / openGLapp->ticks_per_second(); // Calculate the target time to be used in this run of loop float wait = max(0.0, animationInterval - excess); Systemtime target = Systemtime::now().after_seconds(wait); Scope("event loop"); NSAutoreleasePool* pool = [[ NSAutoreleasePool alloc] init]; // Call our own render system and present render buffer [self drawView]; // Pump system events [self handleSystemEvents:target]; [pool release]; excess = target.seconds_to_now(); } } - (void)drawView { OBJC_METHOD; // call our own custom rendering bool bind = openGLapp->app_render(); // bind the buffer to be THE renderbuffer and present its contents if (bind) { opengl::bind_renderbuffer(renderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } } - (void) handleSystemEvents:(Systemtime)target { OBJC_METHOD; SInt32 reason = 0; double time_left = target.seconds_since_now(); if (time_left <= 0.0) { while((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE)) == kCFRunLoopRunHandledSource) {} } else { float dt = time_left; while((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, dt, FALSE)) == kCFRunLoopRunHandledSource) { double time_left = target.seconds_since_now(); if (time_left <= 0.0) break; dt = (float) time_left; } } }

    Read the article

  • Design Question - how do you break the dependency between classes using an interface?

    - by Seth Spearman
    Hello, I apologize in advance but this will be a long question. I'm stuck. I am trying to learn unit testing, C#, and design patterns - all at once. (Maybe that's my problem.) As such I am reading the Art of Unit Testing (Osherove), and Clean Code (Martin), and Head First Design Patterns (O'Reilly). I am just now beginning to understand delegates and events (which you would see if you were to troll my SO questions of recent). I still don't quite get lambdas. To contextualize all of this I have given myself a learning project I am calling goAlarms. I have an Alarm class with members you'd expect (NextAlarmTime, Name, AlarmGroup, Event Trigger etc.) I wanted the "Timer" of the alarm to be extensible so I created an IAlarmScheduler interface as follows... public interface AlarmScheduler { Dictionary<string,Alarm> AlarmList { get; } void Startup(); void Shutdown(); void AddTrigger(string triggerName, string groupName, Alarm alarm); void RemoveTrigger(string triggerName); void PauseTrigger(string triggerName); void ResumeTrigger(string triggerName); void PauseTriggerGroup(string groupName); void ResumeTriggerGroup(string groupName); void SetSnoozeTrigger(string triggerName, int duration); void SetNextOccurrence (string triggerName, DateTime nextOccurrence); } This IAlarmScheduler interface define a component that will RAISE an alarm (Trigger) which will bubble up to my Alarm class and raise the Trigger Event of the alarm itself. It is essentially the "Timer" component. I have found that the Quartz.net component is perfectly suited for this so I have created a QuartzAlarmScheduler class which implements IAlarmScheduler. All that is fine. My problem is that the Alarm class is abstract and I want to create a lot of different KINDS of alarm. For example, I already have a Heartbeat alarm (triggered every (int) interval of minutes), AppointmentAlarm (triggered on set date and time), Daily Alarm (triggered every day at X) and perhaps others. And Quartz.NET is perfectly suited to handle this. My problem is a design problem. I want to be able to instantiate an alarm of any kind without my Alarm class (or any derived classes) knowing anything about Quartz. The problem is that Quartz has awesome factories that return just the right setup for the Triggers that will be needed by my Alarm classes. So, for example, I can get a Quartz trigger by using TriggerUtils.MakeMinutelyTrigger to create a trigger for the heartbeat alarm described above. Or TriggerUtils.MakeDailyTrigger for the daily alarm. I guess I could sum it up this way. Indirectly or directly I want my alarm classes to be able to consume the TriggerUtils.Make* classes without knowing anything about them. I know that is a contradiction, but that is why I am asking the question. I thought about putting a delegate field into the alarm which would be assigned one of these Make method but by doing that I am creating a hard dependency between alarm and Quartz which I want to avoid for both unit testing purposes and design purposes. I thought of using a switch for the type in QuartzAlarmScheduler per here but I know it is bad design and I am trying to learn good design. If I may editorialize a bit. I've decided that coding (predefined) classes is easy. Design is HARD...in fact, really hard and I am really fighting feeling stupid right now. 
I guess I want to know if you really smart people took a while to really understand and master this stuff or should I feel stupid (as I do) because I haven't grasped it better in the couple of weeks/months I have been studying. You guys are awesome and thanks in advance for your answers. Seth
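
    For the record, a sketch of the shape being aimed for (every name below is made up for illustration): the alarm classes talk only to a small interface, and a single Quartz-backed implementation, the only file that references Quartz, maps those calls onto TriggerUtils.MakeMinutelyTrigger, MakeDailyTrigger and friends. Tests then substitute a fake ITriggerFactory.

    <code>
    // All names here are hypothetical; only the Quartz-backed implementation
    // would ever reference TriggerUtils or any other Quartz type.
    public interface ITriggerFactory
    {
        void ScheduleMinutely(Alarm alarm, int intervalMinutes);
        void ScheduleDaily(Alarm alarm, TimeSpan timeOfDay);
    }

    public class HeartbeatAlarm : Alarm
    {
        private readonly ITriggerFactory triggers;

        public HeartbeatAlarm(ITriggerFactory triggers)   // injected, so tests can pass a fake
        {
            this.triggers = triggers;
        }

        public void Activate(int intervalMinutes)
        {
            triggers.ScheduleMinutely(this, intervalMinutes);   // no Quartz types in sight
        }
    }
    </code>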

    Read the article

  • Dispatcher Timer Problem

    - by will
    I am trying to make a game in silverlight that also has widgets in it. To do this I am using a dispatcher timer running a game loop that updates graphics etc. In this I have a variable that has to be accessed by both by the constantly running game loop and UI event code. At first look it seemed that the gameloop had its own local copy of currentUnit (the variable), despite the variable being declared globally. I am trying to update currentUnit with an event by the widget part of the app, but the timer's version of the variable is not being updated. What can I do get the currentUnit in the gameloop loop to be updated whenever I update currentUnit via a click event? Here is the code for setting currentUnit as part of a click event DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(Unit)); currentUnit = serializer.ReadObject(e.Result) as Unit; txtName.Text = currentUnit.name; Canvas.SetLeft(txtName, 100 - (int)Math.Ceiling(txtName.ActualWidth) / 2); txtX.Text = "" + currentUnit.x; txtY.Text = "" + currentUnit.y; txtX.Text = "" + currentUnit.owner; txtY.Text = "" + currentUnit.moved; txtName.Text = "" + currentUnit.GetHashCode(); And here is a snippet from the gameLoop loop //deal with phase changes and showing stuff if (txtPhase.Text == "Move" && movementPanel.Visibility == Visibility.Collapsed) { if (currentUnit != null) { if (currentUnit.owner) { if (currentUnit.moved) { txtMoved.Text = "This Unit has Already Moved!"; movementPanel.Visibility = Visibility.Collapsed; } else { txtMoved.Text = "" + currentUnit.GetHashCode(); movementPanel.Visibility = Visibility.Visible; } } else { txtMoved.Text = "bam"; movementPanel.Visibility = Visibility.Collapsed; } } else { txtMoved.Text = "slam"; movementPanel.Visibility = Visibility.Collapsed; } //loadUnitList(); } Here is the code for my unit class. using System; public class Unit { public int id { get; set; } public string name { get; set; } public string image { get; set; } public int x { get; set; } public int y { get; set; } public bool owner { get; set; } public int rotation { get; set; } public double movement { get; set; } public string type { get; set; } public bool moved { get; set; } public bool fired { get; set; } } Overall, any simple types, like a double is being 'updated' correctly, yet a complex of my own type (Unit) seems to be holding a local copy. Please help, I've asked other places and no one has had an answer for me!

    Read the article

  • Android: Having trouble creating a subclass of application to share data with multiple Activities

    - by Mike
    Hello, I just finished a couple of activities in my game and now I was going to start to wire them both up to use real game data, instead of the test data I was using just to make sure each piece worked. Since multiple Activities will need access to this game data, I started researching the best way to pass this data to my Activities. I know about using putExtra with intents, but my GameData class has quite a bit of data and not just simple key value pairs. Besides quite a few basic data types, it also has large arrays. I didn't really want to try and pass all that, unless I can pass the entire object, instead of just key/data pairs. I read the following post and thought it would be the way to go, but so far, I haven't got it to work. Android: How to declare global variables? I created a simple test app to try this method out, but it keeps crashing and my code seems to look the same as in the post above - except I changed the names. Here is the error I am getting. Can someone help me understand what I am doing wrong? 12-23 00:50:49.762: ERROR/AndroidRuntime(608): Caused by: java.lang.ClassCastException: android.app.Application It crashes on the following statement: GameData newGameData = ((GameData)getApplicationContext()); Here is my code: package mrk.examples.StaticGameData; import android.app.Application; public class GameData extends Application { private int intTest; GameData () { intTest = 0; } public int getIntTest(){ return intTest; } public void setIntTest(int value){ intTest = value; } } // My main activity package mrk.examples.StaticGameData; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.util.Log; public class StaticGameData extends Activity { int intStaticTest; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); GameData newGameData = ((GameData)getApplicationContext()); newGameData.setIntTest(0); intStaticTest = newGameData.getIntTest(); Log.d("StaticGameData", "Well: IntStaticTest = " + intStaticTest); newGameData.setIntTest(1); Log.d("StaticGameData", "Well: IntStaticTest = " + intStaticTest + " newGameData: " + newGameData.getIntTest()); Intent intentNew = new Intent(this, PassData2Activity.class); startActivity (intentNew); } } // My test Activity to see if it can access the data and its previous state from the last activity package mrk.examples.StaticGameData; import android.app.Activity; import android.os.Bundle; import android.util.Log; public class PassData2Activity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); GameData gamedataPass = ((GameData)getApplicationContext()); Log.d("PassData2Activity", "IntTest = " + gamedataPass.getIntTest()); } } Below is the relevant portion of my manifest: <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".StaticGameData" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".PassData2Activity"></activity> </application> <application android:name=".GameData" android:icon="@drawable/icon" android:label="@string/app_name"> </application> Thanks in advance for helping me understand why this code is crashing. 
Also, if you think this is simply the wrong approach for giving multiple activities access to the same data, please suggest an alternative. Keep in mind that I am talking about quite a few variables and some large arrays.

    Read the article

  • The type or namespace cannot be found (are you missing a using directive or an assembly reference?)

    - by Kumu
    I get the following error when I try to compile my C# program: The type or namespace name 'Login' could not be found (are you missing a using directive or an assembly reference?) using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace FootballLeague { public partial class MainMenu : Form { FootballLeagueDatabase footballLeagueDatabase; Game game; Team team; Login login; //Error here public MainMenu() { InitializeComponent(); changePanel(1); } public MainMenu(FootballLeagueDatabase footballLeagueDatabaseIn) { InitializeComponent(); footballLeagueDatabase = footballLeagueDatabaseIn; } private void Form_Loaded(object sender, EventArgs e) { } private void gameButton_Click(object sender, EventArgs e) { int option = 0; changePanel(option); } private void scoreboardButton_Click(object sender, EventArgs e) { int option = 1; changePanel(option); } private void changePanel(int optionIn) { gamePanel.Hide(); scoreboardPanel.Hide(); string title = "Football League System"; switch (optionIn) { case 0: gamePanel.Show(); this.Text = title + " - Game Menu"; break; case 1: scoreboardPanel.Show(); this.Text = title + " - Display Menu"; break; } } private void logoutButton_Click(object sender, EventArgs e) { login = new Login(); login.Show(); this.Hide(); } Login.cs class: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace FootballLeagueSystem { public partial class Login : Form { MainMenu menu; public Login() { InitializeComponent(); } private void administratorLoginButton_Click(object sender, EventArgs e) { string username1 = "08247739"; string password1 = "08247739"; if ((userNameTxt.Text.Length) == 0) MessageBox.Show("Please enter your username!"); else if ((passwordTxt.Text.Length) == 0) MessageBox.Show("Please enter your password!"); else if (userNameTxt.Text.Equals("") || passwordTxt.Text.Equals("")) MessageBox.Show("Invalid Username or Password!"); else { if (this.userNameTxt.Text == username1 && this.passwordTxt.Text == password1) MessageBox.Show("Welcome Administrator!", "Administrator Login"); menu = new MainMenu(); menu.Show(); this.Hide(); } } private void managerLoginButton_Click(object sender, EventArgs e) { { string username2 = "1111"; string password2 = "1111"; if ((userNameTxt.Text.Length) == 0) MessageBox.Show("Please enter your username!"); else if ((passwordTxt.Text.Length) == 0) MessageBox.Show("Please enter your password!"); else if (userNameTxt.Text.Equals("") && passwordTxt.Text.Equals("")) MessageBox.Show("Invalid Username or Password!"); else { if (this.userNameTxt.Text == username2 && this.passwordTxt.Text == password2) MessageBox.Show("Welcome Manager!", "Manager Login"); menu = new MainMenu(); menu.Show(); this.Hide(); } } } private void cancelButton_Click(object sender, EventArgs e) { this.Close(); } } } Where is the error? What am I doing wrong?
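
    As background (an illustration only, not a claim about where this particular project's error necessarily lies): a type declared in one namespace is visible from another only through a using directive or a fully qualified name. A minimal sketch, reusing the namespace names that appear in the question's code:

        namespace FootballLeagueSystem
        {
            public class Login { }
        }

        namespace FootballLeague
        {
            // Either import the other namespace ...
            using FootballLeagueSystem;

            public class MainMenu
            {
                Login login = new Login();                        // now resolves
                FootballLeagueSystem.Login other =
                    new FootballLeagueSystem.Login();             // ... or qualify fully
            }
        }

    The same rule applies whichever of the two namespaces the Login form is ultimately meant to live in.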

    Read the article

  • Singleton method not getting called in Cocos2d

    - by jini
    I am calling a Singleton method that does not get called when I try doing this. I get no errors or anything, just that I am unable to see the CCLOG message. Under what reasons would a compiler not give you error and not allow you to call a method? [[GameManager sharedGameManager] openSiteWithLinkType:kLinkTypeDeveloperSite]; The method is defined as follows: -(void)openSiteWithLinkType:(LinkTypes)linkTypeToOpen { CCLOG(@"WE ARE IN openSiteWithLinkType"); //I NEVER SEE THIS MESSAGE NSURL *urlToOpen = nil; if (linkTypeToOpen == kLinkTypeDeveloperSite) { urlToOpen = [NSURL URLWithString:@"http://somesite.com"]; } if (![[UIApplication sharedApplication]openURL:urlToOpen]) { CCLOG(@"%@%@",@"Failed to open url:",[urlToOpen description]); [self runSceneWithID:kMainMenuScene]; } } HERE IS THE CODE TO MY SINGLETON: #import "GameManager.h" #import "MainMenuScene.h" @implementation GameManager static GameManager* _sharedGameManager = nil; @synthesize isMusicON; @synthesize isSoundEffectsON; @synthesize hasPlayerDied; +(GameManager*) sharedGameManager { @synchronized([GameManager class]) { if (!_sharedGameManager) { [[self alloc] init]; return _sharedGameManager; } return nil; } } +(id)alloc { @synchronized ([GameManager class]) { NSAssert(_sharedGameManager == nil,@"Attempted to allocate a second instance of the Game Manager singleton"); _sharedGameManager = [super alloc]; return _sharedGameManager; } return nil; } -(id)init { self = [super init]; if (self != nil) { //Game Manager initialized CCLOG(@"Game Manager Singleton, init"); isMusicON = YES; isSoundEffectsON = YES; hasPlayerDied = NO; currentScene = kNoSceneUninitialized; } return self; } -(void) runSceneWithID:(SceneTypes)sceneID { SceneTypes oldScene = currentScene; currentScene = sceneID; id sceneToRun = nil; switch (sceneID) { case kMainMenuScene: sceneToRun = [MainMenuScene node]; break; default: CCLOG(@"Unknown Scene ID, cannot switch scenes"); return; break; } if (sceneToRun == nil) { //Revert back, since no new scene was found currentScene = oldScene; return; } //Menu Scenes have a value of < 100 if (sceneID < 100) { if (UI_USER_INTERFACE_IDIOM() != UIUserInterfaceIdiomPad) { CGSize screenSize = [CCDirector sharedDirector].winSizeInPixels; if (screenSize.width == 960.0f) { //iPhone 4 retina [sceneToRun setScaleX:0.9375f]; [sceneToRun setScaleY:0.8333f]; CCLOG(@"GM: Scaling for iPhone 4 (retina)"); } else { [sceneToRun setScaleX:0.4688f]; [sceneToRun setScaleY:0.4166f]; CCLOG(@"GM: Scaling for iPhone 3G or older (non-retina)"); } } } if ([[CCDirector sharedDirector] runningScene] == nil) { [[CCDirector sharedDirector] runWithScene:sceneToRun]; } else { [[CCDirector sharedDirector] replaceScene:sceneToRun]; } } -(void)openSiteWithLinkType:(LinkTypes)linkTypeToOpen { CCLOG(@"WE ARE IN openSiteWithLinkType"); NSURL *urlToOpen = nil; if (linkTypeToOpen == kLinkTypeDeveloperSite) { urlToOpen = [NSURL URLWithString:@"http://somesite.com"]; } if (![[UIApplication sharedApplication]openURL:urlToOpen]) { CCLOG(@"%@%@",@"Failed to open url:",[urlToOpen description]); [self runSceneWithID:kMainMenuScene]; } } -(void) test { CCLOG(@"this is test"); } @end
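
    As a side note, since the question ultimately concerns a shared-instance accessor: below is the usual shape of such an accessor, sketched in C# only to stay consistent with the other snippets on this page (it is not the poster's Objective-C, and all names are hypothetical). The property that matters is that every call returns the single instance; the first call creates it, and later calls hand back the same object rather than nothing.

        using System;

        public sealed class GameManagerExample
        {
            private static readonly object Gate = new object();
            private static GameManagerExample instance;

            private GameManagerExample() { }   // no public constructor

            public static GameManagerExample Shared
            {
                get
                {
                    lock (Gate)
                    {
                        // Create on first use, return the same object ever after.
                        if (instance == null)
                        {
                            instance = new GameManagerExample();
                        }
                        return instance;
                    }
                }
            }

            public void OpenSite(Uri url)
            {
                Console.WriteLine($"Would open {url}");
            }
        }

    With this shape, GameManagerExample.Shared.OpenSite(new Uri("http://somesite.com")) behaves identically on the first call and on every later one.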

    Read the article

  • non blocking client server chat application in java using nio

    - by Amith
    I built a simple chat application using nio channels. I am very much new to networking as well as threads. This application is for communicating with server (Server / Client chat application). My problem is that multiple clients are not supported by the server. How do I solve this problem? What's the bug in my code? public class Clientcore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void coreClient() { System.out.println("please enter the text"); BufferedReader stdin=new BufferedReader(new InputStreamReader(System.in)); SocketChannel sc = null; try { sc = SocketChannel.open(); sc.configureBlocking(false); sc.connect(new InetSocketAddress(8888)); int i=0; while (!sc.finishConnect()) { } for(int ii=0;ii>-22;ii++) { System.out.println("Enter the text"); String HELLO_REQUEST =stdin.readLine().toString(); if(HELLO_REQUEST.equalsIgnoreCase("end")) { break; } System.out.println("Sending a request to HelloServer"); ByteBuffer buffer = ByteBuffer.wrap(HELLO_REQUEST.getBytes()); sc.write(buffer); } } catch (IOException e) { e.printStackTrace(); } finally { if (sc != null) { try { sc.close(); } catch (Exception e) { e.printStackTrace(); } } } } public void run() { try { coreClient(); } catch(Exception ej) { ej.printStackTrace(); }}} public class ServerCore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void run() { try { coreServer(); } catch(Exception ej) { ej.printStackTrace(); } } private void coreServer() { try { ServerSocketChannel ssc = ServerSocketChannel.open(); try { ssc.socket().bind(new InetSocketAddress(8888)); while (true) { sckt_manager=SelectorProvider.provider().openSelector(); ssc.configureBlocking(false); SocketChannel sc = ssc.accept(); register_server(ssc,SelectionKey.OP_ACCEPT); if (sc == null) { } else { System.out.println("Received an incoming connection from " + sc.socket().getRemoteSocketAddress()); printRequest(sc); System.err.println("testing 1"); String HELLO_REPLY = "Sample Display"; ByteBuffer buffer = ByteBuffer.wrap(HELLO_REPLY.getBytes()); System.err.println("testing 2"); sc.write(buffer); System.err.println("testing 3"); sc.close(); }}} catch (IOException e) { e.printStackTrace(); } finally { if (ssc != null) { try { ssc.close(); } catch (IOException e) { e.printStackTrace(); } } } } catch(Exception E) { System.out.println("Ex in servCORE "+E); } } private static void printRequest(SocketChannel sc) throws IOException { ReadableByteChannel rbc = Channels.newChannel(sc.socket().getInputStream()); WritableByteChannel wbc = Channels.newChannel(System.out); ByteBuffer b = ByteBuffer.allocate(1024); // read 1024 bytes while (rbc.read(b) != -1) { b.flip(); while (b.hasRemaining()) { wbc.write(b); System.out.println(); } b.clear(); } } public void register_server(ServerSocketChannel ssc,int selectionkey_ops)throws Exception { ssc.register(sckt_manager,selectionkey_ops); }} public class HelloClient { public void coreClientChat() { Clientcore t=new Clientcore(); new Thread(t).start(); } public static void main(String[] args)throws Exception { HelloClient cl= new HelloClient(); cl.coreClientChat(); }} public class HelloServer { public void coreServerChat() { ServerCore t=new ServerCore(); new Thread(t).start(); } public static void main(String[] args) { HelloServer st= new HelloServer(); st.coreServerChat(); }}

    Read the article

  • (iOS) UI Automation AlertPrompt button/textField accessiblity

    - by lipd
    I'm having a bit of trouble with UI Automation (the built in to iOS tool) when it comes to alertView. First off, I'm not sure where I can set the accessibilityLabel and such for the buttons that are on the alertView. Secondly, although I am not getting an error, I can't get my textField to actually set the value of the textField to something. I'll put up my code for the alertView and the javaScript I am using for UI Automation. UIATarget.onAlert = function onAlert(alert) { // Log alerts and bail, unless it's the one we want var title = alert.name(); UIALogger.logMessage("Alert with title '" + title + "' encountered!"); alert.logElementTree(); if (title == "AlertPrompt") { UIALogger.logMessage(alert.textFields().length + ''); target.delay(1); alert.textFields()["AlertText"].setValue("AutoTest"); target.delay(1); return true; // Override default handler } else return false; } var target = UIATarget.localTarget(); var application = target.frontMostApp(); var mainWindow = application.mainWindow(); mainWindow.logElementTree(); //target.delay(1); //mainWindow.logElementTree(); //target.delay(1); var tableView = mainWindow.tableViews()[0]; var button = tableView.buttons(); //UIALogger.logMessage("Num buttons: " + button.length); //UIALogger.logMessage("num Table views: " + mainWindow.tableViews().length); //UIALogger.logMessage("Number of cells: " + tableView.cells().length); /*for (var currentCellIndex = 0; currentCellIndex < tableView.cells().length; currentCellIndex++) { var currentCell = tableView.cells()[currentCellIndex]; UIALogger.logStart("Testing table option: " + currentCell.name()); tableView.scrollToElementWithName(currentCell.name()); target.delay(1); currentCell.tap();// Go down a level target.delay(1); UIATarget.localTarget().captureScreenWithName(currentCell.name()); //mainWindow.navigationBar().leftButton().tap(); // Go back target.delay(1); UIALogger.logPass("Testing table option " + currentCell.name()); }*/ UIALogger.logStart("Testing add item"); target.delay(1); mainWindow.navigationBar().buttons()["addButton"].tap(); target.delay(1); if(tableView.cells().length == 5) UIALogger.logPass("Successfully added item to table"); else UIALogger.logFail("FAIL: didn't add item to table"); Here's what I'm using for the alertView #import "AlertPrompt.h" @implementation AlertPrompt @synthesize textField; @synthesize enteredText; - (id)initWithTitle:(NSString *)title message:(NSString *)message delegate:(id)delegate cancelButtonTitle:(NSString *)cancelButtonTitle okButtonTitle:(NSString *)okayButtonTitle withOrientation:(UIInterfaceOrientation) orientation { if ((self == [super initWithTitle:title message:message delegate:delegate cancelButtonTitle:cancelButtonTitle otherButtonTitles:okayButtonTitle, nil])) { self.isAccessibilityElement = YES; self.accessibilityLabel = @"AlertPrompt"; UITextField *theTextField; if(orientation == UIInterfaceOrientationPortrait) theTextField = [[UITextField alloc] initWithFrame:CGRectMake(12.0, 45.0, 260.0, 25.0)]; else theTextField = [[UITextField alloc] initWithFrame:CGRectMake(12.0, 30.0, 260.0, 25.0)]; [theTextField setBackgroundColor:[UIColor whiteColor]]; [self addSubview:theTextField]; self.textField = theTextField; self.textField.isAccessibilityElement = YES; self.textField.accessibilityLabel = @"AlertText"; [theTextField release]; CGAffineTransform translate = CGAffineTransformMakeTranslation(0.0, 0.0); [self setTransform:translate]; } return self; } - (void)show { [textField becomeFirstResponder]; [super show]; } - (NSString *)enteredText { 
return [self.textField text]; } - (void)dealloc { //[textField release]; [super dealloc]; } @end Thanks for any help!

    Read the article
