Search Results

  • What does path finding in internet routing do and how is it different from A*?

    - by alan2here
    Note: If you don't understand this question then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User.

    In many aspects, pathfinding algorithms like A* are very similar to internet routing. For example:

    - A node in an A* pathfinding system can search for a path through edges between other nodes.
    - A router that's part of the internet can search for a route through cables between other routers.

    In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, and each node can also temporarily store a state involving several numbers. Routers on the internet seem to have remarkable properties, as I understand it:

    - They are very performant.
    - New nodes can be added at any time that use a free address from a finite (not tree-like) address space.
    - It's real routing: like A*, there's never any doubling back, for example.
    - Similar IP addresses don't have to be geographically nearby.
    - The network reacts quickly to changes in the network's shape, for example if a line is down.
    - Routers share information, and it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses that each of its directions leads to most directly.

    I'm looking for a basic, general, high-level description of the algorithm's workings from the point of view of an individual router. Does anyone have one? I presume public internet routers don't use A*, as the overhead would be too large and it would scale too poorly. I also presume there is a single method used worldwide, because it seems as if it must involve a lot of transferring data to update and communicate a reasonable amount of state between neighboring routers. For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide, the detail and reliability of the routing is reduced over increasing distances, there is increasing backtracking involved in parts of the network that are less geographically uniform, or maybe each router really does perform an A*-style search, temporarily maintaining open and closed lists when a packet arrives.
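    For context, here is a rough sketch of one family of algorithms actually used between routers: distance-vector protocols such as RIP, which are based on Bellman-Ford rather than A*. This is a simplified illustration written for this question, not any real router's implementation; the class and field names are invented.

        import java.util.HashMap;
        import java.util.Map;

        // Minimal distance-vector routing sketch (Bellman-Ford style), for illustration only.
        // Each router keeps, per destination, the best known cost and the neighbor to forward to.
        public class Router {
            static class Route {
                int cost;          // best known cost to the destination
                String nextHop;    // neighbor to forward packets to
                Route(int cost, String nextHop) { this.cost = cost; this.nextHop = nextHop; }
            }

            final String name;
            final Map<String, Route> table = new HashMap<>();    // destination -> route

            Router(String name) {
                this.name = name;
                table.put(name, new Route(0, name));             // reaching ourselves costs nothing
            }

            // Called when a neighbor advertises its routing table; linkCost is the direct link to it.
            boolean onAdvertisement(String neighbor, int linkCost, Map<String, Route> neighborTable) {
                boolean changed = false;
                for (Map.Entry<String, Route> e : neighborTable.entrySet()) {
                    int candidate = linkCost + e.getValue().cost;     // cost of going via this neighbor
                    Route current = table.get(e.getKey());
                    if (current == null || candidate < current.cost) {
                        table.put(e.getKey(), new Route(candidate, neighbor));
                        changed = true;                               // re-advertise to our own neighbors
                    }
                }
                return changed;
            }
        }

    The contrast with A* is that there is no per-packet search at all: forwarding is a single table lookup, and the "search" happens incrementally as routers exchange advertisements. (Backbone routers use link-state and path-vector protocols such as OSPF and BGP rather than plain distance-vector, but the table-plus-advertisement shape is the same at a high level.)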

    Read the article

  • Collaborative work (small team) - Best practices

    - by LEM01
    I'm currently working in a very small team of programmers (2-3) and I'm looking for advice/best practices on how to organise our work. We're all working on the same application using PHP. Today we're all kind of working in our own way.

    Today's situation:

    - Each dev has a list of items to work on, one per week. What has to be done is defined at a high functional level (e.g. "Build the search engine for this product").
    - We commit / merge our individual branches (git) every week before the next meeting.
    - No real dev rules, no code review.
    - No tests written (ouch).

    Problems faced:

    - Code quality issues: discovering someone else's code is sometimes tough (inlining, variable/function/class names, spaces, comments...).
    - Changes to already existing classes (impact on someone else's work).
    - Responsibility of each dev is unclear: after getting someone else's code and discovering something messy, should I make the change? Should he make the change? How do we plan those things?

    What I'm looking for: basically I'm looking into structuring the way we develop things in order to avoid frustration and improve overall quality.

    - How do we define coding standards (naming conventions, code rules...)?
    - Do you use any validation script to make sure code is valid before committing?
    - Do you think that defining an architect role in the team is needed? Someone who would actually define what has to be developed during the next phase, by defining the interfaces or class descriptions that have to be written. (Does it make sense in such a small team?)

    Today we're losing time in understanding what others did or tried to do, and we're also losing time in discussions like "You should have done it that way! Why is this class doing that and not this? Shouldn't we have an embedded class rather than this set of data...?". I'm looking for a work process, maybe with more defined responsibilities and processes, in order to improve our performance. If you have experience, advice, best practices or anything else to share that we could benefit from, it will be much appreciated! Thanks a lot for your time!

    Read the article

  • Oracle Brings Java to iOS Devices (and Android too)

    - by Shay Shmeltzer
    Java developer, did you ever wish that you could take your Java skills and apply them to building applications for iOS mobile devices? Well, now you can! With the new Oracle ADF Mobile solution, Oracle has created a unique technology that allows developers to use the Java language to develop applications that install and run on both iOS and Android mobile devices.

    The solution is based on a thin native container that installs as part of your application. The container is able to run the same application you develop, unchanged, on both Android and iOS devices. One part of the container is a headless, lightweight JVM based on the Java ME CDC technology. This allows the execution of Java code on your mobile device. Java is used for building business logic, accessing the local SQLite encrypted database, and invoking and interacting with remote services.

    Java concepts on the UI too

    To further help transition Java developers to mobile developers, ADF Mobile borrows familiar concepts from the world of JSF to make the UI development experience simpler. The user interface layer of Oracle ADF Mobile is rendered with HTML5, which delivers a native user experience on the devices, including animations and gesture support. Using a set of rich components, developers can create mobile pages without needing to write low-level HTML5 and JavaScript code. The components cover everything from simple controls such as text fields, date pickers, buttons and links, to advanced data visualization components such as graphs, gauges and maps, and include unique mobile UI patterns such as lists and toggle selectors. Want to see the components in action? Access this demo instance from your mobile device. Need to further customize the look and feel? You can use CSS3 to achieve this.

    A controller layer - similar in functionality to the JSF controller - allows developers to simplify the way they build navigation between pages. The logic behind the pages is written in managed beans with various scopes - again similar to the JSF approach. Need to interact with device features like the camera, SMS, contacts etc.? Oracle has conveniently packaged access to these features in a set of services that you can just drag and drop into your pages as buttons and links, or activate with Java calls coded into your managed beans. Underneath the covers this layer is implemented using the open source PhoneGap solution.

    With the new Oracle ADF Mobile solution, transferring your Java skills into the mobile world has become much easier. Check out this development experience demo. And then go and download JDeveloper and the ADF Mobile extension and try it out on your own. For more on ADF Mobile, see the ADF Mobile OTN page.

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in.

    Authorization
    All pages that should only be reachable by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH

    Read the article

  • Fans running very fast on MacBook Pro 8.1 ubuntu 12.04

    - by Tomasz Kacprzak
    I installed Ubuntu 12.04 on a MacBook Pro 8.1 and one of the first things I noticed was that the fans were starting to spin very fast every few minutes for 10-30 seconds and then going back to normal. That was happening even without any processor load, when completely idle. The fans were usually spinning at 4000 RPM and made a lot of noise. The computer was not getting hotter than usual. When running OSX Lion there was no noise at all, with the fans at 2000 RPM almost all the time.

    I spent some time on it and found out that Precise uses a daemon to control the temperature, called macfanctld. You can use /etc/macfanctld.conf to set the configuration. I found out that the high fan speed is not due to the temperature actually getting hot, but because there are two sensors which report wrong numbers (you can check that using the 'sensors' command):

        TW0P: +129.0°C
        TCTD: +256.0°C
        TCFC: +0.0°C
        TMBS: +0.0°C

    or by setting the macfanctld log level to 2:

        Speed: 4992, *AVG: 56.9C, TC0P: 50.2C, TG0P: 51.5C, Sensors:
        TB0T:34 TB1T:34 TB2T:33 TC0C:58 TC0D:56 TC0E:59 TC0F:60 TC0P:50 TC1C:58
        TC2C:58 TC3C:58 TC4C:57 TCFC:0 TCGC:57 TCSA:53 TCTD:256 TG0D:52 TG0P:52
        THSP:42 TM0S:64 TMBS:0 TP0P:54 TPCD:60 TW0P:129 Th1H:51 Th2H:48 Tm0P:40
        Ts0P:32 Ts0S:43

    Moreover, TCTD was randomly jumping from temperatures of 0 to 256, so this may be the reason for the unjustified random fan speeds. macfanctld takes an average of the sensors, including the values above, so the actual AVG temperature used to control the fans is wrong, usually biased up, hence the high RPM and noise. The workaround is to use an option in macfanctld.conf which allows it to ignore the malfunctioning sensors:

        exclude: 13 16 21 24

    After a reboot the reported temperatures are usually normal and the fans work at reasonable speeds. I tested the response of the fans to heavy processor load by asking MATLAB to invert a 10000x10000 matrix: the AVG temperature jumped to 63 deg and the fan to a maximum of 6200 RPM, then it went back to normal. So I think it is safe so far.

    There is an expired bug about the failing sensor readings, https://bugs.launchpad.net/ubuntu/+source/linux/+bug/955538, which may be good to open again.

    My question would be: does anyone know what the failing sensors do and if there is any danger in excluding them? Maybe there is some better solution to this problem?

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available for WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser - if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and, from a security perspective, any request can get any connection (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet).

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following.

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    The list below describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

    - User authentication (default)
      Description: Default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
      Can be used with: Set Client Identifier
      Can't be used with: Proxy Session, Identity pooling, Use database credentials

    - Use database credentials
      Description: Instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set Client Identifier, Proxy Session, Identity pooling
      Can't be used with: User authentication, Multi Data Source

    - Set Client Identifier
      Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: Everything
      Can't be used with: -

    - Proxy Session
      Description: Set a light-weight proxy user associated with the connection (Oracle only).
      Can be used with: Set Client Identifier, Use database credentials
      Can't be used with: Identity pooling, User authentication

    - Identity pooling
      Description: Heterogeneous pool of connections owned by specified users.
      Can be used with: Set Client Identifier, Use database credentials
      Can't be used with: Proxy Session, User authentication, Labeling, Multi Data Source, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future).

    The subsequent articles will describe these features in more detail. Keep referring back to this list to see the big picture.
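    To make the Set Client Identifier entry above a bit more concrete: the idea is that each connection handed out of the pool gets tagged with the end user's identity so the DBMS can attribute the work to that user. Here is a rough, hand-rolled sketch of the same idea in plain JDBC - illustration only, not the WLS implementation, which does the tagging for you when the feature is enabled. The "OCSID.CLIENTID" key is the Oracle-driver-specific property name and should be checked against your driver documentation, and currentEndUser() is an assumed helper.

        import java.sql.Connection;
        import java.sql.SQLClientInfoException;
        import javax.sql.DataSource;

        // Illustration of the "set client identifier" idea: tag a pooled connection with
        // the current end user so DBMS-side auditing and tracing can see who did the work.
        public class ClientIdExample {

            // Assumed helper: however your application resolves the current end user.
            static String currentEndUser() { return "alice"; }

            public static void doWork(DataSource ds) throws Exception {
                try (Connection conn = ds.getConnection()) {
                    try {
                        // Standard JDBC 4.0 call; the property name is driver-specific.
                        conn.setClientInfo("OCSID.CLIENTID", currentEndUser());
                    } catch (SQLClientInfoException e) {
                        // Driver doesn't support this property; proceed without tagging.
                    }
                    // ... run SQL as usual on conn ...
                }
            }
        }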

    Read the article

  • Separate Action from Assertion in Unit Tests

    - by DigitalMoss
    Setup

    Many years ago I took to a style of unit testing that I have come to like a lot. In short, it uses a base class to separate out the Arrangement, Action and Assertion of the test into separate method calls. You do this by defining method calls in [Setup]/[TestInitialize] that will be called before each test run.

        [Setup]
        public void Setup()
        {
            before_each(); // arrangement
            because();     // action
        }

    This base class usually includes the [TearDown] call as well, for when you are using this setup for integration tests.

        [TearDown]
        public void Cleanup()
        {
            after_each();
        }

    This often breaks out into a structure where the test classes inherit from a series of Given classes that put together the setup (i.e. GivenFoo : GivenBar : WhenDoingBazz), with the assertions being one-line tests with a descriptive name of what they are covering.

        [Test]
        public void ThenBuzzSouldBeTrue()
        {
            Assert.IsTrue(result.Buzz);
        }

    The Problem

    There are very few tests that wrap around a single action, so you end up with lots of classes. Recently I have taken to defining the action in a series of methods within the test class itself:

        [Test]
        public void ThenBuzzSouldBeTrue()
        {
            because_an_action_was_taken();
            Assert.IsTrue(result.Buzz);
        }

        private void because_an_action_was_taken()
        {
            //perform action here
        }

    This results in several "action" methods within the test class but allows grouping of similar tests (i.e. class == WhenTestingDifferentWaysToSetBuzz).

    The Question

    Does someone else have a better way of separating out the three 'A's of testing? Readability of tests is important to me, so I would prefer that, when a test fails, the very naming structure of the tests communicates what has failed. If someone can read the inheritance structure of the tests and have a good idea why the test might be failing, then I feel it adds a lot of value to the tests (i.e. GivenClient : GivenUser : WhenModifyingUserPermissions : ThenReadAccessShouldBeTrue). I am aware of acceptance testing, but this is more at a unit (or series of units) level, with boundary layers mocked.

    EDIT: My question is asking whether there is an event or other method for executing a block of code before individual tests - something that could be applied to specific sets of tests without it being applied to all tests within a class, like [Setup] currently does. Barring the existence of this event, which I am fairly certain doesn't exist, is there another method for accomplishing the same thing? Using [Setup] for every case presents a problem either way you go. Something like [Action("Category")] (a setup method that applies to specific tests within the class) would be nice, but I can't find any way of doing this.

    Read the article

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd-year programming student (I just finished my first year, so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++, I know about general inheritance, polymorphism, overloading operators, iterators, a little bit about templates (only class and function templates), etc., but not the more advanced topics like linked lists and other sequential containers (containers in general, I guess), enumerations, most of the standard library (other than things like strings and vectors), and probably a bunch of other stuff I don't even know about yet.

    I subscribed to Unreal Engine 4 as I was very intrigued by their Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. Of course my end goal in doing this programming program is to eventually go into game/graphics programming. Since it's my summer off, I thought what better way than to actually apply some of my skills to a personal project so I have a firmer understanding of C++ beyond what my professors tell me. My questions are:

    - What would be the best way to start off making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially as someone who is still learning C++?
    - Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me, because before that I thought the UE engine was pretty much pigeonholed into being for FPS games.
    - What should my expectations be going into UE4 and a game engine for the first time? (UE4 will be my first foray into making a game.)
    - What can I expect to gain from making things in UE4, in terms of making games and in terms of further fleshing out my knowledge of C++?
    - Would you recommend I start off 100% using C++ for scripting, or using the visual blueprints?
    - Since I'm not a designer, how would I be able to add objects and designs to my game?
    - For someone at my level, is retaining the UE4 subscription worth it, or is it better to cancel and resubscribe when I learn enough about UE4 and C++?
    - Lastly, is there anything to be gained in terms of knowledge/insight from looking at the source code for UE4? I opened it in VS2013, but noticed that most of the files were C# files and not cpp's.

    Thanks in advance for taking the time to answer.

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height.

    Naturally, after examining the brief paragraph of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w;
        reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, and the water is just rendered as if there is nothing there, or just the sky. Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline.

    My problem is that I cannot really pinpoint the logic behind it. Projecting the actual world space position of a point of the wave/water geometry and then multiplying by -.5f, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?).

    Clues in the code seem to indicate it was derived with trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w;

    And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary selection of -.8f and -0.15f lead me to conclude that this might be a combination of heuristics/guess work. Is there a logical underpinning to this or is it just a desperate hack?

    Here is an exaggeration of the initial problem which the code fragment fixes, observed on the lowest tessellation level. Hopefully, it might spark an idea I'm missing. The -.8f might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render, and -.15f might be the lower bound, a security measure.
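    One way to read the fragment (my reading, not anything stated in the SDK documentation): writing ndc_y(p) for the projected, perspective-divided y of a world-space point p under the model-view-projection matrix M, i.e.

        \mathrm{ndc}_y(p) = \frac{(Mp)_y}{(Mp)_w}, \qquad
        c = \tfrac{1}{2}\Big(\mathrm{ndc}_y(x,\,-0.8,\,z) - \mathrm{ndc}_y(x,\,y_{\mathrm{wave}},\,z)\Big)

    the code computes c as half the screen-space vertical offset between the actual wave vertex and the same vertex clamped to a fixed reference water height of -0.8. That offset is then added to the reflection disturbance (with -0.15 as a lower clamp), which is consistent with nudging the reflection lookup back toward where it would land for flat water. Whether the 1/2 factor and the two constants are principled or tuned by eye is exactly what the question asks, but the structure itself is just a screen-space height difference.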

    Read the article

  • Java @Contented annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time.

    The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time.

    More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality: fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard.

    Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
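    As a concrete illustration of the kind of guidance the annotation enables, here is a hedged sketch; the class and field names are invented, and the annotation eventually shipped in JDK 8 as sun.misc.Contended (application code additionally needs -XX:-RestrictContended), so adjust the import for whatever release you are on.

        import sun.misc.Contended;   // JDK 8 location; requires -XX:-RestrictContended for non-JDK code

        // Sketch: isolate a hot, frequently written field from cold, read-mostly fields
        // so writers don't keep invalidating the cache line that readers are using.
        public class RequestStats {

            // Read-mostly configuration: cheap to share across threads.
            private final String serviceName;
            private final int maxQueueLength;

            // Hot field, written on every request, potentially by many threads.
            // @Contended asks the JVM to pad/isolate it onto its own cache line(s).
            // (The increment below is not atomic; a real counter would use an atomic
            // updater -- the point here is only field layout, not synchronization.)
            @Contended
            private volatile long requestCount;

            public RequestStats(String serviceName, int maxQueueLength) {
                this.serviceName = serviceName;
                this.maxQueueLength = maxQueueLength;
            }

            public void recordRequest() { requestCount++; }
            public long requestCount()  { return requestCount; }
            public String serviceName() { return serviceName; }
            public int maxQueueLength() { return maxQueueLength; }
        }

    The annotation is purely a layout hint: it changes where the field lands relative to its neighbors, not the memory model or the atomicity of updates.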

    Read the article

  • Crash when using datablocks

    - by scorcher24
    I have really thoroughly searched the net and could not find any solution for this, so I ask for help here. Anyway, I have this datablock in datablocks.cs:

        datablock t2dSceneObjectDatablock(EnemyShipConfig)
        {
            canSaveDynamicFields = "1";
            Layer = "3";
            size = "64 64";
            CollisionActiveSend = "1";
            CollisionActiveReceive = "1";
            CollisionCallback = true;
            CollisionLayers = "3";
            CollisionDetectionMode = "POLYGON";
            CollisionPolyList = "0.00 -0.791 0.756 0.751 -0.746 0.732";
            UsesPhysics = "0";
            Rotation = "-90";
            WorldLimitMode = "KILL";
            WorldLimitMax = "880 360";
            WorldLimitMin = "-765 -436";
            minFireRate = "2000";
            maxFireRate = "1200";
            laserSpeed = "800";
            minSpeed = "100";
            maxSpeed = "150";
        };

    This is an exact reproduction of an object that I have manually edited in the editor. So far, I just used clone() to get as many enemies as I need while the original was out of sight. It is an r-type style shooter, so I need a variable number of enemies. Since clone() spams my log, I decided to use datablocks, since it is also more flexible. That's what I get when I use clone():

        Con::execute - 0 has no namespace: onRemoveFromScene

    However, once spawning begins, my game freezes and crashes:

        function SpawnEnemy()
        {
            //%spawnedEnemy = EnemyShipMaster.clone(true);
            %spawnedEnemy = new t2dStaticSprite()
            {
                class = "EnemyShip";
                sceneGraph = $global_sceneGraph;
                datablock = "EnemyShipConfig";
                imageMap = "starshipImageMap";
                layer = 3;
            };

            %speed = getRandom(%spawnedEnemy.minSpeed, %spawnedEnemy.maxSpeed);
            %y = getRandom(-320, 320);

            // Set Properties
            %spawnedEnemy.setPositionX(700);
            %spawnedEnemy.setPositionY(%y);
            %spawnedEnemy.setVisible(true);
            %spawnedEnemy.setLinearVelocityX( -%speed );
            %spawnedEnemy.setTimerOn( getRandom( %spawnedEnemy.maxFireRate, %spawnedEnemy.minFireRate ) );
        }

        // Definition of $global_sceneGraph from game.cs:
        $global_sceneGraph = sceneWindow2D.loadLevel(%level);

    As I said, it works fine when I use clone() (which is commented out here), but my log gets spammed. I really hope someone can shed some light on this for me; it's driving me crazy.

    Read the article

  • Is this an effective monetization method for an Android game? [on hold]

    - by Matthew Page
    The short version: I plan to make an Android puzzle game where the user tries to get 3-6 numbers to their predetermined goal numbers. The free version of the app will have three predetermined levels (easy, medium, hard). The full version ($0.99, probably) will have a level generator with unlimited easy, medium, or hard levels, as well as a custom difficulty option where users can set specific values for the number of numbers to equate to their goal, the number of buttons to use, etc. Users will also have the option to get a one-time "hint" for a fee of $0.49, or unlimited hints for a one-time fee of $2.99.

    The long version:

    Mechanics of Game and Victory
    The application is a number puzzle. When the user begins a new game, depending on the input by the user, between 3 and 6 numbers show up at the top of the screen, and between 3 and 6 buttons show up at the bottom of the screen. The buttons all have two options: to increase every number the same way, or decrease every number the same way. The buttons either use addition/subtraction, multiplication/division, or exponents/roots, all depending on the number displayed on the button. Addition buttons are green, multiplication buttons are blue, and exponential buttons are red. The user wins when all of the numbers displayed on the screen equate to their goal numbers, displayed below each number.

    Monetization
    If the user is playing the full (priced) version of the app, upon the start of the game the user will be presented with a dialogue asking for the number of buttons and the number of numbers to equate in the game. Then, based on the user input, a random puzzle will be generated. If the user is playing the free version of the app, the user will be asked to play either an "easy", "hard", or "expert" puzzle. A predetermined puzzle from each category will be used in the game. If the user has played that puzzle before, a dialogue will say so and advertise the full version of the app. The full version of the app will also be advertised upon the successful or unsuccessful completion of a puzzle. Upon exiting this advertisement, another full-screen advertisement from a third party will appear. Also, the solution to the puzzle will be stored by the program, and for a small fee the user can see a hint toward the solution. In the free version of the app, the user may use their first hint for free. The user can also get unlimited hints for a slightly larger fee.

    Is this an effective monetization method?

    Read the article

  • Platformer Collision Error [closed]

    - by Connor
    I am currently working on a relatively simple platform game that has an odd bug. You start the game by falling onto the ground (you spawn a few blocks above the ground), but when you land, your feet get stuck INSIDE the world and you can't move until you jump. Here's what I mean: the player's feet are a few pixels below the ground level. However, this problem only occurs in 3 places throughout the map, and only in those 3 select places. I'm assuming that the problem lies within my collision detection code, but I'm not entirely sure, as I don't get an error when it happens.

        public boolean isCollidingWithBlock(Point pt1, Point pt2) {
            //Checks x
            for(int x = (int) (this.x / Tile.tileSize); x < (int) (this.x / Tile.tileSize + 4); x++) {
                //Checks y
                for(int y = (int) (this.y / Tile.tileSize); y < (int) (this.y / Tile.tileSize + 4); y++) {
                    if(x >= 0 && y >= 0 && x < Component.dungeon.block.length && y < Component.dungeon.block[0].length) {
                        //If the block is not air
                        if(Component.dungeon.block[x][y].id != Tile.air) {
                            //If the player is in contact with point one or two on the block
                            if(Component.dungeon.block[x][y].contains(pt1) || Component.dungeon.block[x][y].contains(pt2)) {
                                //Checks for specific blocks
                                if(Component.dungeon.block[x][y].id == Tile.portalBlock) {
                                    Component.isLevelDone = true;
                                }
                                if(Component.dungeon.block[x][y].id == Tile.spike) {
                                    Health.health -= 1;
                                    Component.isJumping = true;
                                    if(Health.health == 0) {
                                        Component.isDead = true;
                                    }
                                }
                                return true;
                            }
                        }
                    }
                }
            }
            return false;
        }

    What I'm asking is how I would fix the problem. I've looked over my code for quite a while and I'm not sure what's wrong with it. Also, if there's a more efficient way to do my collision checking then please let me know! I hope that is enough information; if it's not, just tell me what you need and I'll be sure to add it. Thank you!

    [EDIT] Jump code:

        if(!isJumping && !isCollidingWithBlock(new Point((int) x + 2, (int) (y + height)), new Point((int) (x + width + 2), (int) (y + height)))) {
            y += fallSpeed;
            //sY is the screen's Y. The game is a side-scroller
            Component.sY += fallSpeed;
        } else {
            if(Component.isJumping) {
                isJumping = true;
            }
        }

        if(isJumping) {
            if(!isCollidingWithBlock(new Point((int) x + 2, (int) y), new Point((int) (x + width + 2), (int) y))) {
                if(jumpCount >= jumpHeight) {
                    isJumping = false;
                    jumpCount = 0;
                } else {
                    y -= jumpSpeed;
                    Component.sY -= jumpSpeed;
                    jumpCount += 1;
                }
            } else {
                isJumping = false;
                jumpCount = 0;
            }
        }
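    One common way to deal with the "feet inside the ground" symptom, offered as a sketch rather than a drop-in fix for this code base (the names mirror the question's, but fields like onGround and the isSolidAt lookup are assumptions): since y moves by a full fallSpeed per frame, the player can overshoot the floor by a few pixels, and move-then-stop logic leaves the feet inside the tile. Instead, after detecting the collision, snap the feet to the top edge of the tile that was hit.

        // Sketch: resolve a downward move so the player never ends up inside a solid tile.
        // Assumes Tile.tileSize, x, y, width, height, Component.sY and an isSolidAt(col, row) lookup.
        void moveDown(double fallSpeed) {
            double newY = y + fallSpeed;

            // Columns covered by the player's feet, and the row the feet would end up in.
            int leftCol  = (int) ((x + 2) / Tile.tileSize);
            int rightCol = (int) ((x + width + 2) / Tile.tileSize);
            int footRow  = (int) ((newY + height) / Tile.tileSize);

            for (int col = leftCol; col <= rightCol; col++) {
                if (isSolidAt(col, footRow)) {
                    // Snap the feet exactly onto the top edge of the tile instead of sinking into it.
                    newY = footRow * Tile.tileSize - height;
                    onGround = true;      // assumed flag so the jump code knows we landed
                    break;
                }
            }

            double dy = newY - y;
            y = newY;
            Component.sY += dy;           // keep the scrolling camera in sync with the actual move
        }

    The reason it only shows up in 3 places on the map would then simply be wherever the spawn height makes (y + fallSpeed) land a few pixels past a tile boundary.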

    Read the article

  • Is there a factory pattern to prevent multiple instances for same object (instance that is Equal) good design?

    - by dsollen
    I have a number of objects storing state. There are essentially two types of fields: the ones that uniquely define what the object is (what node, what edge, etc.), and the others that store state describing how these things are connected (this node is connected to these edges, this edge is part of these paths, etc.). My model updates the state variables using package methods, so all these objects act as immutable to anyone not in Model scope. All objects extend one base type.

    I've toyed with the idea of a Factory approach which accepts a Builder object and constructs the applicable object. However, if an instance of the object already exists (i.e. it would return true if I created the object defined by the builder and passed it to the equals method of the existing instance), the factory returns the current object instead of creating a new instance. Because the equals method would only compare what uniquely defines the type of object (this is node A to node B) but won't check the dynamic state stuff (node A is currently connected to nodes C and E), this would be a way of ensuring that anyone who wants my node A automatically knows its state connections. More importantly, it would prevent aliasing nightmares where someone passes around an instance of node A with different state than the node A in my model has.

    I've never heard of this pattern before, and it's a bit odd. I would have to do some overriding of serialization methods to make it work (ensure that when I read in a serialized object I add it to my factory's list of known instances, and/or return an existing instance in its place), as well as use a WeakHashMap as if it were a weak HashSet to know whether an instance exists, without worrying about a quasi-memory leak occurring. I don't know if this is too confusing or prone to its own obscure bugs.

    One thing I know is that plugins interface with the lowest-level hardware. The plugins have to be able to return state that is different from my memory, to tell my memory when its own state is inconsistent. I believe this is possible despite their fetching objects that exist in my memory; we allow building of objects without checking their consistency with the model until addToModel is called anyway, and the existing plugin design was written before all this extra state existed and worked fine without ever being aware of it.

    Should I just be using some other design to avoid this craziness? (I have another question to that effect that I'm posting.)
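    For reference, what is being described is usually called interning (or a canonicalizing, instance-controlled factory); String.intern and Integer.valueOf do the same thing for their types. Below is a minimal sketch of the weak-map variant, with an invented Node type standing in for the real classes: the identity fields take part in equals/hashCode, the mutable connection state does not.

        import java.lang.ref.WeakReference;
        import java.util.Map;
        import java.util.Objects;
        import java.util.WeakHashMap;

        // Canonicalizing ("interning") factory sketch: equal identity fields always yield
        // the same live instance, and instances can still be garbage collected once no one
        // outside the factory holds them.
        public final class NodeFactory {

            public static final class Node {
                private final String from, to;    // fields that define *what* the node is
                private String connections;       // mutable model state, not part of equals

                Node(String from, String to) { this.from = from; this.to = to; }

                void setConnections(String c) { this.connections = c; }   // model-scope mutation
                public String connections()   { return connections; }

                @Override public boolean equals(Object o) {
                    return o instanceof Node && ((Node) o).from.equals(from) && ((Node) o).to.equals(to);
                }
                @Override public int hashCode() { return Objects.hash(from, to); }
            }

            // Weak keys AND weak values: storing the Node itself as the value would keep the
            // key strongly reachable forever and defeat the WeakHashMap, so wrap the value.
            private final Map<Node, WeakReference<Node>> canonical = new WeakHashMap<>();

            public synchronized Node node(String from, String to) {
                Node candidate = new Node(from, to);
                WeakReference<Node> ref = canonical.get(candidate);   // uses equals/hashCode above
                Node existing = (ref != null) ? ref.get() : null;
                if (existing != null) {
                    return existing;               // everyone sees the same instance, same state
                }
                canonical.put(candidate, new WeakReference<>(candidate));
                return candidate;
            }
        }

    On the serialization point, a readResolve() on Node that routes back through the factory swaps a freshly deserialized copy for the canonical instance, which is the override alluded to in the question.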

    Read the article

  • SharePoint 2010 HierarchicalConfig Caching Problem

    - by Damon
    We've started using the Application Foundations for SharePoint 2010 in some of our projects at work, and I came across a nasty issue with the hierarchical configuration settings. I have some settings that I am storing at the Farm level, and as I was testing my code it seemed like the settings were not being saved - at least that is what appeared to be the case at first. However, I happened to reset IIS and the settings suddenly appeared. Immediately, I figured that it must be a caching issue and dug into the code base. I found that there was a 10 second caching mechanism in the SPFarmPropertyBag and the SPWebAppPropertyBag classes. So I ran another test where I waited 10 seconds to make sure that enough time had passed to force the caching mechanism to reset the data. After 10 minutes the cache had still not cleared. After digging a bit further, I found a double-checked lock that looked a bit off in the GetSettingsStore() method of the SPFarmPropertyBag class:

        if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds > cacheInterval))
        {
            //Need to exist so don't deadlock.
            rrLock.EnterWriteLock();
            try
            {
                //make sure first another thread didn't already load...
                if (_settingStore == null)
                {
                    _settingStore = WebAppSettingStore.Load(this.webApplication);
                    lastLoad = DateTime.Now;
                }
            }
            finally
            {
                rrLock.ExitWriteLock();
            }
        }

    What ends up happening here is that the outer check determines whether the _settingStore is null or the cache has expired, but the inner check only checks whether the _settingStore is null (which is never the case after the first time it's been loaded). Ergo, the cached settings are never reset. The fix is really easy: just add the cache check back into the inner if statement.

        //make sure first another thread didn't already load...
        if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds > cacheInterval))
        {
            _settingStore = WebAppSettingStore.Load(this.webApplication);
            lastLoad = DateTime.Now;
        }

    And then it starts working just fine, as long as you wait at least 10 seconds for the cache to clear.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people were really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

    My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won.

    During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool.

    I think the main issue for someone who doesn't work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

    There are options. You can download a couple of free tools (CA has a community edition of ER-WIN, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but it was changed so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration that was longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of event data that had been collected, and that it would be great to be able to reduce the collection overhead on the server if I could still get all activity from some batches.

    In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here). This isn't exactly what I want though, as there would certainly be events from a particular batch that wouldn't pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection, and still get all the events in batches for investigation.

        CREATE EVENT SESSION session_10_percent ON SERVER
        ADD EVENT sqlserver.sql_statement_starting(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info (
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info_external (
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlserver.sql_statement_completed(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
        ADD TARGET ring_buffer
        WITH (MAX_DISPATCH_LATENCY=30 SECONDS,TRACK_CAUSALITY=ON)
        GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.

    Read the article

  • Is this a pattern? Should it be?

    - by Arkadiy
    The following is more of a statement than a question - it describes something that may be a pattern. The question is: is this a known pattern? Or, if it's not, should it be?

    I've had a situation where I had to iterate over two dissimilar multi-layer data structures and copy information from one to the other. Depending on the particular use case, I had around eight different kinds of layers, combined in about eight different combinations:

        A-B-C
        B-C
        A-C
        D-E
        A-D-E

    and so on. After a few unsuccessful attempts to factor out the repetition of the per-layer iteration code, I realized that the key difficulty in this refactoring was the fact that the bottom level needed access to data gathered at higher levels. To explicitly accommodate this requirement, I introduced an IterationContext class with a number of get() and set() methods for accumulating the necessary information. In the end, I had the following class structure:

        class Iterator {
            virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const = 0;
        };

        class RecursingIterator : public Iterator {
            RecursingIterator(const Iterator &below);
        };

        class IterateOverA : public RecursingIterator {
            virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const {
                // Iterate over members in dataStructure1
                // locate corresponding item in dataStructure2 (passed via context)
                // and set it in the context
                // invoke the sub-iterator
            }
        };

        class IterateOverB : public RecursingIterator {
            virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const {
                // iterate over members of dataStructure2 (from context)
                // set dataStructure2's item in the context
                // locate corresponding item in dataStructure2 (passed via context)
                // invoke the sub-iterator
            }
        };

        void main() {
            class FinalCopy : public Iterator {
                virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const {
                    // copy data from structure 1 to structure 2 in the context,
                    // using some data from higher levels as needed
                }
            };

            IterationContext ctx(dataStructure2);
            IterateOverA(IterateOverB(FinalCopy())).iterateOver(dataStructure1, ctx);
        }

    It so happens that dataStructure1 is a uniform data structure, similar to XML DOM in that respect, while dataStructure2 is a legacy data structure made of various structs and arrays. This allows me to pass dataStructure1 outside of the context for convenience. In general, either side of the iteration or both sides may be passed via the context, as convenient.

    The key situation points are:

    - complicated code that needs to be invoked in "layers", with multiple combinations of layer types possible
    - at the bottom layer, the information from the top layers needs to be visible.

    The key implementation points are:

    - use of a context class to access the data from all levels of the iteration
    - complicated iteration code encapsulated in the implementation of a pure virtual function
    - two interfaces - one aware of the underlying iterator, one not aware of it
    - use of const & to simplify the usage syntax.

    Read the article

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it.

    And, as with many embedded projects, it all begins with hardware. So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo.

    Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But we'll just use jumper wires in this example. So, to connect jumper wires to the header you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires.

    But, as with many things in life, it's not all that simple. The RPi folks have made things a little tricky. There are two revisions of the P1 header pinout: one for older boards (RPi boards made before Sep 2012), which is called Revision 1, and one for those fancy 512MB boards that were shipped after Sep 2012, which is called Revision 2. So, first make sure which board you have: either you have the Model A or B with 128MB or 256MB built before Sep 2012 and you need to look at the pinout for Rev. 1, or you have the Model B with 512MB and need to look at Rev. 2.

    That's all you need for now. More to come...

    Hinkmond
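    To give a flavor of where this is heading once the wiring is in place, here is a rough sketch of Java that can drive one of those header pins through the Linux sysfs GPIO interface. The pin number and file paths are assumptions for illustration (and the process needs permission to write under /sys/class/gpio); the actual series may well use a different pin or a different approach entirely.

        import java.io.FileWriter;
        import java.io.IOException;

        // Sketch: blink an LED wired to a GPIO pin by writing to the Linux sysfs GPIO files.
        public class BlinkLed {

            private static void write(String path, String value) throws IOException {
                FileWriter w = new FileWriter(path);
                try {
                    w.write(value);
                } finally {
                    w.close();
                }
            }

            public static void main(String[] args) throws Exception {
                String pin = "17";                                          // assumed GPIO number
                write("/sys/class/gpio/export", pin);                       // expose the pin to userspace
                write("/sys/class/gpio/gpio" + pin + "/direction", "out");  // configure it as an output

                for (int i = 0; i < 10; i++) {                              // blink ten times
                    write("/sys/class/gpio/gpio" + pin + "/value", "1");    // LED on
                    Thread.sleep(500);
                    write("/sys/class/gpio/gpio" + pin + "/value", "0");    // LED off
                    Thread.sleep(500);
                }

                write("/sys/class/gpio/unexport", pin);                     // clean up
            }
        }

    The explicit try/finally (instead of try-with-resources) is deliberate, since the small embedded JVMs discussed in this series may not offer the newest language level.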

    Read the article

  • "Expecting A Different Result?" (2 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    Many companies already have some type of customer experience initiative in process or one that could be framed as such. The challenge is that the initiatives too often are started in a department silo, don't have the right level of executive sponsorship, or have been initiated without the necessary insight and strategic business alignment. You can't keep doing the same things, give it a customer experience name, and expect a different result. You can't continue to just compete on price or features - that is not sustainable in commoditized markets. And ultimately, investing in technology alone doesn't solve customer experience problems; it just adds to the complexity of them.

    You need a customer experience strategy and approach on how to execute a customer-centric worldview within your business. To develop this, you must take an outside in journey on how your customers are interacting with your business to establish a benchmark of your customers' experiences. Then you must get cross-functional alignment on what you are trying to achieve, near, mid, and long term. Your execution of that strategy should be based on a customer experience approach:

    Understand your customer: You need to capture the insights across interactions, channels (including social), and personas to better understand whom to serve, how to serve them, and when to serve them. Not all experiences or customers are equal, so leverage this insight to understand the strategic business objectives you need to address. Then determine which experiences can be improved immediately and which over time to get the result you need.

    Empower your ecosystem: You need to align your front-line employees with your strategy and give them the power, insight, and tools that allow them to cultivate a culture around strengthening the relationships with your customers. You also need to provide the transparency, access, and collaboration that enable your customers and partners to self serve and self solve and to share with ease.

    Adapt your business: You need to enable the discipline of agility within your organization and infrastructure so that you can innovate, tailor, and personalize experiences. This needs to be done both reactively from insight and proactively in real time so you can stay ahead of shifting market trends and evolving consumer behaviors.

    No longer will the old approaches provide the same returns. To compete, differentiate, and win in a world where the customer has the power, you must execute a strategy that is sure to deliver a better brand experience for your customers.

    Note: This is Part 2 in a three-part series. Part 1 is here. Stop back for Part 3 on November 28.

    Read the article

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages.

    I have come across the fact that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect (is that 301 or 302?) it to a non-subdomain of sorts, as in redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it. I am unsure.

    There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you go from the root leftwards in folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root.

    Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use an image as a logo. It makes more sense to me that there is one image, rather than each having a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). Finally, maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still has a default 404 when either the subdomain does not exist or the page does not exist under a valid blogger.

    I am confused about what the directory structure for this should be. To summarize: should I have global files not in a subdirectory, and how; how should www. be handled (or how should a non-www or other subdomain be handled); and what is the pattern for root/subdomain, as well as subdomains within subdomains (order-wise)? Sorry this is multiple questions, but I feel at the root they are all related to the directory.
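    One point that may untangle most of this: as far as HTTP is concerned, subdomains are not nested folders at all. The client sends the full host name in the Host header, and the server is free to map each host name to whatever directory it likes, including mapping www.example.com and example.com to the same one (or answering one with a 301 redirect to the other). A toy sketch of that mapping for a homemade server follows; the directory layout and host names are invented for the example, not a required structure.

        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch: resolve the Host header of an incoming request to a document root.
        public class VirtualHosts {

            private final Map<String, Path> docRoots = new HashMap<String, Path>();
            private final Path fallback = Paths.get("/srv/www/default");  // global 404 page, shared assets

            public VirtualHosts() {
                Path main = Paths.get("/srv/www/example.com");
                docRoots.put("example.com", main);
                docRoots.put("www.example.com", main);                    // same content, or send a 301 instead
                docRoots.put("blog.example.com", Paths.get("/srv/www/blog"));
                docRoots.put("sub2.sub1.example.com", Paths.get("/srv/www/sub1/sub2")); // nesting is your choice
            }

            // The Host header may arrive as "www.example.com:8080"; strip any port before the lookup.
            public Path docRootFor(String hostHeader) {
                String host = hostHeader.toLowerCase();
                int colon = host.indexOf(':');
                if (colon >= 0) {
                    host = host.substring(0, colon);
                }
                Path root = docRoots.get(host);
                return (root != null) ? root : fallback;
            }
        }

    Shared assets (a common logo, common CSS, the global 404 page) can then live under a single root, often the apex domain or a dedicated static host, and every subdomain references them by absolute URL so only one copy exists on disk.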

    Read the article

  • Entity Framework 5, separating business logic from model - Repository?

    - by bnice7
    I am working on my first public-facing web application, and I'm using MVC 4 for the presentation layer and EF 5 for the DAL. The database structure is locked, and there are moderate differences between how the user inputs data and how the database itself gets populated. I have done a ton of reading on the repository pattern (which I have never used), but most of my research pushes me away from using it, since it supposedly creates an unnecessary level of abstraction for the latest versions of EF, where repositories and unit-of-work are already built in. My initial approach is to simply create a separate set of classes for my business objects in the BLL that can act as an intermediary between my controllers and the DAL. Here's an example class:

      public class MyBuilding
      {
          public int Id { get; private set; }
          public string Name { get; set; }
          public string Notes { get; set; }

          private readonly Entities _context = new Entities(); // Is this thread safe?
          private static readonly int UserId = WebSecurity.GetCurrentUser().UserId;

          public IEnumerable<MyBuilding> GetList()
          {
              IEnumerable<MyBuilding> buildingList =
                  from p in _context.BuildingInfo
                  where p.Building.UserProfile.UserId == UserId
                  select new MyBuilding { Id = p.BuildingId, Name = p.BuildingName, Notes = p.Building.Notes };
              return buildingList;
          }

          public void Create()
          {
              var b = new Building { UserId = UserId, Notes = this.Notes };
              _context.Building.Add(b);
              _context.SaveChanges();

              // Set the building ID
              this.Id = b.BuildingId;

              // Seed 1-to-1 tables with a reference to the new building
              _context.BuildingInfo.Add(new BuildingInfo { Building = b });
              _context.GeneralInfo.Add(new GeneralInfo { Building = b });
              _context.LocationInfo.Add(new LocationInfo { Building = b });
              _context.SaveChanges();
          }

          public static MyBuilding Find(int id)
          {
              using (var context = new Entities()) // Is this OK to do in a static method?
              {
                  var b = context.Building.FirstOrDefault(p => p.BuildingId == id && p.UserId == UserId);
                  if (b == null) throw new Exception("Error: Building not found or user does not have access.");
                  return new MyBuilding { Id = b.BuildingId, Name = b.BuildingInfo.BuildingName, Notes = b.Notes };
              }
          }
      }

    My primary concern: is the way I am instantiating my DbContext as a private field thread-safe, and is it safe to have a static method that instantiates a separate DbContext? Or am I approaching this all wrong? I am not opposed to reading up on the repository pattern if I am taking the wrong approach here.

    Read the article

  • How can I tweak this A* search pathfinding algorithm to handle different terrain movement values?

    - by user422318
    I'm creating a 2D map-based action game with an interaction design similar to Diablo II; in other words, the player clicks around a map to move their character. I just finished player movement and am moving on to pathfinding. In the game, enemies should charge the player's character. There are also five different terrain types that give different movement bonuses, and I want the AI to take advantage of these terrain bonuses as it tries to reach the player. I was told to check out the A* search algorithm (http://en.wikipedia.org/wiki/A*_search_algorithm). I'm doing this game in HTML5 and JavaScript, and found a JavaScript version here: http://www.briangrinstead.com/blog/astar-search-algorithm-in-javascript. I'm trying to figure out how to tweak it, though. Below are my ideas about what I need to change; what else do I need to worry about?

    When I create a graph, I will need to initialize the 2D array I pass in based on a traversal of the map that corresponds to the different terrain types.

    In graph.js: the "GraphNodeType" definition needs to be modified to handle the 5 terrain types. There will be no walls.

    In astar.js: the g and h scoring will need to be modified. How should I do this?

    In astar.js: isWall() should probably be removed; my game doesn't have walls.

    In astar.js: I'm not sure what this is. I think it indicates a node that isn't valid to be processed. When would that happen, though?

    At a high level, how do I change this algorithm from "oh, is there a wall there?" to "will this terrain get me to the player faster than the terrain around me?" Because of time, I'm also debating reusing my Bresenham algorithm for the enemies; unfortunately, the different terrain movement bonuses wouldn't be used by the AI, which would make the game suck. :/ I'd really like to have this in for the prototype, but I'm not a developer by trade nor am I a computer scientist. :D If you know of any code that does what I'm looking for, please share! Sanity-check tips for this are also appreciated.
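    The change the g/h question is really asking for is to make the cost of stepping onto a tile depend on its terrain instead of on a wall check. The sketch below is a generic, self-contained grid A* in TypeScript, not the briangrinstead astar.js API; the terrain names and cost numbers are made-up assumptions purely to show where a terrain cost replaces the usual uniform step cost, and the heuristic is scaled by the cheapest terrain cost so it stays admissible.

      // Terrain names and costs are illustrative assumptions, not values from the actual game.
      type Terrain = "road" | "grass" | "sand" | "forest" | "swamp";

      const TERRAIN_COST: Record<Terrain, number> = {
        road: 0.5,   // movement bonus: cheaper than a normal step
        grass: 1.0,
        sand: 1.5,
        forest: 2.0,
        swamp: 3.0,  // movement penalty: expensive to cross
      };

      const CHEAPEST = Math.min(...Object.values(TERRAIN_COST));

      interface Point { x: number; y: number; }

      function findPath(map: Terrain[][], start: Point, goal: Point): Point[] | null {
        // Manhattan distance scaled by the cheapest terrain keeps the heuristic admissible.
        const h = (p: Point) => CHEAPEST * (Math.abs(p.x - goal.x) + Math.abs(p.y - goal.y));
        const key = (p: Point) => `${p.x},${p.y}`;

        const open: Point[] = [start];
        const gScore = new Map<string, number>([[key(start), 0]]);
        const fScore = new Map<string, number>([[key(start), h(start)]]);
        const cameFrom = new Map<string, Point>();

        while (open.length > 0) {
          // Naive "priority queue": take the open node with the lowest f. Fine for a prototype.
          open.sort((a, b) => (fScore.get(key(a)) ?? Infinity) - (fScore.get(key(b)) ?? Infinity));
          const current = open.shift()!;

          if (current.x === goal.x && current.y === goal.y) {
            // Walk cameFrom back to the start to reconstruct the path.
            const path: Point[] = [current];
            let k = key(current);
            while (cameFrom.has(k)) {
              const prev = cameFrom.get(k)!;
              path.unshift(prev);
              k = key(prev);
            }
            return path;
          }

          for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
            const nx = current.x + dx;
            const ny = current.y + dy;
            if (ny < 0 || ny >= map.length || nx < 0 || nx >= map[ny].length) continue;

            // This line replaces "is there a wall there?": the step cost is the
            // terrain cost of the tile being entered.
            const tentativeG = (gScore.get(key(current)) ?? Infinity) + TERRAIN_COST[map[ny][nx]];
            const nKey = `${nx},${ny}`;
            if (tentativeG < (gScore.get(nKey) ?? Infinity)) {
              cameFrom.set(nKey, { x: current.x, y: current.y });
              gScore.set(nKey, tentativeG);
              fScore.set(nKey, tentativeG + h({ x: nx, y: ny }));
              if (!open.some(p => p.x === nx && p.y === ny)) open.push({ x: nx, y: ny });
            }
          }
        }
        return null; // no path exists
      }

    Building the map argument is exactly the "traversal of the map" step mentioned above: a 2D array of terrain values, one per tile, in place of graph.js wall/open flags.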

    Read the article

  • Determining whether a visitor reached two different pages in one visit

    - by Shaun
    I have a funnel that I would like to track. Tracking this funnel won't work with the default "goal funnel" tracking in Google because I am mixing events and pageviews. As such, I've created a series of reports:

    Visits to demo pages - an inclusion filter on "Page".
    Triggers an event on these pages - an inclusion filter on "Page" and "Event Category".
    Does not bounce - an inclusion filter on "Page" and an exclusion filter on "Exit Page" for these same pages.
    Reach our storefront - ??
    Purchase something - an inclusion filter on "Page" and a report that shows "Transactions".

    At a basic level, I need to track users who reached demo pages and then reached any page on our store. Intuitively, I created a segment, used two inclusive "Page" filters (one for the demo pages and one for any page in our store), and combined them with an "AND" operator. I thought this was working until I tried to do the same thing in a dashboard widget and in a custom report; in those areas, I got zero results. I figured this might be because widgets and custom report filters function differently from segment filters (the options are different for all of them), so I tried applying my "demo page && store page" segment to a report that gave me a general page list. All I saw was a list of the specific pages. I tried simplifying things by creating a custom report that showed all visits to store pages, then applied a segment that filtered for users who visited demo pages. This got me the same numbers as my "demo page && store page" segment, but showed a list of demo pages. This has led me to believe that the "demo page && store page" segment approach and the "demo segment && store report" approach behave the same way functionally. However, this experience has left me questioning whether they're giving me what I want.

    Are these methods showing me all users who reached both sets of pages? Is there a better/easier/more standard way of doing this aside from looking at visitor flow reports? I'm trying to avoid a combination of custom variables/events and the horizontal funnel approach, since it would consume a large number of our limited goals and seems more complicated than necessary for tracking this funnel.
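    As a sanity check on what "reached both sets of pages in one visit" ought to count, it can help to state the desired number outside the Google Analytics interface. The sketch below is not GA code or its reporting API; it is a hypothetical computation over exported hit data (a visit id plus a page path per pageview) that counts visits containing at least one demo pageview and at least one store pageview, which is the figure the combined segment is meant to approximate. The URL patterns are stand-ins, not the real site's paths.

      // Hypothetical exported hit data: one row per pageview.
      interface Hit { visitId: string; page: string; }

      // Assumed URL patterns for the two page sets (placeholders).
      const isDemoPage = (page: string) => page.startsWith("/demo/");
      const isStorePage = (page: string) => page.startsWith("/store/");

      // Count visits that contain at least one demo pageview AND at least one store pageview.
      function visitsReachingBoth(hits: Hit[]): number {
        const sawDemo = new Set<string>();
        const sawStore = new Set<string>();
        for (const hit of hits) {
          if (isDemoPage(hit.page)) sawDemo.add(hit.visitId);
          if (isStorePage(hit.page)) sawStore.add(hit.visitId);
        }
        let both = 0;
        for (const id of sawDemo) {
          if (sawStore.has(id)) both += 1;
        }
        return both;
      }

      // Example: only visit "a" reaches both sets, so the result is 1.
      // visitsReachingBoth([
      //   { visitId: "a", page: "/demo/widget" },
      //   { visitId: "a", page: "/store/cart" },
      //   { visitId: "b", page: "/demo/widget" },
      // ]);

    If the AND-combined segment returns this visit count rather than a page list restricted to one set, it is measuring the right thing; comparing it against a quick computation like this over a data export is one way to confirm.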

    Read the article
