Search Results

Search found 19305 results on 773 pages for 'above the gods'.


  • Does openssl errno 104 mean that SSLv2 is disabled?

    - by David
    I want to check if my server has SSLv2 disabled. I am doing this by attempting to connect remotely with openssl using the following shell command:

        openssl s_client -connect HOSTNAME:443 -ssl2

    Most literature I could find on the Internet says that if I see something similar to the following error, then SSLv2 is properly disabled:

        29638:error:1407F0E5:SSL routines:SSL2_WRITE:ssl handshake failure:s2_pkt.c:428:

    I do get the above error when connecting to my Ubuntu server with SSLv2 disabled in Apache, but when I connect to my Windows Server 2008 R2 server with SSLv2 disabled in the registry, I get the following output and error:

        CONNECTED(00000003)
        write:errno=104

    I can't find any literature explaining this output and error. If anybody could explain to me if and why this output and error means that SSLv2 is properly disabled, I would appreciate it. Thanks!

    Read the article

  • Is it customary to write Java domain objects / data transfer objects with public member variables on mobile platforms?

    - by Sean Mickey
    We performed a code review recently of mobile application Java code that was developed by an outside contractor and noticed that all of the domain objects / data transfer objects are written in this style:

        public class Category {
            public String name;
            public int id;
            public String description;
            public int parentId;
        }

        public class EmergencyContact {
            public long id;
            public RelationshipType relationshipType;
            public String medicalProviderType;
            public Contact contact;
            public String otherPhone;
            public String notes;
            public PersonName personName;
        }

    Of course, these members are then accessed directly everywhere else in the code. When we asked about this, the developers told us that this is a customary performance-enhancement design pattern that is used on mobile platforms, because mobile devices are resource-limited environments. It doesn't seem to make sense; accessing private members via public getters/setters doesn't seem like it could add much overhead, and the added benefits of encapsulation seem to outweigh the benefits of this coding style. Is this generally true? Is this something that is normally done on mobile platforms for the reasons given above? All feedback welcome and appreciated.
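    For comparison, a minimal encapsulated version of the first class above could look like the sketch below (illustrative only, not the contractor's code); the trivial getters and setters are the only difference:

        // Illustrative sketch of the same Category class with private fields.
        public class Category {
            private String name;
            private int id;
            private String description;
            private int parentId;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }

            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            public String getDescription() { return description; }
            public void setDescription(String description) { this.description = description; }

            public int getParentId() { return parentId; }
            public void setParentId(int parentId) { this.parentId = parentId; }
        }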

    Read the article

  • Windows Live Mail doesn't respond to clicking Allow Sender

    - by Karel
    This is a problem that I have experienced for a couple of years now. I'm running the latest version of Windows Live Mail. The overall behavior works just fine, but when I want to click 'Allow Sender' in the yellow bar above my e-mail, the text button does nothing. My mouse pointer turns into a pointing hand, but the click event does nothing. Sometimes, with other e-mails, the button works and the yellow bar disappears. I have also experienced this in previous versions of Windows Live Mail. Does anybody know what it could be?

    Read the article

  • Grid-Based 2D Lighting Problems

    - by Lemoncreme
    I am aware this question has been asked before, but unfortunately I am new to the language, so the complicated explanations I've found do not help me in the least. I need a lighting engine for my game, and I've tried some procedural lighting systems. This method works the best:

        if (light[xx - 1, yy] > light[xx, yy]) light[xx, yy] = light[xx - 1, yy] - lightPass;
        if (light[xx, yy - 1] > light[xx, yy]) light[xx, yy] = light[xx, yy - 1] - lightPass;
        if (light[xx + 1, yy] > light[xx, yy]) light[xx, yy] = light[xx + 1, yy] - lightPass;
        if (light[xx, yy + 1] > light[xx, yy]) light[xx, yy] = light[xx, yy + 1] - lightPass;

    (This sets a cell to a neighbor's value minus the 'lightPass' variable whenever the neighbor is brighter, and it runs inside a for() loop.) This is all fine and dandy except for an obvious reason: the system favors whatever comes first in the for() loop, which is clearly visible when the above code is applied to my game. If I could get some help on creating a new procedural or other lighting system I would really appreciate it!
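    One common fix for that directional bias is a two-pass sweep: sweep from the top-left corner pulling light from the left/top neighbors, then sweep back from the bottom-right corner pulling from the right/bottom neighbors. Below is a minimal sketch, assuming a square float[][] light array that already contains the light-source values and a fixed per-cell falloff; the method name is illustrative:

        // A minimal sketch: two sweeps spread light in all four directions
        // without favoring any single one.
        static void propagateLight(float[][] light, float lightPass) {
            int n = light.length;   // assumes a square n x n grid
            // Forward sweep: pull light from the left and top neighbors.
            for (int y = 0; y < n; y++) {
                for (int x = 0; x < n; x++) {
                    if (x > 0) light[x][y] = Math.max(light[x][y], light[x - 1][y] - lightPass);
                    if (y > 0) light[x][y] = Math.max(light[x][y], light[x][y - 1] - lightPass);
                }
            }
            // Backward sweep: pull light from the right and bottom neighbors.
            for (int y = n - 1; y >= 0; y--) {
                for (int x = n - 1; x >= 0; x--) {
                    if (x < n - 1) light[x][y] = Math.max(light[x][y], light[x + 1][y] - lightPass);
                    if (y < n - 1) light[x][y] = Math.max(light[x][y], light[x][y + 1] - lightPass);
                }
            }
        }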

    Read the article

  • How do "custom software companies" deal with technical debt?

    - by andy
    What are "custom software companies"? By "custom software companies" I mean companies that make their money primarily from building custom, one-off bits of software. Examples are agencies or middleware companies, or contractors/consultants like Redify.

    What's the opposite of "custom software companies"? The opposite of the above business model are companies that focus on long-term products, whether they be deployable desktop/mobile apps or SaaS software.

    A sure-fire way to build up technical debt: I work for a company that attempts to focus on a suite of SaaS products. However, due to certain constraints we sometimes end up bending to the will of certain clients, and we end up building bits of custom software that can only be used for that client. This is a sure-fire way to incur technical debt. Now we have a bit of software to maintain that adds nothing to our core product.

    If custom work is a sure-fire way to build technical debt, how do agencies handle it? So that got me thinking: companies who don't have a core product at the center of their business model are always doing custom software work. How do they cope with the notion of technical debt? How does it not drive them into technical bankruptcy?

    Read the article

  • SNMP trap issue using DISMAN-EVENT-MIB

    - by jatin bodarya
        notificationEvent ifMtu.1 IF-MIB::ifMtu.1 1.3.6.1.2.1.2.2.1.4.1
        monitor -I -u root -s -t -r 18 "Warn: High ipp Usage" -e ifMtu.1 1.3.6.1.2.1.2.2.1.4.1 !=

    The above lines are in my snmpd.conf file; they generate a trap when the condition evaluates to false. My issue is that I want to send "Trap Severity Levels" with the trap. Is that possible? If so, how? If it isn't, is there any other way to send them?

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place and happen, and information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o -registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.” Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a full server backup. Then, when a restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup. Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm. Credits: Sean McDonough for passing along the information available on TechNet.

    Read the article

  • Sudo yum seems to fail on CentOS, but works fine after sudo -i

    - by Aron Rotteveel
    I am currently having some trouble with yum through sudo. For some reason, it does not seem to work:

        aron@graviton [/var/log]# sudo yum clean all
        There was a problem importing one of the Python modules
        required to run yum. The error leading to this problem was:

            /usr/lib64/python2.4/lib-dynload/datetime.so: failed to map segment from shared object: Cannot allocate memory

        Please install a package which provides this module, or
        verify that the module is installed correctly.

        It's possible that the above module doesn't match the
        current version of Python, which is:
        2.4.3 (#1, Sep 3 2009, 15:37:37)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

        If you cannot solve this problem yourself, please go to
        the yum faq at:
          http://wiki.linux.duke.edu/YumFaq

    The strange thing, however, is that it works fine when I gain root privileges through sudo -i first. Any ideas what might be causing this problem?

    Read the article

  • E3 Booth Babes Display a Painful Lack of Video Game Knowledge [Video]

    - by Jason Fitzpatrick
    If you thought a prerequisite for manning a booth at an electronics expo was a passing knowledge of the electronics and games you were promoting, you were wrong. In the above video Chloe Dykstra puts a set of “booth babes” from the E3 2011 conference to the test by asking them simple questions about video games both new and old. If you’re a gaming fan and you can watch this video without laughing out loud you’ve got an iron will (or you’re shaking your head in disbelief that someone could work a gaming convention and not know the answers to these questions). We won’t lie, we were shaking our heads when the one model admitted that she’d worked at GameStop for a year and still didn’t know any of the answers. What questions would you put on the list? How about “Finish this sentence: ‘Your Princess is in another…’”, “Dimension?”. 5HP: Booth Babe Edition – E3 2011 [YouTube via Kotaku]

    Read the article

  • Windows hangs on Rename, Delete and Move for specific MKV files

    - by Creativehavoc
    I have been googling this issue for a long time, and no full solutions seem to be out there. For certain MKV files, Windows hangs when moving, copying, or deleting them. These files play fine in a player such as GOM Player. System: a fast Windows 7, 64-bit box.

    Results:
      - CPU change is negligible
      - Memory usage rockets up to ~100%
      - In cases of delete or move, the "discovering files" dialogue box stays up for a long time
      - Rename simply shows a spinning icon until it finishes
      - The action is completed after an EXTREMELY long amount of time
      - Memory usage does not return to normal

    "Fixes:"
      - Disable thumbnail creation (helps in some cases)
      - After a move/rename/delete, kill explorer with Task Manager and relaunch it to reclaim your memory

    Even with thumbnails turned off, the issue persists. I have also tried re-muxing a file, which worked fine, but still resulted in a file with the same issues as above.

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.

    Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full-on recomputation for the entire grid? (Which is N(N-1)/2 or N^2 edges, depending on the implementation.)

    Update: If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option, as sketched below. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts, and by doing so the above mapping can be reused for each sub-grid. This would come at a cost of increased computation relative to the number of subdivisions; it would also require a resumable ray-casting algorithm.
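    A minimal sketch of that inverse mapping follows. It assumes the existing DDA routine is exposed through a hypothetical cellsOnLine(a, b) call that enumerates the cells the sight line between cells a and b passes through; the index then answers "which edges cross this cell?" directly:

        import java.util.*;

        // Sketch: map each cell to the (from, to) pairs whose sight line crosses it,
        // so a change to one cell only invalidates the edges stored under that cell.
        class VisibilityIndex {
            // cell index -> all (from, to) pairs whose sight line crosses that cell
            private final Map<Integer, List<int[]>> affectedBy = new HashMap<>();

            void build(int n, Dda dda) {
                for (int a = 0; a < n * n; a++) {
                    for (int b = a + 1; b < n * n; b++) {        // each unordered pair once
                        for (int cell : dda.cellsOnLine(a, b)) {
                            affectedBy.computeIfAbsent(cell, k -> new ArrayList<>())
                                      .add(new int[] { a, b });
                        }
                    }
                }
            }

            // When a single cell changes (e.g. becomes opaque), only these edges of the
            // visibility table need to be re-cast.
            List<int[]> edgesToRecompute(int changedCell) {
                return affectedBy.getOrDefault(changedCell, Collections.emptyList());
            }

            // Placeholder for the already-existing DDA code (hypothetical interface).
            interface Dda {
                Iterable<Integer> cellsOnLine(int fromCell, int toCell);
            }
        }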

    Read the article

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data that is included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases; unnecessary projects can cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key for which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will cause the view to be invalid and it will not work. Any WHERE clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it exists:

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

    Read the article

  • shibboleth: tomcat failing to start IdP listener

    - by HorusKol
    I have installed a Shibboleth Identity Provider as per http://www.edugate.ie/workshop-guides/shibboleth-2-identity-provider-installation-linux-debian-or-ubuntu. However, testing only gave me a 404 from Tomcat, and when I checked the Tomcat logs, I saw that the IdP listener was not starting:

        10/01/2011 11:25:31 AM org.apache.catalina.startup.HostConfig deployDescriptor
        INFO: Deploying configuration descriptor idp.xml
        10/01/2011 11:25:32 AM org.apache.catalina.core.StandardContext start
        SEVERE: Error listenerStart
        10/01/2011 11:25:32 AM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/idp] startup failed due to previous errors

    The IdP descriptor file has the following context:

        <Context docBase="/opt/shibboleth-idp/war/idp.war"
                 privileged="true"
                 antiResourceLocking="false"
                 antiJARLocking="false"
                 unpackWAR="true" />

    I have confirmed that the WAR file is located as the Context above specifies, as I have found similar issues from other people where the WAR file was not found. However, the logs posted by those people indicate that the descriptor file was correctly read by Tomcat and their problem was with the WAR file itself. I'm assuming this is some kind of syntax error with the idp.xml, but I cannot determine what it might be. Also, setting the Tomcat logging level to FINEST does not provide any additional information in the logs for this error.

    Read the article

  • E-Business Suite - Cloning Basics & AMP Cloning - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - US

    PRODUCT FAMILY: EBS – ATG - Utilities

    July 20, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern

    This 1.5-hour session is recommended for technical and functional users who are interested in a generic overview of the cloning functionality available in the E-Business Suite release. We are going to talk about the generic cloning options and will then go into depth about the cloning scenario when using AMP (Applications Management Pack) within Enterprise Manager.

    TOPICS WILL INCLUDE:
      - Cloning overview
      - Rapid Clone steps in detail
      - Rapid Clone limitations
      - EM Grid setup with AMP for cloning
      - Advantages of cloning with AMP
      - Cloning procedures available with AMP
      - Monitoring the clone operation
      - A few things to remember before cloning

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Oracle DB, Oracle ADF, GlassFish, JDeveloper, NetBeans IDE

    - by Geertjan
    Today I started some experiments with Oracle guru Steven Davelaar, who lives about 20 minutes away from my place in Amsterdam by underground. Very convenient. He showed me a bunch of things in JDeveloper, while I showed him a bunch of things in NetBeans IDE. He managed to deploy an ADF application to GlassFish in JDeveloper. And, so far, I failed to do the same thing in NetBeans IDE. Quite a few (around 100) JARs are needed, aside from the question of correctly setting up or importing an ADF application, and we're still figuring out which and who and when and where. And how. And if. And why. Nonetheless, I did manage to get Oracle DB set up in NetBeans IDE, after downloading it from here: http://www.oracle.com/technetwork/products/express-edition/downloads/index.html

    Once registered in NetBeans IDE, I have a cool sample database available. Data from that database I managed to display very easily via the various NetBeans code generators in a PrimeFaces application, exactly as has been done many times in demonstrations and tutorials everywhere, i.e., generate JPA entities, then create an EJB, then inject the EJB into a PrimeFaces data table. The next step is to somehow do the same with ADF in NetBeans IDE. I had some trouble with passwords for Oracle DB; the command line (with Steven's help) proved helpful. Wish us luck as we continue our ADF-inspired journey. This blog entry by Shay is also relevant: Deploying Oracle ADF Essentials Applications to Glassfish.

    Read the article

  • How to consolidate servers with a not-very-strong infrastructure

    - by Sim
    All,

    Situation:
      - We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems.
      - Each distributor has 1 HQ and 5-10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system).
      - Every day, branches have to extract data and send it (via email/Skype) to HQ for data consolidation purposes.
      - When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough. That's why we went with the de-centralized model (each branch got its own server).
      - Now, the infrastructure is mature, and we need to consolidate data more quickly (not from branches -- HQ -- our company, but something like HQ -- our company only).

    Goal:
      - We just have Solomon servers in each distributor HQ. All the transactions in branches (retrieved from POS) will be synchronized with the HQ server directly.
      - There is a backup plan just in case the Internet goes down, or the HQ server goes down.

    Question: With the above, could you guys suggest some model for me? Should we use Terminal Services, or any other solutions? Any watchouts/suggestions? Any good articles to read about this? Thanks a lot.

    Read the article

  • Simple monitoring utility with up/down statuses of the host's network connectivity and services

    - by Beaming Mel-Bin
    We've looked at many monitoring tools (SolarWinds, Zabbix, Nagios) throughout the last 10 years, but they never took hold because they are overly complicated. I am willing to try them again, or something new, at this point, but with a much simpler goal:

      - ping to check up/down status of hosts
      - TCP probes to test up/down status of services
      - notifications via e-mail
      - web GUI
      - prefer an OSS solution

    I wanted to know if someone has any recommendations on this. This could be a Windows or Linux application, preferably without the requirement of agents. I don't even need SNMP support, but that may be nice for expanding once we have the above-mentioned bare minimum in place.

    Read the article

  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to improve my system to run better with a Solid State Drive. But regarding RAM disks and /etc/fstab usage, I have some understanding problems coming up. So let's say I add the following lines to /etc/fstab:

        tmpfs /tmp       tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/tmp   tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/log   tmpfs defaults,noatime,nodiratime,mode=0755 0 0

    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical space that was mounted on those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:

        none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0

    What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk. So should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification.

    UPDATE: I've downloaded an image of 600 MB in size into /tmp, which is mounted with the tmpfs settings above. Now I wanted to compare the RAM usage before and after the download. I expected the RAM usage to increase by 600 MB after the download, but the System Monitoring tool showed me no changes at all. How can this be? Does tmpfs work differently than I expect it to?

    Read the article

  • Blackberry Gmail password change

    - by Highstead
    I've updated my Gmail password and so I must update my BlackBerry password. I tried updating the email password, to which I got the following message:

        Invalid email address or password. Please verify your email address and password. The information you provided is incorrect. If the error persists contact gmail.com (Your email provider). Please try again.

    I tried again, with what I know the password to be, with "show password" on. I've also deleted the account and tried to re-create it. I've tried going to the "Last account activity: XXXX details" menu and signing out all devices. I'm continually getting the above error, but the account activities don't seem to show any sign of a mobile attempt to access my mail account. Has anyone had this issue before, and how did you solve it? Thanks in advance.

    Read the article

  • Update Error: Require Installation Of Untrusted Packages

    - by One Zero
    I have googled and found out that the updates don't install because of "untrusted packages", but that didn't fix the error. So, how do I fix my GUI update? Running sudo apt-get update && sudo apt-get upgrade from the command line works:

        The following packages will be upgraded:
          ambiance-colors radiance-colors
        2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 5,029 kB of archives.
        After this operation, 7,614 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          ambiance-colors radiance-colors
        Install these packages without verification [y/N]?

    But when I update from the GUI or try to install software from the Ubuntu Software Center, I get the same error as shown in the above picture. For every piece of software, I have to install it from the command line, because I get the same error when installing from the Ubuntu Software Center.

    Read the article

  • How to choose which cells to put an entity in, in a uniform grid used for broad-phase collision detection?

    - by nathan
    I'm trying to implement the broad phase of my collision detection algorithm. My game is an arcade game with lots of moving entities in an open space with relatively equivalent sizes. Given the above specifications, I decided to use a uniform grid for space partitioning. The problem I have right now is how to efficiently choose which cells an entity should be added to. At the moment I'm doing something like this:

        for (int x = 0; x < gridSize; x++) {
            for (int y = 0; y < gridSize; y++) {
                GridCell cell = grid[x][y];
                cell.clear(); // remove the previously added entities
                for (int i = 0; i < entities.size(); i++) {
                    Entity e = entities.get(i);
                    if (cell.isEntityOverlap(e)) {
                        cell.add(e);
                    }
                }
            }
        }

    isEntityOverlap is a simple method I added to my GridCell class:

        public boolean isEntityOverlap(Shape s) {
            return cellArea.intersects(s);
        }

    Where cellArea is a Rectangle:

        cellArea = new Rectangle(x, y, CollisionGrid.CELL_SIZE, CollisionGrid.CELL_SIZE);

    It works but it's damn slow. What would be a fast way to know all the cells an entity overlaps? Note: by "it works" I mean the entities are contained in the correct cells over time, after movements etc.
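    One common approach is to invert the loops: derive the covered cell range from each entity's bounding box instead of testing every entity against every cell. Below is a minimal sketch; the getMinX()/getMinY()/getMaxX()/getMaxY() accessors on Entity are hypothetical and stand for the entity's axis-aligned bounding box in world coordinates, and the grid is assumed to start at the world origin with square cells of size cellSize:

        import java.util.List;

        // Sketch: insert each entity only into the cells its bounding box covers.
        // Cost is O(entities * cellsCovered) rather than O(gridSize^2 * entities).
        class BroadPhaseGrid {
            interface Entity {
                float getMinX(); float getMinY(); float getMaxX(); float getMaxY();
            }
            interface GridCell {
                void clear(); void add(Entity e);
            }

            static void rebuild(GridCell[][] grid, List<? extends Entity> entities,
                                int gridSize, float cellSize) {
                for (int x = 0; x < gridSize; x++)
                    for (int y = 0; y < gridSize; y++)
                        grid[x][y].clear();                      // drop last frame's entries

                for (Entity e : entities) {
                    // Convert world-space bounds to cell indices, clamped to the grid.
                    int minX = Math.max(0, (int) (e.getMinX() / cellSize));
                    int minY = Math.max(0, (int) (e.getMinY() / cellSize));
                    int maxX = Math.min(gridSize - 1, (int) (e.getMaxX() / cellSize));
                    int maxY = Math.min(gridSize - 1, (int) (e.getMaxY() / cellSize));

                    for (int x = minX; x <= maxX; x++)
                        for (int y = minY; y <= maxY; y++)
                            grid[x][y].add(e);
                }
            }
        }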

    Read the article

  • Displaying device contacts with an indication that the contact is registered to the app

    - by Prasanna Aarthi
    We are developing a mobile app that needs to pick up the device contacts, display them, and indicate whether each contact has already registered with this app. We have our DB on the server and the app fetches data using web services. What will be the best approach to implement the above scenario, taking performance into consideration?

    Option 1: Every time the user opens the app, fetch the contacts and send the list of email addresses to the server, check them against the registered email IDs, and return the list of registered users in the contact list. In this approach, whenever the user opens the particular page he needs to wait a few seconds for the data to load, but the contacts will be the latest from the device.

    Option 2: The first time the user opens the app, fetch the contacts, send the entire list of contacts and save it in the DB, retrieve the list of registered users among the contacts, then save this to a local DB. From then on, data will be fetched from the local DB and displayed. When a new user registers in the app, again check against the records in the central DB and send the list of new users who are in your contacts and have registered for the app. This list will be added to the local DB, and the process continues. In this case, new contacts added by the user will not be updated in the app, but retrieval and display of records will be quick.

    What would be the correct approach? In case there is a better way of doing this, please let me know.
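    For reference, the matching step in Option 1 amounts to one round trip and a set lookup. The sketch below is illustrative only; the Contact type and the fetchRegisteredEmails(...) web-service call are hypothetical placeholders for whatever the actual app and backend expose:

        import java.util.*;

        // Sketch of Option 1's matching step: send all addresses once, get back the
        // registered subset, then mark each device contact locally.
        final class ContactMatcher {
            interface BackendService {
                Set<String> fetchRegisteredEmails(List<String> emails);
            }
            interface Contact {
                String getEmail();
            }

            static Map<Contact, Boolean> markRegistered(List<Contact> deviceContacts,
                                                        BackendService backend) {
                List<String> emails = new ArrayList<>();
                for (Contact c : deviceContacts) emails.add(c.getEmail());

                // One round trip: the server returns the subset that is registered.
                Set<String> registered = backend.fetchRegisteredEmails(emails);

                Map<Contact, Boolean> result = new LinkedHashMap<>();
                for (Contact c : deviceContacts) {
                    result.put(c, registered.contains(c.getEmail()));
                }
                return result;
            }
        }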

    Read the article

  • chunked response in nginx not working

    - by Dean Hiller
    I ran into this page, which shows my problem EXACTLY: http://nginx.org/en/docs/faq/chunked_encoding_from_backend.html. BUT browsers are using HTTP 1.1 these days, so I really don't understand. Our backend is the Play framework and I don't mind fixing it, but I don't really understand what is not working, ESPECIALLY since Firefox, Safari and Chrome ALL download the response just fine with no problems. ONLY when we stick nginx in the middle do things break, and we end up with extra data in our JSON responses. Any idea how to fix this? The doc above just seems wrong, since we are now on later versions of HTTP, PLUS the browsers seem to work just fine. Thanks, Dean

    Read the article

  • Calculating the "power" of a player in a "Defend Your Castle" type game

    - by Jesse Emond
    I'm making a "Defend Your Castle" type game, where each player has a castle and must send units to destroy the opponent's castle (and yeah, the screenshot is of the actual game, not a quick Paint drawing). Now, I'm trying to implement the AI of the opponent, and I'd like to create 4 different AI levels: Easy, Normal, Hard and Hardcore. I've never made any "serious" AI before and I'd like to create a quite complete one this time. My idea is to calculate a player's "power" score, based on the current health of its castle and the individual "power" score of each of its units. Then, the AI would just try to keep a score close to the player's (Easy would stay below it, Normal would stay near it and Hard would try to get above it). But I just don't know how to calculate a player's power score. There are just too many variables to take into account and I don't know how to properly use them to create one significant number (the power level). Could anyone help me out on this one? Here are the variables that should influence a player's power score: current castle health, plus each unit's health, damage, speed and attack range. Also, the player can have increased income (the money bag), damage (the + Damage) and speed (the + Speed). How could I include them in the score? I'm really stuck here. Or is there another way that I could implement AI for this type of game? Thanks for your precious time.
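    One way to start is a plain weighted sum over exactly those variables. The sketch below is illustrative only: the weights and the Player/Unit accessors are hypothetical placeholders to be tuned against real play-testing:

        // Sketch of a weighted power score over castle health, units, and bonuses.
        final class PowerScore {
            interface Unit {
                double getHealth(); double getDamage(); double getSpeed(); double getAttackRange();
            }
            interface Player {
                double getCastleHealth(); double getIncome();
                double getDamageBonus(); double getSpeedBonus();
                Iterable<Unit> getUnits();
            }

            static final double CASTLE_WEIGHT = 0.5;
            static final double HEALTH_WEIGHT = 1.0;
            static final double DAMAGE_WEIGHT = 2.0;
            static final double SPEED_WEIGHT  = 1.5;
            static final double RANGE_WEIGHT  = 1.0;
            static final double INCOME_WEIGHT = 5.0;  // income buys future units, so it counts too

            static double of(Player p) {
                double score = CASTLE_WEIGHT * p.getCastleHealth()
                             + INCOME_WEIGHT * p.getIncome();
                for (Unit u : p.getUnits()) {
                    score += HEALTH_WEIGHT * u.getHealth()
                           + DAMAGE_WEIGHT * (u.getDamage() + p.getDamageBonus())
                           + SPEED_WEIGHT  * (u.getSpeed()  + p.getSpeedBonus())
                           + RANGE_WEIGHT  * u.getAttackRange();
                }
                return score;
            }
        }

    The Easy, Normal and Hard AIs could then aim to keep their own score at roughly 0.8x, 1.0x and 1.2x of the player's score when deciding whether to spend money on another unit.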

    Read the article

  • How can I cull non-visible isometric tiles?

    - by james
    I have a problem which I am struggling to solve. I have a large map of around 100x100 tiles which form an isometric map. The user is able to move the map around by dragging the mouse. I am trying to optimize my game to draw only the visible tiles. So far my code is like this. It appears to be OK in the x direction, but as soon as one tile goes completely above the top of the screen, the entire column disappears. I am not sure how to detect that all of the tiles in a particular column are outside the visible region.

        double maxTilesX = widthOfScreen / halfTileWidth + 4;
        double maxTilesY = heightOfScreen / halfTileHeight + 4;

        int rowStart = Math.max(0, (-xOffset / halfTileWidth));
        int colStart = Math.max(0, (-yOffset / halfTileHeight));

        rowEnd = (int) Math.min(mapSize, rowStart + maxTilesX);
        colEnd = (int) Math.min(mapSize, colStart + maxTilesY);

    EDIT: I think I have solved my problem, but perhaps not in a very efficient way. I have taken the screen-center coordinates and determined which tile they correspond to by converting the coordinates into Cartesian format. I then update the entire box around the screen.
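    A related approach is to convert all four screen corners back to tile space and take the min/max of the results, which keeps the row/col ranges correct even after a tile scrolls completely off the top of the screen. The sketch below assumes the common 2:1 diamond projection, i.e. screenX = (col - row) * halfTileWidth + xOffset and screenY = (col + row) * halfTileHeight + yOffset; the inverse would need adjusting for a different projection, and rowStart/rowEnd/colStart/colEnd mirror the fields in the code above:

        // Sketch: compute visible row/col ranges from the screen corners.
        class VisibleRange {
            int rowStart, rowEnd, colStart, colEnd;   // same fields as in the code above

            void compute(int xOffset, int yOffset,
                         int widthOfScreen, int heightOfScreen,
                         int halfTileWidth, int halfTileHeight, int mapSize) {
                int rowMin = Integer.MAX_VALUE, rowMax = Integer.MIN_VALUE;
                int colMin = Integer.MAX_VALUE, colMax = Integer.MIN_VALUE;

                int[][] corners = {
                    {0, 0}, {widthOfScreen, 0}, {0, heightOfScreen}, {widthOfScreen, heightOfScreen}
                };
                for (int[] c : corners) {
                    double sx = (c[0] - xOffset) / (double) halfTileWidth;   // col - row
                    double sy = (c[1] - yOffset) / (double) halfTileHeight;  // col + row
                    int col = (int) Math.floor((sy + sx) / 2.0);
                    int row = (int) Math.floor((sy - sx) / 2.0);
                    rowMin = Math.min(rowMin, row); rowMax = Math.max(rowMax, row);
                    colMin = Math.min(colMin, col); colMax = Math.max(colMax, col);
                }

                // Pad by one tile for partially visible edges, then clamp to the map bounds.
                rowStart = Math.max(0, rowMin - 1);
                colStart = Math.max(0, colMin - 1);
                rowEnd   = Math.min(mapSize, rowMax + 2);
                colEnd   = Math.min(mapSize, colMax + 2);
            }
        }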

    Read the article
