Search Results

Search found 6920 results on 277 pages for 'block'.

  • Finding Z given X & Y coordinates on terrain?

    - by mrky
    I need to know the most efficient way of finding Z given the X & Y coordinates of a point on terrain. My terrain is set up as a grid, each grid block consisting of two triangles, which may be flipped in any direction. I want to move game objects smoothly along the floor of the terrain without "stepping." I'm currently using the following method, with unexpected results:

        double mapClass::getZ(double x, double y) {
            int vertexIndex = ((floor(y)) * width * 2) + ((floor(x)) * 2);
            vec3ray ray = {glm::vec3(x, y, 2), glm::vec3(x, y, 0)};
            vec3triangle tri1 = {
                glmFrom(vertices[vertexIndex].v1),
                glmFrom(vertices[vertexIndex].v2),
                glmFrom(vertices[vertexIndex].v3)
            };
            vec3triangle tri2 = {
                glmFrom(vertices[vertexIndex + 1].v1),
                glmFrom(vertices[vertexIndex + 1].v2),
                glmFrom(vertices[vertexIndex + 1].v3)
            };
            glm::vec3 intersect;
            if (!intersectRayTriangle(tri1, ray, intersect)) {
                intersectRayTriangle(tri2, ray, intersect);
            }
            return intersect.z;
        }

    intersectRayTriangle() and glmFrom() are as follows:

        bool intersectRayTriangle(vec3triangle tri, vec3ray ray, glm::vec3 &worldIntersect) {
            glm::vec3 barycentricIntersect;
            if (glm::intersectLineTriangle(ray.origin, ray.direction, tri.p0, tri.p1, tri.p2, barycentricIntersect)) {
                // Convert barycentric to world coordinates
                double u, v, w;
                u = barycentricIntersect.x;
                v = barycentricIntersect.y;
                w = 1 - (u + v);
                worldIntersect.x = (u * tri.p0.x + v * tri.p1.x + w * tri.p2.x);
                worldIntersect.y = (u * tri.p0.y + v * tri.p1.y + w * tri.p2.y);
                worldIntersect.z = (u * tri.p0.z + v * tri.p1.z + w * tri.p2.z);
                return true;
            } else {
                return false;
            }
        }

        glm::vec3 glmFrom(s_point3f point) {
            return glm::vec3(point.x, point.y, point.z);
        }

    My convenience structures are defined as:

        struct s_point3f { GLfloat x, y, z; };
        struct s_triangle3f { s_point3f v1, v2, v3; };
        struct vec3ray { glm::vec3 origin, direction; };
        struct vec3triangle { glm::vec3 p0, p1, p2; };

    vertices is defined as:

        std::vector<s_triangle3f> vertices;

    Basically, I'm trying to get the intersection of a ray (positioned at the specified x and y coordinates, pointing downwards toward the terrain) and one of the two triangles of the grid cell. getZ() rarely returns anything but 0. Other times, the numbers it generates seem to be completely off. Am I taking the wrong approach? Can anyone see a problem with my code? Any help or critique is appreciated!
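
    One thing worth double-checking: glm::intersectLineTriangle() expects a direction vector as its second argument, while the ray above supplies a second point (glm::vec3(x, y, 0)) rather than a direction such as glm::vec3(0, 0, -1), which could account for the zeros. More generally, a height lookup on a triangulated grid doesn't need ray casting at all: evaluate the plane through the cell triangle's three vertices at (x, y). A minimal sketch, using a hypothetical Vec3 type rather than the structures above:

        struct Vec3 { double x, y, z; };

        // Z of the plane through p0, p1, p2 evaluated at (x, y).
        // Assumes the triangle is not degenerate when projected onto the XY plane.
        double planeZ(const Vec3& p0, const Vec3& p1, const Vec3& p2, double x, double y) {
            double ux = p1.x - p0.x, uy = p1.y - p0.y, uz = p1.z - p0.z;
            double vx = p2.x - p0.x, vy = p2.y - p0.y, vz = p2.z - p0.z;
            // Plane normal n = u cross v
            double nx = uy * vz - uz * vy;
            double ny = uz * vx - ux * vz;
            double nz = ux * vy - uy * vx;
            // n . (P - p0) = 0  =>  z = p0.z - (nx*(x - p0.x) + ny*(y - p0.y)) / nz
            return p0.z - (nx * (x - p0.x) + ny * (y - p0.y)) / nz;
        }

    Which of the cell's two triangles to evaluate depends on how the quad is split; for example, if the cell is split along the diagonal where fract(x) + fract(y) = 1, the lower triangle is the one where fract(x) + fract(y) < 1.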

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one
    "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."

    'Shadow IT' can be the cloud's best friend | David Linthicum
    "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says InfoWorld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?

    9 ways cloud will impact IT employment | ZDNet
    ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: new categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.

    Decisions, Decisions: The art, science, and politics of technology selection
    "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."

    Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center
    Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.

    Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci
    Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to keep the caches in a consistent state across different server instances.

    Thought for the Day
    "No one hates software more than software developers." — Jeff Atwood
    Source: SoftwareQuotes

  • Can't get Wireless RT2x00usb driver to work, and can't blacklist it

    - by TheLQ
    After a two-year hiatus from Linux, I'm trying it out again. And then I run into driver issues... I have an old Linksys WUSB54G v4 Wireless USB Adapter. In previous versions I had to use a combination of ndiswrapper and Wicd to have any hope of getting it working. In 10.10, apparently there are built-in drivers for it. Unfortunately they don't work: it fails to connect to my WPA network and fails to connect to my open unencrypted network. Wicd fails at "Obtaining IP address", or, when using static IPs, fails at verifying connectivity to the network. Getting fed up, I tried the ndiswrapper approach. Installed and configured, but still not working, even when blacklisting the rt2570 module. So for some debugging I added some lines to my /etc/modprobe.d/blacklist.conf file:

        blacklist rt2570
        blacklist prism54usb
        blacklist rt2x00lib
        blacklist rt2x00usb

    Restart and find this:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsmod | grep rt2
        rt2500usb              18049  0
        rt2x00usb               9779  1 rt2500usb
        rt2x00lib              27275  2 rt2500usb,rt2x00usb
        led_class               2633  1 rt2x00lib
        mac80211              231541  2 rt2x00usb,rt2x00lib
        cfg80211              144470  2 rt2x00lib,mac80211

    Seems to be ignored... Tried this:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00usb
        ERROR: Removing 'rt2x00usb': Resource temporarily unavailable
        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00lib
        ERROR: Removing 'rt2x00lib': Resource temporarily unavailable

    and couldn't connect. Restarted and was back to the same modules loading. Maybe there's something in the log:

        lordquackstar@quackbeast:/etc/modprobe.d$ tail -n100000 /var/log/syslog | grep rt2
        Dec 13 19:01:15 quackbeast kernel: [   23.698056] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:01:15 quackbeast kernel: [   23.698140] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:01:15 quackbeast kernel: [   23.701680] usbcore: registered new interface driver rt2500usb
        Dec 13 19:01:15 quackbeast NetworkManager[855]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)
        Dec 13 19:17:47 quackbeast kernel: [   23.521759] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:17:47 quackbeast kernel: [   23.521824] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:17:47 quackbeast kernel: [   23.524740] usbcore: registered new interface driver rt2500usb
        Dec 13 19:17:47 quackbeast NetworkManager[798]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)

    Seems to be autoloading. So this means that even if I pull it out, remove the module, and get it working, it still won't work when it's plugged in all the time. More info:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo lshw -C Network
        *SNIP*
        *-network
             description: Wireless interface
             physical id: 1
             bus info: usb@1:2
             logical name: wlan0
             serial: 00:12:17:9b:f3:1e
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=rt2500usb driverversion=2.6.35-24-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

    USB:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsusb | grep -i rt
        Bus 001 Device 003: ID 13b1:000d Linksys WUSB54G v4 802.11g Adapter [Ralink RT2500USB]

    Any suggestions on how to either fix the rt2x00usb driver or permanently block it from loading? Note that I already have ndiswrapper installed.

  • Common Areas For Securing Web Services

    The only way to truly keep a web service secure is to host it on a web server and then turn off the server. In real life no web service is 100% secure, but there are methodologies for increasing the security around web services. In order to consume a web service, clients must adhere to the service's Service-Level Agreement (SLA). An SLA is a digital contract between a web service and its consumer. This contract defines what methods and protocols must be used to access the web service, along with the defined data formats for sending and receiving data through the service. If either party does not abide by the contract, then the service will not be accessible for consumption.

    Common areas for securing web services:

    • Universal Description, Discovery and Integration (UDDI)
    • Web Service Description Language (WSDL)
    • Application Level
    • Network Level

    "UDDI is a specification for maintaining standardized directories of information about web services, recording their capabilities, location and requirements in a universally recognized format." (UDDI, 2010) WSDL, on the other hand, is a standardized format for defining a web service. A WSDL describes the allowable methods for accessing the web service along with what operations it performs.

    Web services at the Application Level can control access to what data is available by implementing their own security through various methodologies, but the most common method is to have a consumer pass in a token along with a system identifier so that the system can validate the user's access to any data or actions that they may be requesting.

    Security restrictions can also be applied to the host web server of the service by restricting access to the site by IP address or login credentials. Furthermore, companies can also block access to a service by using firewall rules, allowing access to specific services on certain ports only from specific IP addresses. This last methodology may require consumers to obtain a static IP address and then register it with the web service host so that they will be provided access to the information they wish to obtain.

    It is important to note that these areas can be secured in any combination, based on the security tolerance dictated by the publisher of the web service. That being said, the bare minimum security implementation must be at the Application Level, within the web service itself. Typically I create a security layer within a web service's exposed interface that requires a consumer identifier and a consumer token. This information is then used to authenticate the requesting consumer before the actual request is performed.

    References:
    UDDI. (2010). Retrieved 11/13/2011, from LooselyCoupled.com: http://www.looselycoupled.com/glossary/UDDI
    Service-Level Agreement (SLA). (n.d.). Retrieved 11/13/2011, from SearchITChannel: http://searchitchannel.techtarget.com/definition/service-level-agreement

  • CAPTCHA blocking for my scraping script?

    - by Surabhil Sergy
    I am working on a scraping project which involves getting web data and parsing it for further use. I have been using PHP and cURL to make scraping scripts which crawl web data, and I make use of either PHP DOM or the Simple HTML DOM Parser library for these kinds of projects. On a recent project I encountered some challenges. Initially I found the target website had blocked my server IP, such that the server could not make any successful requests to the site. Understanding these issues as common, I bought a set of private proxies and tried to make request calls using them. Though this got successful responses, I noticed the script was hitting some kind of block after 2-3 consecutive requests. On printing and checking the response I could see a pop-up asking for CAPTCHA validation. I could not see any CAPTCHA characters to be entered, and it also showed an error "input error: invalid referrer". On examining the source I could see some Google reCAPTCHA scripts within.

    I'm stuck at this point and not able to execute my script. My script is used for gathering data, and it needs to go through a large number of pages periodically over the site. But in the current scenario I am not able to proceed. I can see there are some options to overcome these CAPTCHA issues, and scraping these kinds of sites is common. I have been checking my script's performance and responses over the last two months. During the first month I was able to execute a very large number of requests from a single IP and get results. Later I got an IP block and used private proxies, which got me some results. Now I am facing the CAPTCHA trouble. I would appreciate any help or suggestions in this regard.

    (Often on this kind of question the first comment is, 'Have you asked for prior permission from the target?' I haven't, but I know there are many sites doing this to get details out of other sites, and target sites may not often grant access. I respect legality and scraping etiquette, but I would like to know where I'm stuck and how I could overcome it!) I can provide any supporting information if needed.

  • JavaScript: Code Folding

    - by Petr
    Today I would like to mention code folding in the new JavaScript editor support, which is available in the continuous builds from our server. It's a basic feature, but it was mentioned in a comment under a previous post. With the new support you can fold comments and every code block between { and }; the current support allows only methods to be folded. The difference is shown below: in the picture, the left side is the current folding and the right side the new one.

    Code folding can be switched off in the Editor Options (Tools main menu -> Options -> Editor category -> General tab). In this dialog you can also define which folds should be collapsed by default when you open a file. These options more closely fit the Java editor's needs, but you can see in the next picture how the options are mapped for JavaScript code. The Method option folds all functions in the code; other code blocks are folded via the option Tags and Other Code Blocks. Documentation comments (starting with /**) are folded via Javadoc Comments, and when you check Initial Comment, all comments that start with /* are folded by default.

    The new JavaScript editor also supports custom folds. To add your custom fold, type in two special comments as shown in this example:

        // <editor-fold>
            Your code goes here...
        // </editor-fold>

    You can define the default description of a collapsed fold by adding a "desc" attribute:

        // <editor-fold desc="This is my super secret genius code.">
            Your code goes here...
        // </editor-fold>

    You can set a fold to be collapsed by default by adding a "defaultstate" attribute:

        // <editor-fold defaultstate="collapsed">
            Your code goes here...
        // </editor-fold>

    There is a code template that helps with writing custom fold comments; the abbreviation for the template is fcom. As I wrote, the new JS support is available in the continuous builds. Go here for more info.

  • BizTalk Send Ports, Delivery Notification and ACK / NACK messages

    - by Robert Kokuti
    Recently I worked on an orchestration which sent messages out to a Send Port on a 'fire and forget' basis. The idea was that once the orchestration passed the message to the Messagebox, it was left to BizTalk to manage the sending process. Should the send operation fail, the Send Port would be suspended, and the orchestration would complete asynchronously, regardless of the Send Port's success or failure. However, we still wanted to log the sending success, using the ACK / NACK messages.

    On normal ports, BizTalk generates ACK / NACK messages back to the Messagebox if the logical port's Delivery Notification property is set to 'Transmitted'. Unfortunately, this setting also causes the orchestration to wait for the send port's result, and should the Send Port fail, the orchestration will also receive a 'DeliveryFailureException' exception. So we may end up with a suspended port and a suspended orchestration - not the outcome wanted here; there was no value in suspending the orchestration in our case. There are a couple of ways to fix this:

    1. Catch the DeliveryFailureException (full type name Microsoft.XLANGs.BaseTypes.DeliveryFailureException) and do nothing in the orchestration's exception block. Although this works, it still slows down the orchestration, as the orchestration still has to wait for the outcome of the send port operation.

    2. Use a Direct Port instead, and set the ACK request on the message Context prior to passing it to the port:

        msgToSend(BTS.AckRequired) = true;

    This has to be done in an Expression shape, as a Direct logical port does not have a Delivery Notification property - make sure to add a reference to Microsoft.BizTalk.GlobalPropertySchemas. Setting this context value on the message will cause the messaging agent to create an appropriate ACK or NACK message after the port execution. The ACK / NACK messages can be caught and logged by dedicated Send Ports, filtering on the BTS.AckType value (which is either ACK or NACK). ACK / NACK messages are treated in a special way by BizTalk, and a useful feature is that the original message's context values are copied to the ACK / NACK message context - these can be used for logging the right information. Other useful context properties of the ACK / NACK messages:

    • BTS.AckSendPortName can be used to identify the original send port.
    • BTS.AckOwnerID, aka http://schemas.microsoft.com/BizTalk/2003/system-properties.AckOwnerID, holds the instance ID of the failed Send Port and can be used to resubmit / terminate the instance.

    Someone may ask: can we just turn off the Delivery Notification on a 'normal' port and set the AckRequired property on the message as for a Direct port? Unfortunately, this does not work - BizTalk seems to remove this property automatically if the message goes through a port where Delivery Notification is set to None.

  • Impossible to install Ubuntu 10.10 dual boot with Windows 7 on new Acer desktop computer

    - by Don Myers
    My brother has a brand new Acer desktop with Windows 7. I have done many installs (40+) of Ubuntu starting with 8.10, and have never run into this. I've spent three hours trying to do a dual-boot install of 10.10. When you get to the place where you would normally choose to install as a dual boot or overwrite the existing information on the hard drive, that block is just blank. Nothing. No choice even to do a manual partition setup. If you try to go on, you get the message "No root file system is defined. Please correct this from the partitioning menu," but there is nothing in the partitioning menu. I tried a good 10.04 disc also; the same thing happens with it.

    I ran a GParted live CD, and it shows the hard drive as sda with 3 partitions. sda1 is a small partition called PQService. sda2 is another small partition called System Reserved, and GParted says it is the boot partition. sda3 is the main partition with the operating system (Windows 7) and all of the empty space. There is a little unallocated space at the very beginning and very end of the hard drive. If I go to Places in the live CD, it shows a 640 GB hard disk called Acer, but it also shows a 640 GB hard disk called System Reserved. They are the same disk; there is just one hard drive. If you click Properties on the 640 GB System Reserved disk, it shows all information as unknown. I had to change the boot order in the BIOS in order to run the live CD. The hard drive, instead of being listed as such, is listed as Raid:Raid Ready.

    Something about the way this computer is set up is preventing Ubuntu from identifying the hard drive partitions at all, even if you were not doing a dual boot and just wanted to overwrite Windows. Is this a bug that needs to be reported? This is a major problem for me and my brother, but also for Ubuntu if new users who want to try Ubuntu find they cannot install it.

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:

        Computer: Compaq Presario CQ-104CA
        OS: Windows 7 Home 64-bit
        CPU: AMD V140
        BIOS: latest
        Graphics: AMD M880G with ATI Mobility Radeon HD 4250
        Wireless: Atheros AR9285
        Internal HD: SATA

    I wasn't connected to the internet at the time... I know of a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual-boot Linux/Windows. Then the screen went black and the following text appeared (I left out the [OK]'s):

        checking battery state
        starting crash report submission daemon
        starting CPU interrupts balancing daemon
        stopping System V runlevel compatibility
        starting configure network device security
        stopping configure network device security
        stopping cold plug devices
        stopping log initial device creation
        starting enable remaining boot-time encrypted block devices
        starting configure network device security
        starting save udev log and update rules
        stopping save udev log and update rules
        stopping enable remaining boot-time encrypted block devices
        checking for running unattended-upgrades
        acpid: exiting
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher

    At this point, the CD is ejected. Then nothing. If I press the Return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen; it just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note I downloaded both files twice; I doubt it was a corrupted download.

    This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers re Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it, with only the above to show for it.

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* path finding to decide the course of a sprite through multiple waypoints. I have done this for point-A-to-point-B movement, but am having trouble with multiple waypoints, because on slower devices, when the FPS drops and the sprite travels PAST a waypoint, I am lost as to the math to switch directions at the proper place.

    EDIT: To clarify, my path-finding code is separate, in a game thread; this onUpdate method lives in a sprite-like class, which runs in the UI thread for sprite updating. To be even more clear, the path is only updated when objects block the map; at any given point the current path could change, but that should not affect the design of the algorithm if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :-)

    Here is the scenario:

        public void onUpdate(float pSecondsElapsed) {
            // this could be 4x speed, so on slow devices the travel moved between
            // frames could be very large. What happens with my original algorithm
            // is it will start actually doing circles around the next waypoint..
            pSecondsElapsed *= SomeSpeedModificationValue;

            final int spriteCurrentX = this.getX();
            final int spriteCurrentY = this.getY();

            // getCoords contains a large array of the coordinates to each waypoint.
            // A waypoint is a destination on the map, defined by tile column/row. The
            // path finder converts these waypoints to X,Y coords.
            //
            // I.E:
            // Given a set of waypoints of 0,0 to 12,23 to 23,0 on a 23x23 tile map,
            // each tile being 32x32 pixels, this would translate in the path finder to:
            // -> 0,0 to 12,23
            // Coord : x=16 y=16
            // Coord : x=16 y=48
            // Coord : x=16 y=80
            // ...
            // Coord : x=336 y=688
            // Coord : x=336 y=720
            // Coord : x=368 y=720
            //
            // -> 12,23 to 23,0 - NOTE: This direction change gives me trouble specifically
            // Coord : x=400 y=752
            // Coord : x=400 y=720
            // Coord : x=400 y=688
            // ...
            // Coord : x=688 y=16
            // Coord : x=688 y=0
            // Coord : x=720 y=0
            //
            // The current update index; the index specifies the coordinate that you see above
            // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80
            final int[] coords = getCoords( ... );

            // now I have the coords, how do I detect where to set the position? The tricky part
            // for me is when a direction changes, how do I calculate based on the elapsed time
            // how far to go up the new direction... I just can't wrap my head around this.
            this.setPosition(newX, newY);
        }
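
    One standard way to handle the overshoot is to treat the frame's travel as a distance budget and consume it segment by segment: move toward the current waypoint; if the distance left to that waypoint is smaller than the budget, snap to the waypoint, advance the index, and continue along the next segment with the leftover. Sketched below in C++ to keep the math plain (Vec2, advanceAlongPath and the waypoint vector are hypothetical, not the poster's classes):

        #include <cmath>
        #include <vector>

        struct Vec2 { double x, y; };

        // Advance `pos` along the waypoint list by `dist`, starting toward waypoints[index].
        // Returns the updated waypoint index; several waypoints may be passed in one frame.
        int advanceAlongPath(Vec2& pos, const std::vector<Vec2>& waypoints, int index, double dist) {
            while (index < (int)waypoints.size() && dist > 0.0) {
                double dx = waypoints[index].x - pos.x;
                double dy = waypoints[index].y - pos.y;
                double seg = std::sqrt(dx * dx + dy * dy);
                if (seg <= dist) {           // we reach (or pass) this waypoint this frame
                    pos = waypoints[index];  // snap to it and spend that part of the budget
                    dist -= seg;
                    ++index;                 // the leftover carries on along the next segment
                } else {
                    pos.x += dx / seg * dist;  // partial move along the current segment
                    pos.y += dy / seg * dist;
                    dist = 0.0;
                }
            }
            return index;
        }

    With dist = speed * pSecondsElapsed, the sprite lands exactly where it should on the far side of a corner regardless of frame rate, instead of orbiting the waypoint.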

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    The architecture in Barcelona is amazing, but you will be impressed with the Oracle Linux sessions at LinuxCon Europe as well. Oracle is one of the key sponsors of LinuxCon Europe, and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up the latest Oracle Linux and Oracle VM DVD kit and the Virtualization for Dummies booklet. Don't forget to visit us at technology showcase Booth #19.

    Oracle sessions at LinuxCon Europe 2012:

    1. OCFS2: Status and Overview - Lenz Grimmer, Oracle
    Wednesday November 7, 2012, 10:40am - 11:25am. Venue: Diamant
    OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux, has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC and E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including: heartbeat implementation (global vs. local heartbeats) and storage optimizations (extent-based allocations, hole punching, reflinks).

    2. Status of Linux Tracing - Elena Zannoni, Oracle
    Wednesday November 7, 2012, 11:35am - 12:20pm. Venue: Diamant
    There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with the recent introduction of uprobes to allow the implementation of user-space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux. This talk will take the audience through the main tracing facilities available today, whether more tightly integrated with the kernel code or maintained stand-alone.

    3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle
    Wednesday November 7, 2012, 1:50pm - 2:35pm. Venue: Diamant
    With increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, and new auditing features, and rounds off with some pointers on how to securely integrate your database into your Linux web stack.

    We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today!

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high-quality code. A month ago I wrapped up my evaluation of the previous version of NDepend.

    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq
    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block, and it has the tendency to obscure your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you buy? I recommend NDepend overall. It has some rough points for me, which I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes

  • Lubuntu wireless issue with Broadcom chipset

    - by Variant Web Solutions
    I'm a web dev who just started a new venture: buying wiped laptops in bulk and selling them at low cost to lower-income families that want a laptop that will simply perform, not become virus-ridden, and not require constant maintenance. So naturally I opted for a Linux distro, and after some research lubuntu was my top pick. I'm not a stranger to Linux, as all of the servers for my web dev business are Linux, but I am new to L/Ubuntu, and I'm having issues with wifi (Broadcom chipsets) on the 20 or so Dells that I have right now: D800's, D810's and E5400's. Not sure if you can point me in a solid direction; I've scoured (and implemented) the suggestions on Ask Ubuntu and am still coming up short. On one of the E5400's (though they all seem to suffer the same errors) I got the following:

        dell-latitude-e5400@dell-Latitude-E5400:~$ iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.

        dell-latitude-e5400@dell-Latitude-E5400:~$ lspci
        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
        00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
        00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
        00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
        00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02)
        00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
        00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
        00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92)
        00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02)
        00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
        00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
        00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
        02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba)
        02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04)
        02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21)
        09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5761e Gigabit Ethernet PCIe (rev 10)

        dell-latitude-e5400@dell-Latitude-E5400:~$ rfkill
        Usage: rfkill [options] command
        Options:
                --version   show version (0.5-1ubuntu1 (Ubuntu))
        Commands:
                help
                event
                list [IDENTIFIER]
                block IDENTIFIER
                unblock IDENTIFIER
        where IDENTIFIER is the index no. of an rfkill switch or one of:
                all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm nfc

        dell-latitude-e5400@dell-Latitude-E5400:~$

  • iwlwifi on lenovo z570 disabled by hardware switch

    - by Kevin Gallagher
    It was working fine with Windows 7. The hardware switch is not disabled; I've toggled it back and forth dozens of times. The wifi light never turns on and it always lists as hardware disabled. I have the latest updates installed. I've been searching for solutions, but none of them seem to work for me. I've tried removing acer-wmi. I've tried setting 11n_disable=1. I've tried resetting the BIOS. I've tried using rfkill to unblock (it only removes the soft block). I've rebooted dozens of times. The wifi light turns off as soon as GRUB loads.

    Edit: I have a USB Edimax wireless NIC. It shows hardware disabled as well (although rfkill lists it as unblocked). If I unload iwlwifi, the USB NIC works fine.

    uname -a:

        Linux xxx-Ideapad-Z570 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    rfkill list:

        19: phy18: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    dmesg:

        [43463.022996] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
        [43463.023002] Copyright(c) 2003-2011 Intel Corporation
        [43463.023107] iwlwifi 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [43463.023190] iwlwifi 0000:03:00.0: setting latency timer to 64
        [43463.023253] iwlwifi 0000:03:00.0: pci_resource_len = 0x00002000
        [43463.023257] iwlwifi 0000:03:00.0: pci_resource_base = ffffc900057c8000
        [43463.023261] iwlwifi 0000:03:00.0: HW Revision ID = 0x0
        [43463.023797] iwlwifi 0000:03:00.0: irq 43 for MSI/MSI-X
        [43463.024013] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
        [43463.024250] iwlwifi 0000:03:00.0: L1 Enabled; Disabling L0S
        [43463.045496] iwlwifi 0000:03:00.0: device EEPROM VER=0x15d, CALIB=0x6
        [43463.045501] iwlwifi 0000:03:00.0: Device SKU: 0X50
        [43463.045504] iwlwifi 0000:03:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
        [43463.045542] iwlwifi 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
        [43463.045744] iwlwifi 0000:03:00.0: RF_KILL bit toggled to disable radio.
        [43463.047652] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138
        [43463.047823] Registered led device: phy18-led
        [43463.047895] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        [43463.048037] ieee80211 phy18: Selected rate control algorithm 'iwl-agn-rs'
        [43463.055533] ADDRCONF(NETDEV_UP): wlan0: link is not ready

    nm-tool:

        State: connected (global)

        - Device: wlan0 ----------------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            iwlwifi
          State:             unavailable
          Default:           no
          HW Address:        74:E5:0B:4A:9F:C2

          Capabilities:

          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

          Wireless Access Points

    lshw -C network:

        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000 [Condor Peak]
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 00
             serial: 74:e5:0b:4a:9f:c2
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-55-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg
             resources: irq:43 memory:f1500000-f1501fff

    lspci:

        03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, and also loaded the desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen gives me something akin to:

        Assuming write through cache
        Asking for cache data failed

    It appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine. The last message during boot is "Starting TiMidity++ ALSA midi emulation... [OK]". I used boot-repair to generate a boot info report. One thing looks odd to me: it reports a missing core.img on /dev/sda1. Here is the full info:

        Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012]

        ============================= Boot Info Summary: ===============================

        => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos1)/boot/grub on this drive.
        => Windows is installed in the MBR of /dev/sdb.

        sda1: __________________________________________
            File system:       ext4
            Boot sector type:  Grub2 (v1.99)
            Boot sector info:  Grub2 (v1.99) is installed in the boot sector of sda1
                               and looks at sector 18406911 of the same hard drive for
                               core.img, but core.img can not be found at this location.
            Operating System:  Ubuntu 12.04.1 LTS
            Boot files:        /boot/grub/grub.cfg /etc/fstab
                               /boot/extlinux/extlinux.conf /boot/grub/core.img

        sda2: __________________________________________
            File system:       Extended Partition
            Boot sector type:  -
            Boot sector info:

        sda5: __________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        sdb1: __________________________________________
            File system:       ntfs
            Boot sector type:  Windows XP: NTFS
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files:

        ============================ Drive/Partition Info: =============================

        Drive: sda _______________________________________
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sda1   *     63            307,339,514  307,339,452   83  Linux
        /dev/sda2         307,339,515   312,576,704  5,237,190     5   Extended
        /dev/sda5         307,339,578   312,576,704  5,237,127     82  Linux swap / Solaris

        Drive: sdb _______________________________________
        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sdb1         2,048         625,142,447  625,140,400   7   NTFS / exFAT / HPFS

        "blkid" output: ____________________________________
        Device      UUID                                  TYPE      LABEL
        /dev/loop0                                        squashfs
        /dev/sda1   11b4d633-7863-40b2-a6ca-da5f82c3ad0b  ext4
        /dev/sda5   cb8d65f4-8cf9-4088-b804-e3dea2151033  swap
        /dev/sdb1   349E7C109E7BC8BE                      ntfs      Personal1

        ================================ Mount points: =================================
        Device      Mount_Point       Type      Options
        /dev/sdb1   /media/Personal1  fuseblk   (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
        /dev/sr0    /live/image       iso9660   (ro,noatime)

    ...(a bunch of config file info - let me know if anyone wants to see it!)

    But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor. I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

  • Would someone please explain Octree Collisions to me?

    - by A-Type
    I've been reading everything I can find on the subject and I feel like the pieces are just about to fall into place, but I just can't quite get it. I'm making a space game, where collisions will occur between planets, ships, asteroids, and the sun. Each of these objects can be subdivided into 'chunks', which I have implemented to speed up rendering (the vertices can and will change often at runtime, so I've separated the buffers). These subdivisions also have bounding primitives to test for collision. All of these objects are made of blocks (yeah, it's that kind of game). Blocks can also be tested for rough collisions, though they do not have individual bounding primitives, for memory reasons. The rough testing seems to be sufficient, though. So, collision needs to be fairly precise: at block resolution. Some functions rely on two blocks colliding. And, of course, attacking specific blocks is important.

    Now what I am struggling with is filtering my collision pairs. As I said, I've read a lot about octrees, but I'm having trouble applying it to my situation, as many tutorials are vague with very little code. My main issues are:

    1. Are octrees recalculated each frame, or are they stored in memory and objects are shuffled into different divisions as they move? Despite all my reading I still am not clear on this... the vagueness of it all has been frustrating.

    2. How far do octrees subdivide? Planets in my game are quite large, while asteroids are smaller. Do I subdivide to the size of the planet, or the asteroid (where a planet spans multiple divisions)? Or is the limit something else entirely, like the number of elements in the division?

    3. Should I load objects into the octree as 'chunks' or whole, then break them into chunks later? This could be specific to my implementation, I suppose.

    I was going to ask about how big my root needed to be, but I did manage to find this question, and the second answer seems sufficient for me. I'm afraid I don't really get what he means by adding new nodes and doing subdivisions upon adding new objects, probably because I'm confused about whether the tree is maintained in memory or recalculated per frame.
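
    For orientation, the usual answers to questions 1 and 2 are: the tree persists across frames (an object is removed and re-inserted only when it moves out of its node), and subdivision stops at a node-capacity or maximum-depth limit rather than at any particular object size; an object that straddles a split simply stays in the parent node. A minimal, insertion-only sketch of that split rule, with hypothetical types (cubes stored as centre plus half-extent; no removal or query code):

        #include <cmath>
        #include <memory>
        #include <vector>

        struct AABB { float cx, cy, cz, half; };  // cube: centre plus half-extent

        // True if `o` fits entirely inside `n`.
        static bool contains(const AABB& n, const AABB& o) {
            return std::fabs(o.cx - n.cx) + o.half <= n.half &&
                   std::fabs(o.cy - n.cy) + o.half <= n.half &&
                   std::fabs(o.cz - n.cz) + o.half <= n.half;
        }

        struct Node {
            AABB bounds{};
            int depth = 0;
            std::vector<AABB> objects;    // objects that stop at this node
            std::unique_ptr<Node> children[8];

            static constexpr int kSplitAt = 8;   // subdivide past this many objects...
            static constexpr int kMaxDepth = 6;  // ...but never deeper than this

            void insert(const AABB& obj) {
                if (depth < kMaxDepth) {
                    if (!children[0] && (int)objects.size() >= kSplitAt) subdivide();
                    if (children[0]) {
                        // Push the object down into the one child that fully contains it.
                        for (auto& c : children)
                            if (contains(c->bounds, obj)) { c->insert(obj); return; }
                    }
                }
                objects.push_back(obj);  // straddles children (or node is a leaf)
            }

            void subdivide() {
                float h = bounds.half / 2;
                int i = 0;
                for (int dz = -1; dz <= 1; dz += 2)
                    for (int dy = -1; dy <= 1; dy += 2)
                        for (int dx = -1; dx <= 1; dx += 2) {
                            children[i] = std::make_unique<Node>();
                            children[i]->bounds = {bounds.cx + dx * h, bounds.cy + dy * h,
                                                   bounds.cz + dz * h, h};
                            children[i]->depth = depth + 1;
                            ++i;
                        }
                // Re-file existing objects that now fit entirely inside a child.
                std::vector<AABB> keep;
                for (const AABB& o : objects) {
                    bool moved = false;
                    for (auto& c : children)
                        if (contains(c->bounds, o)) { c->insert(o); moved = true; break; }
                    if (!moved) keep.push_back(o);
                }
                objects.swap(keep);
            }
        };

    With this rule, large planets naturally settle high in the tree while small asteroids sink toward the leaves, which sidesteps the planet-vs-asteroid sizing question entirely.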

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map-based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between two tiles, I offset it by 0.5 so that it doesn't get stuck in the wall. This ain't a huge issue, though:

        if (y % 20 == 0) {
            y += 0.5;
        }
        if (x % 20 == 0) {
            x += 0.5;
        }

    Collisions work as follows:

    1. Find the closest point between each tile and the center of the ball.
    2. If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred.
    3. Invert the X/Y speed according to the side of the object collided with.

    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When the ball collides with a corner of a block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are the indexes of the solid object in the 2D map array; x and y are the centre point of the ball):

        void determineRebound(int _i, int _j) {
            if (y > _i * tile_w && y < _i * tile_w + tile_w) {
                // Not a corner
                xs *= -1;
            } else if (x > _j * tile_w && x < _j * tile_w + tile_w) {
                // Not a corner
                ys *= -1;
            } else {
                // Corner
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of two tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
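
    For reference, a common way to make the edge/corner decision robust is to classify the ball's centre against the box: if the centre lies within the box's horizontal span it is a top/bottom edge hit, within the vertical span a left/right edge hit, and only otherwise a true corner, reflecting about the corner-to-centre normal. A minimal sketch with hypothetical names (a plain ball and box, no tile bookkeeping):

        #include <cmath>

        struct Ball { double x, y, r, xs, ys; };

        // Reflect the ball's velocity off an axis-aligned box [bx, bx+bw] x [by, by+bh].
        // Assumes overlap has already been detected via the closest-point test.
        void rebound(Ball& b, double bx, double by, double bw, double bh) {
            if (b.x >= bx && b.x <= bx + bw) {        // centre above or below: edge hit
                b.ys = -b.ys;
            } else if (b.y >= by && b.y <= by + bh) { // centre left or right: edge hit
                b.xs = -b.xs;
            } else {                                   // true corner: reflect about the
                double cx = (b.x < bx) ? bx : bx + bw; // normal from corner to centre
                double cy = (b.y < by) ? by : by + bh;
                double nx = b.x - cx, ny = b.y - cy;
                double len = std::sqrt(nx * nx + ny * ny);
                nx /= len; ny /= len;
                double proj = b.xs * nx + b.ys * ny;
                b.xs -= 2 * proj * nx;
                b.ys -= 2 * proj * ny;
            }
        }

    At the seam of two solid tiles, the shared corner still needs care: if the neighbouring tile on the far side of the corner is also solid, the contact is really an edge, so checking the tile map before taking the corner branch (or merging adjacent solid tiles into larger colliders) removes the false corner hits described above.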

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last updated: 02/01/2011

    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before for other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or any questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A "green" color on the check-block means I've done that part so far; red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:
    • Introducing Microsoft .NET (Pro-Developer): link
    • Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    • Microsoft Visual C# 2008 Step by Step: link

    c The first place to start is at the official site for the certification. link
    c On that page you'll find several resources, and the first you should follow is "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.

    c Designing Data Storage Architecture (18%)
      Books I'm Reading:
      Links:
      My Notes:

    c Optimizing Data Access and Messaging (17%)
      Books I'm Reading:
      Links:
      My Notes:

    c Designing the Application Architecture (19%)
      Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform: link
      Links:
      My Notes:

    c Preparing for Application and Service Deployment (15%)
      Books I'm Reading:
      Links:
      My Notes:

    c Investigating and Analyzing Applications (16%)
      Books I'm Reading:
      Links:
      My Notes:

    c Designing Integrated Solutions (15%)
      Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention)
      Links:
      My Notes:

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor, and consequently a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side view, it would be considerably simple to do most things. But since this editor is meant for 3rd parties, I have no clue how big a world anyone will want to make, and I need to keep in mind that eventually it will become simply too much to check, handling and comparing things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I'd hoped to get answered for some hints about the topic.

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste, with irregularly sized rectangles?

    2. In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all of them in a simple array?

    3. Since a region is a different data structure than its owning "game world", when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?

    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node hold the layers as children, and replicate in each layer node one node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q.3)?

    5. When I say virtually infinite worlds, I mean it under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? I think it's OK to stick to this limit, since it will never be reached so easily.

    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is, I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question; any answers are helpful!
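
    For the first two questions, an even grid is the common choice precisely because neighbouring then becomes implicit: a chunk is addressed by integer cell coordinates, and "which chunks should be resident" is just a radius scan around each watcher. A minimal sketch of such a chunk manager (hypothetical names throughout; loading is a stub, and unloading would walk the map the same way):

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>

        struct Chunk { /* tiles, layers, entities for this region */ };

        class ChunkGrid {
        public:
            explicit ChunkGrid(double chunkSize) : size_(chunkSize) {}

            // Ensure every chunk within `radius` cells of a watcher is resident,
            // creating (i.e. loading) the missing ones.
            void streamAround(double wx, double wy, int radius) {
                int cx = (int)std::floor(wx / size_);
                int cy = (int)std::floor(wy / size_);
                for (int dy = -radius; dy <= radius; ++dy)
                    for (int dx = -radius; dx <= radius; ++dx) {
                        uint64_t k = key(cx + dx, cy + dy);
                        if (!chunks_.count(k))
                            chunks_.emplace(k, Chunk{});  // stand-in for loading from disk
                    }
            }

        private:
            // Pack two signed 32-bit cell coordinates into one map key.
            static uint64_t key(int32_t x, int32_t y) {
                return ((uint64_t)(uint32_t)x << 32) | (uint32_t)y;
            }

            double size_;
            std::unordered_map<uint64_t, Chunk> chunks_;
        };

    A teleport then needs no special machinery: run streamAround at the destination before moving the player, and let the chunks around the old position fall out on the next unload pass.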

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.NET, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API - everything - so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what really confuses me is how to handle a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Blocks, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing? My thoughts so far: I mostly use Entity Framework and SQL Server DB projects. I've considered a 'black box' approach to modules of functionality: I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities my module encapsulates would be disconnected from the rest of the application, because they belong to an abstracted ObjectContext; further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible. I've also thought of adopting Enterprise Library and creating my own Application Blocks - I'm not sure how well this would play with Entity Framework (if at all), but I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality. Finally, I've thought of abandoning Entity Framework for the module and creating a separate DB schema for it with its own set of stored procedures and ADO.NET, deploying the script at run time if interrogation shows the schema doesn't exist. But once again, for development outside of the module I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
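
    As a concrete starting point for the 'black box' option, here is a hedged sketch assuming the EF 4.1-style code-first DbContext API (all type, schema and table names are hypothetical, and this is a sketch rather than production code): the module owns its own context and schema and deploys itself when registered, much as the Enterprise Library application blocks deploy their data stores.

        using System.Data.Entity;                // EF 4.1+ DbContext API
        using System.Data.Entity.Infrastructure; // IObjectContextAdapter
        using System.Linq;

        // The module's entities live in their own schema so they don't
        // collide with the host application's domain tables.
        public class CachingModuleContext : DbContext
        {
            public CachingModuleContext(string connectionString) : base(connectionString) { }

            public DbSet<CacheEntry> Entries { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<CacheEntry>().ToTable("Entries", "caching");
            }
        }

        public class CacheEntry
        {
            public int Id { get; set; }
            public string Key { get; set; }
            public string Value { get; set; }
        }

        public static class CachingModule
        {
            // Called once at application start: "register the module".
            public static void Register(string connectionString)
            {
                using (var context = new CachingModuleContext(connectionString))
                {
                    if (!context.Database.Exists())
                    {
                        context.Database.Create(); // fresh database: let EF build it
                        return;
                    }
                    // The host database already exists, so deploy the module's
                    // DDL only if its tables are missing (one known table is
                    // enough of a check for a sketch).
                    bool missing = context.Database.SqlQuery<int>(
                        "SELECT CASE WHEN OBJECT_ID('caching.Entries', 'U') IS NULL THEN 0 ELSE 1 END")
                        .First() == 0;
                    if (missing)
                    {
                        string ddl = ((IObjectContextAdapter)context)
                            .ObjectContext.CreateDatabaseScript();
                        context.Database.ExecuteSqlCommand(ddl);
                    }
                }
            }
        }

    The disconnection trade-off raised above still stands: CreateDatabaseScript emits DDL only for the module's own model, so indexes and foreign keys that reach into the host's domain tables remain manual work.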

    Read the article

  • 2D game - Missile shooting problem on Android

    - by Niksa
    Hello, I have to make a tank that sits still but moves its turret and shoots missiles. As this is my first Android application ever, and I haven't done any game development either, I've come across a few problems... I did the tank and the moving turret after reading the Android tutorial for the LunarLander sample code, so this code is based on the LunarLander code. But I'm having trouble making the missile fire when the SPACE button is pressed. The draw method:

        private void doDraw(Canvas canvas) {
            canvas.drawBitmap(backgroundImage, 0, 0, null);
            // draws the tank
            canvas.drawBitmap(tank, x_tank, y_tank, new Paint());
            // draws and rotates the tank turret
            canvas.rotate((float) mHeading, (float) x_turret + mTurretWidth, y_turret);
            canvas.drawBitmap(turret, x_turret, y_turret, new Paint());
            // draws the grenade, a regular circle from the ShapeDrawable class
            bullet.setBounds(x_bullet, y_bullet, x_bullet + width, y_bullet + height);
            bullet.draw(canvas);
        }

    The updateGame method:

        private void updateGame() throws InterruptedException {
            long now = System.currentTimeMillis();
            if (mLastTime > now) return;
            double elapsed = (now - mLastTime) / 1000.0;
            mLastTime = now;
            // dUp and dDown rotate the turret between 0 and 75 degrees
            if (dUp) mHeading += 1 * (PHYS_SLEW_SEC * elapsed);
            if (mHeading >= 75) mHeading = 75;
            if (dDown) mHeading += (-1) * (PHYS_SLEW_SEC * elapsed);
            if (mHeading < 0) mHeading = 0;
            if (dSpace) {
                // missile logic, a straight trajectory for now
                x_bullet -= 1;
                y_bullet -= 1; // doesn't work - has to be updated every pixel, or what?
            }
        }

    The key handler:

        boolean doKeyDown(int keyCode, KeyEvent msg) {
            boolean handled = false;
            synchronized (mSurfaceHolder) {
                if (keyCode == KeyEvent.KEYCODE_SPACE) { dSpace = true; handled = true; }
                if (keyCode == KeyEvent.KEYCODE_DPAD_UP) { dUp = true; handled = true; }
                if (keyCode == KeyEvent.KEYCODE_DPAD_DOWN) { dDown = true; handled = true; }
                return handled;
            }
        }

    And the run method that drives the game loop:

        public void run() {
            while (mRun) {
                Canvas c = null;
                try {
                    c = mSurfaceHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        if (mMode == STATE_RUNNING) updateGame();
                        doDraw(c);
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        mSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    So the question is: how do I make the bullet fire on a single SPACE key press and travel from the turret to the edge of the screen? Could you help me out here? I seem to be in the dark... Thanks, Niksa
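
    The usual fix is to give each fired bullet its own velocity, set once from the turret heading on the key-down event, and to advance it by the same elapsed value that updateGame already computes: moving a fixed pixel per update ties bullet speed to the frame rate, and keeping the movement inside if (dSpace) moves the bullet only while the key is held. A minimal sketch of the pattern follows (in C# for brevity; the translation to the Java above is mechanical, and all names are hypothetical):

        using System;

        // A fired bullet owns its velocity and is advanced by elapsed time
        // each frame until it leaves the screen.
        public class Bullet
        {
            public float X, Y;   // position in pixels
            public float Vx, Vy; // velocity in pixels per second
            public bool Alive;

            // Call once from the SPACE key-down handler, not every frame.
            public static Bullet Fire(float turretX, float turretY,
                                      float headingDegrees, float speed)
            {
                double rad = headingDegrees * Math.PI / 180.0;
                return new Bullet
                {
                    X = turretX,
                    Y = turretY,
                    Vx = (float)(Math.Cos(rad) * speed),
                    Vy = (float)(-Math.Sin(rad) * speed), // screen Y grows downward
                    Alive = true
                };
            }

            // Call from the update method with the same elapsed seconds.
            public void Update(double elapsed, int screenWidth, int screenHeight)
            {
                X += (float)(Vx * elapsed);
                Y += (float)(Vy * elapsed);
                if (X < 0 || X > screenWidth || Y < 0 || Y > screenHeight)
                    Alive = false; // off-screen: stop drawing and remove it
            }
        }

    Spawning the bullet in doKeyDown, rather than only setting the dSpace flag, is what makes a single key press produce a complete shot that flies to the edge of the screen on its own.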

    Read the article

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.) Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour like 'return to base') needs to be constantly checked to see if it is still running, has failed, or has completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me toward good material specific to this case? (I am struggling to find anything.) My thoughts so far: action leaf nodes, when they start, must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The entity's list of actions would be run every frame (update any that need to run, pop any that have completed). The return state from a given action must be fed back into the tree, so that when we run the tree iteration again and reach the same action leaf node - the tree having so far determined that we ought to still be trying this action - we know whether the action has completed, is still running, and so on. If my actual action code runs from an action list on an entity, then I may need to cancel previously running actions in the list - I am thinking I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists that block lower-priority actions when a higher-priority one is added, but that logic is very close to behaviour trees, and I don't want to duplicate behaviour. This leaves me with some questions:
    1) How would I feed the action's return state back into the tree? It's obvious I need to store some information about 'currently executing actions' on the entity and check it in the tree tick, but I can't imagine how.
    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out the actual queued-up actions) sound like a reasonable approach? (See the sketch below.)
    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.
    (Edit: I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about which action was last selected for execution on an entity must be stored in a data context relative to the entity, which the tree can check. I am probably answering my own questions as I go!) I hope I have expressed my questions adequately! Thanks in advance for any help :)
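
    To make question 2 concrete, here is a minimal C# sketch of the tree/action-list split (all names are hypothetical): the leaf node only requests work on a tree tick and reads the stored result back on a later tick, while the entity's per-frame loop does the actual updating.

        using System;
        using System.Collections.Generic;

        public enum Status { Running, Success, Failure }

        // Pushed onto the entity's list by a tree tick; updated every frame
        // by the entity even though the tree ticks only occasionally.
        public abstract class EntityAction
        {
            public Status State { get; protected set; } = Status.Running;
            public abstract void Update(float dt);
            public virtual void Cancel() { State = Status.Failure; }
        }

        // Per-entity data context, so one shared tree instance can serve
        // many entities (as in the closing Edit).
        public class AiContext
        {
            public readonly List<EntityAction> ActiveActions = new List<EntityAction>();
            public readonly Dictionary<object, EntityAction> ActionByNode =
                new Dictionary<object, EntityAction>();

            public void CancelAll() // the "delete the entire stack" case
            {
                foreach (var a in ActiveActions) a.Cancel();
                ActiveActions.Clear();
                ActionByNode.Clear();
            }
        }

        // Action leaf: starts an action, or reports the result of the one
        // it started on an earlier tick (question 1).
        public class ActionNode
        {
            private readonly Func<EntityAction> spawn;
            public ActionNode(Func<EntityAction> spawn) { this.spawn = spawn; }

            public Status Tick(AiContext ctx)
            {
                EntityAction action;
                if (!ctx.ActionByNode.TryGetValue(this, out action))
                {
                    action = spawn();               // request work only
                    ctx.ActionByNode[this] = action;
                    ctx.ActiveActions.Add(action);
                    return Status.Running;
                }
                if (action.State != Status.Running)
                    ctx.ActionByNode.Remove(this);  // result consumed
                return action.State;
            }
        }

    Between tree ticks, the entity's frame loop calls Update on everything in ActiveActions and pops entries whose State is no longer Running - exactly the "update any that need to run, pop any that have completed" idea - and CancelAll covers the case where a later tree tick chooses a different branch.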

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

< Previous Page | 189 190 191 192 193 194 195 196 197 198 199 200  | Next Page >