Search Results

Search found 5542 results on 222 pages for 'cross compiling'.


  • Solution for lightweight LAN peer discovery?

    - by DevilWithin
    I built a library for purely cross-platform programming. My games made with it run fine on Android, PC, Linux, Mac, etc. The networking capabilities are provided by the ENet library, so communication between my apps isn't plain TCP or UDP but ENet's own protocol, even though it's ultimately based on UDP. I don't think it's possible to do what I want with ENet, that's why I ask here for help! Let's say I have the same game running on my Android phone, my laptop and my PC. They are all on the same Wi-Fi network, and therefore on a LAN, whether it's a Wi-Fi hotspot or the household router. I need each of those 3 peers to discover the other two on the network. This is meant only to find the IPs of live apps on the LAN, so multiplayer games can be hosted between them. I can only think of one effective way to do this: UDP broadcast, then wait for responses. If that is the solution, I need something small, since this is its only purpose. The other way would be to try to connect to every IP in the LAN subnet range, but I don't think the OS would be with me on that one :p Sorry for the long question!
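
    Since ENet itself doesn't expose broadcast discovery, a plain UDP socket can run alongside it on a separate port. Here is a minimal sketch of the broadcast-and-reply idea in Python; the port number and magic strings are made-up placeholders, not part of ENet:

        import socket

        DISCOVERY_PORT = 54545              # placeholder; reserve one for your game
        PROBE = b"MYGAME_DISCOVER"          # made-up magic strings
        REPLY = b"MYGAME_HERE"

        def listen_forever():
            # Run in a background thread in every game instance:
            # answer probes so peers can find us.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", DISCOVERY_PORT))
            while True:
                data, addr = s.recvfrom(64)
                if data == PROBE:
                    s.sendto(REPLY, addr)

        def find_peers(timeout=1.0):
            # Broadcast one probe, then collect (ip, port) of everyone who answers.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.settimeout(timeout)
            s.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
            peers = []
            try:
                while True:
                    data, addr = s.recvfrom(64)
                    if data == REPLY:
                        peers.append(addr)
            except socket.timeout:
                pass
            return peers

    Once find_peers returns the addresses, the actual game session can be established over ENet as usual.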


  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for each of these functions: UI, framework code, business/application logic, and the database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs Do It All
    Pros:
    1. Developers may be more well-rounded
    2. Developers know more of the system
    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none)

    Devs Specialize
    Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be
    3. Other developers don't cross boundaries and degrade another area
    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise I've seen code degrade, database schemas become difficult to manage, hacky UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.


  • Responsive Design: Which Framework Should I Use? CSS3 & HTML5

    - by Jayhal
    I've been looking for a suitable set of HTML5/CSS3 foundation files to start new projects on. I started off piecing together my own files, but I believe I might be better served by finding a solid and fairly compatible (with me) CSS3/HTML5 framework and then tweaking anything that doesn't suit my own process. I'd love to find something that is responsive and that includes aspects focusing on layout, type (horizontal and vertical baselines), form and interface components, and cross-browser issues; preferably built on something more than just a simple CSS reset, but that does rebuild elements consistently across browsers for a clean working slate. Extra features like polyfills are great, as are good documentation and examples. So far, off the top of my head I know of:

        Skeleton
        1140 Grid
        320 & Up (plus BP)
        HTML5 Boilerplate 2.0 and Mobile
        Inuit.css
        Less Framework
        Fluir
        Perkins.Less
        A few WP themes

    Are there any great ones I don't know about? I work a lot in WP, and something that is easily incorporated (but also stands alone) is ideal. Plugins and a wide feature set, while maintaining the ability to cut it down when needed (flexibility), are also a big plus, as is a fast learning curve, since I want to start using whatever I find immediately. What are some of the better options you guys might be able to recommend? Systems or scripts, plugins, and other related tools are also welcome. Thanks!


  • How do I get the old view back? [closed]

    - by xan
    Possible Duplicate: How to revert to GNOME Classic?

    Is it possible to change the general look of Ubuntu 12.04 so that it is like Ubuntu 10.04? I really liked the old one and really don't like the new one. Things that are annoying me:

        the menu on the left with large icons
        the lack of workspace icons in the bottom-right corner
        the lack of the Applications, Places, System menus in the top-left corner
        menus like File | Edit | View | Help, plus the resize buttons and the red close cross, appearing for all applications at the very top of the desktop instead of at the top of the application window

    I know it may sound crazy, but that's just me; I don't like changes even though I know they are often good. I would be very grateful for any help. I really want the old view (exactly like here: http://upload.wikimedia.org/wikipedia/commons/2/2d/UbuntuMaverickDesktop.png) back. By the way, if I installed Ubuntu 12.04 via WUBI from Windows onto a separate partition, what is the easiest way to uninstall it? Can I simply format that partition? I'm wondering because I don't know if it is safe, and what happens to the boot loader, etc.


  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data is part of a larger survey of companies in the oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investment projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings:

    Teaming the right data and people doesn't guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects.

    Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more robust risk-management and project-planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives.

    Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies, are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to the changes.

    The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.


  • With the outcome of the Oracle vs. Google trial, is Mono now safe from Microsoft? [closed]

    - by Evan Plaice
    According to an article on Ars Technica, the judge in the case ruled that APIs are not patentable. He referred to the structure of modules/methods/classes/functions as being like libraries/books/chapters. To patent an API would be putting a patent on thought itself; it's the internal implementations that really matter. With that in mind, Mono (a C# clone for Linux/Mac) has always been viewed tentatively because, even though C# and the CLI are ECMA standards, Microsoft holds patents on the technology. Microsoft has a covenant not to sue open-source developers based on those patents, but has maintained the ability to pull the plug on the Mono development team if it felt the project was a threat. With the recent ruling, is Mono finally out of the woods? A firm precedent has been established that patents can't be applied to APIs. From what I understand, none of the Mono implementation is copied verbatim; only the API structure and functionality are. It's a topic I have been personally interested in for years now, as I have spent a lot of time developing cross-platform C# libraries in MonoDevelop. I acknowledge that this is a controversial topic; if you have opinions, that's what commenting is for. Try to keep the answers factual and based on established sources.


  • What technical details should a programmer of a web application consider before making the site public?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who has done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad World Wide Web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.
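
    As a concrete illustration of the HttpOnly point above, here is a minimal sketch of setting a hardened session cookie plus a couple of common response headers. It uses Flask purely as an example framework; the route, cookie name, and header choices are illustrative, not a complete checklist:

        from flask import Flask, make_response

        app = Flask(__name__)

        @app.after_request
        def security_headers(resp):
            # Basic headers most public-facing sites want by default
            resp.headers["X-Content-Type-Options"] = "nosniff"
            resp.headers["X-Frame-Options"] = "DENY"
            return resp

        @app.route("/login", methods=["POST"])
        def login():
            resp = make_response("ok")
            # HttpOnly keeps scripts away from the cookie; Secure keeps it
            # off plain HTTP; SameSite narrows CSRF exposure.
            resp.set_cookie("session", "token-value", httponly=True,
                            secure=True, samesite="Lax")
            return resp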


  • Is error suppression acceptable in the role of a logic mechanism?

    - by Rarst
    This came up in code review at work in the context of PHP and the @ operator. However, I want to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field which is not set results in an error message, and is commonly handled by the following logic (pseudocode):

        if field value is set
            output field value

    The code in question was doing it like:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify misuse (IMO) of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for programming operators to cross the boundaries of their intended use (like in this case, using error handling to control output)?

    Edit: I wanted to keep it more generic, but the specific code being discussed was like this:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    vs the following example:

        echo '<li>' . @$array['field'] . '</li>';
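
    Since the question is meant generically, the same trade-off can be sketched outside PHP. In Python terms (a hypothetical parallel, not the code under review), the two styles look like this:

        import contextlib

        data = {}

        # Explicit check: the "if field value is set" branch
        value = data["field"] if "field" in data else ""

        # Suppression as logic: silence the lookup error instead of testing
        value = ""
        with contextlib.suppress(KeyError):
            value = data["field"]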


  • Windows Physical Direct Memory Mapping

    - by chrisjleaf
    I'm a bit disappointed that there is almost no discussion of this no matter where I look, so I guess I'll have to ask. I'm writing a cross-platform memory benchmarking application which requires direct physical address mapping rather than virtual addressing.

    EDIT: The solution would look something like the Linux/Unix system calls:

        int fd = open("/dev/mem", O_RDONLY);
        mmap(NULL, len, PROT_READ, MAP_SHARED, fd, PHYSICAL_ADDRESS_OFFSET);

    which requires the kernel to either give you a virtual page mapped to the desired physical address or return that it failed. This does require supervisor privileges, but that is OK. I have seen a lot of information about shared memory and memory-mapped files, but all of these reside on disk and are thus not really useful when I'm trying to make a system-dependent read. It is very similar to writing an I/O driver, although I do not need write permissions to the physical address. This site gives an example of how to do it at the driver level using the Windows Driver Kit: NT Insider: Sharing Memory between drivers and applications. That solution would probably require Visual Studio, which I currently do not have access to. (I have downloaded the WDK API, but it complained about my use of GCC for Windows.) I'm traditionally a Linux programmer, so I'm hoping there might be something really simple I'm missing. Thanks in advance if you know something I don't!


  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

    User table:
        UserID (primary key, indexed, no dupes)
        FirstName
        LastName
        Department

    Request table:
        RequestID (primary key, indexed, no dupes)
        <...> various data fields containing request details
        UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

    UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

    DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

    UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

    RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the "speed of integer lookups". He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small-to-mid-sized SQL DB? Thanks for comments/answers...


  • R12 Diagnostic Script for Purchasing Encumbrance Issues

    - by Oracle_EBS
    Do you have a Release 12 Purchasing document with an accounting encumbrance error? Get all the relevant data in one step using the new diagnostic in DOC ID 1483743.1, 'R12: Diagnostic Script to help troubleshoot Purchasing Encumbrance Issues'. Avoid the back-and-forth pinging with Support for data collection. Query the document ID in My Oracle Support and add it to your Favorites using the star icon for quick access. The note includes when to use the script and how to use it. The script produces a user-friendly HTML output that contains information relevant to encumbrance issues, along with some data-validation checks to identify common data-corruption issues in your document. For example, this one diagnostic will provide information on the following:

        Cross Product Setup
        Document Data Dump
        Funds availability
        Subledger accounting information
        GL and AP Invoice Data
        Debug and Trace

    This output is ideal for self-service, as it provides known issues in the Data Validation section (related to the document) with links to key documentation. Or the report can be uploaded to Support when logging a Service Request. To see more about the diagnostic, attend our September 11, 2012 webcast 'Overview of Procurement Patching and New Tools for Issue Resolution'. Visit Doc ID 1479718.1 to sign up. Note: this topic is not listed yet, as it has just been added.


  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example - a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one contra example - some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data - not that git is better, etc. My guess is that such counter-examples exist, hence this question.


  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that there are certain compute libraries in the market, like OpenCL, the AMD APP SDK, and C++ AMP, as well as DirectCompute from MS (I'm NOT interested in CUDA). I'm planning to write the game from scratch, which includes the following engines:

        Physics engine
        AI engine
        Main game engine

    (... and anything else I've missed.) I'm aware that there are some free physics-engine libraries in the market; I'm not sure about free AI-engine libraries. I'm a bit confused in choosing between the OpenCL, AMD APP SDK, and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OS X, which means it should be a cross-platform game. I will have "one source code" that I'll compile for the various platforms like Windows/Android/Mac OS X, and any others I've missed. Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For the graphics API, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there other graphics APIs I should start with? I want to make use of the parallelism in the GPU as well as the CPU.


  • Is there a checklist of things that a small website should have?

    - by Mecon
    I am not a web developer - this would be my first foray. I can do HTML/CSS/JavaScript, but have never created a website for a company. If anybody is creating a site for a small company (expecting some 10-15 static pages), what kinds of things would it need to have? I am thinking of the following:

    1. Make the eventual owner buy, in his/her name: domain name, web hosting package, and email package. Q - DO web developers generally ask their clients to buy this stuff and then ask them to share their passwords? OR - do web developers ship the source files to clients so that they can upload them?
    2. Create cross-browser-compatible HTML+CSS+JavaScript pages.
    3. Add SEO stuff like meta tags and an XML file for the crawler.
    4. Buy professional images from a stock website. Q - IS there a best practice for this step?
    5. Add copyright stuff. Q - ANY idea about how to do this?
    6. Add Facebook widgets, so people can 'like' my website.
    7. Register the website somewhere so that it's searchable from multiple search engines/yellow pages. Q - DOES such a thing exist?

    Please check my checklist :) and let me know what you think could be missing. Thanks!


  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration where I am checking for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to continue getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple of different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile). And I'd like to avoid generating/resetting multiple array lists with every attack.

    1. Each time the sword swings it generates a unique ID (maybe by just incrementing a global static long). Every enemy keeps a list of IDs of swipes or projectiles that have already hit them, so the enemy knows not to get hurt by something multiple times. Downside: every enemy may have a big list to compare against, so projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search-and-remove on every enemy's array list. Seems kind of slow.

    2. Each sword swipe or projectile keeps its own list of enemies that it has already hit, so it knows not to apply damage. Downsides: I have to generate a new list (probably pull one from a pool and clear it) every time a sword is swung or a projectile shot. Also, this breaks down modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
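
    For what it's worth, the second approach can stay one-directional if the swipe only remembers opaque enemy IDs and never hears back. A minimal sketch in Python (class and method names are made up):

        class SwordSwipe:
            """One attack instance; it lives only for the swing's duration."""
            def __init__(self, damage):
                self.damage = damage
                self.already_hit = set()   # ids of enemies this swing has touched

            def try_hit(self, enemy):
                """Apply damage at most once per enemy for this swing."""
                if id(enemy) in self.already_hit:
                    return False
                self.already_hit.add(id(enemy))
                enemy.take_damage(self.damage)  # one-way message; no reply needed
                return True

    When the swipe object dies, its set dies with it, so there is no end-of-life broadcast and no per-enemy cleanup.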


  • Oracle Partner Days and Oracle Days are coming to an EMEA city near you!

    - by Javier Puerta
    Oracle Partner Days

    A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver a real business return on your OPN membership. You will hear about the business opportunities coming from the adoption of the entire Oracle stack, the latest products' value propositions and related sales strategy, and be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting the key contacts in sales and consultancy, delivering Oracle strategy, engaging around the several perspectives of the Oracle portfolio, with executive keynotes and deep-dive business content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you.

    Oracle Days

    Oracle Days will be hosted after Oracle OpenWorld across EMEA, throughout October and November. By attending an Oracle Day, customers and partners can:

        Learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation, and by following cross-stack solution tracks within the agenda
        Discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges
        Meet Oracle experts and watch live demos of new products
        Get the latest news from Oracle OpenWorld

    See the full calendar and cities here.


  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow, as this seems to be a more appropriate forum. I've looked around a little here and did not find an answer, so I hope this is not a recurring question.) This is a question dealing with 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything there is about the game. This is easiest with just a grid of objects. But there are thin walls and other objects that don't seem to fit into a traditional cell of a grid. I want to be able to fit all these together so I can streamline level design, so I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning a scenario where the game feels boxy and discrete. There is a contrast between the smoothly (relatively) moving characters and finite jumps in a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).
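
    One common compromise is to keep the level file grid-based but store thin walls on the edges between cells rather than in the cells themselves; entities still move in continuous pixel coordinates, and only collision checks consult the grid. A rough sketch of the idea (in Python for brevity, though the question uses Java; all names are made up):

        class Level:
            """Grid cells for bulky objects, plus thin walls on cell edges."""
            def __init__(self, width, height, tile_size):
                self.tile_size = tile_size
                self.cells = [[None] * width for _ in range(height)]
                # vwalls[y][x] is a thin wall on the LEFT edge of cell (x, y)
                self.vwalls = [[False] * (width + 1) for _ in range(height)]
                # hwalls[y][x] is a thin wall on the TOP edge of cell (x, y)
                self.hwalls = [[False] * width for _ in range(height + 1)]

            def blocked_horizontally(self, px, py, new_px):
                """True if moving from px to new_px (pixels) crosses a wall edge."""
                x0 = int(px // self.tile_size)
                x1 = int(new_px // self.tile_size)
                y = int(py // self.tile_size)
                if x0 == x1:
                    return False            # still inside the same column
                edge = max(x0, x1)          # index of the vertical edge crossed
                return self.vwalls[y][edge]

    The level file then only needs cell indices and edge flags, while movement and physics stay fully pixel-based.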


  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):

        Specified that the SessionContext object in a singleton session bean is thread-safe
        Clarified that the EJB timer distribution and failover rules apply only to persistent timers
        Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
        Fixed section numbering (left over after moving it to its own chapter) in Ch 17
        Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
        Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
        JMS/MDB changes: added new standard activation properties and the unique identifier, and rearranged sections for easier navigation
        Fixed unresolved cross-references
        Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
        Synchronized permissions in the table with the permissions listed for the EJB components in the Java EE Platform Specification Table EE.6-2
        Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
        Updated most of the referenced documents to their latest versions

    Happy reading!


  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor: Amazon. What is remarkable is that Microsoft is no longer just an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable virtual machines with high availability (cross-geographic replication), free websites to try Azure, a MySQL database at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premise integration, a new partner ecosystem, and much, much more. Amazon had an edge over Windows Azure in the IaaS (Infrastructure as a Service) space - until now. With the latest release from Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill… This is great news for everyone; cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each provider are increasingly becoming the differentiator. With today's announcements, it is becoming clear that cloud providers are pushing hard to increase their service footprint and lower typical barriers to entry, such as support for open-source operating systems, free trial offers, higher availability, faster deployment times, and simpler enterprise integration.


  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a game (cross-platform) using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20-50 unique meshes. We created separate scene nodes for each object, each referencing its mesh instance. But the output was very much unexpected.

    Menu screen (150 tris - just to show you the full-speed rendering performance of the 2 test computers):
    a) NVidia Quadro FX 3800 with 1GB: 1600 FPS in DirectX and 2600 FPS in OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 260 FPS in OpenGL

    Now inside the game, in a test level (160 dynamic objects, counting around 10K tris):
    a) NVidia Quadro FX 3800 with 1GB: 45 FPS in DirectX and 50 FPS in OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 45 FPS in OpenGL

    Obviously we don't have the option of batched mesh rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we understand it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?


  • Tracking form abandonment

    - by Alec Sanger
    I'm looking for a decent way to track form abandonment. Ideally, I would like to see how many people start filling out a form but do not complete it, as well as the last field that was filled out. The website is a fairly large WordPress site with quite a few forms. Some of these forms are to register for events, some are for donations, some are for information requests. My first attempt at this was adding a generic jQuery handler that bound functions to all forms on the site. When a form element was blurred, I would trigger a Google Analytics event with the name of the form, the name of the field, and whether or not it was filled. I expected to be able to go to the Event Flow section in Google Analytics and see the flow of these form events; however, since there are so many forms and other events occurring on the website, Google wouldn't let me break them out very well. The other issue is that Quform doesn't name its fields anything relevant, and it doesn't look like we can name them ourselves. This results in a lot of ugly form names that don't mean anything without cross-referencing the actual form. Does anybody have any suggestions on how I can achieve more usable form-abandonment metrics in a scenario like this?


  • Space Invaders-type game: Keeping the enemies aligned with each other as they turn around?

    - by CorundumGames
    OK, so here's the lowdown on the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in an enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e. not in a grid as planned). I haven't implemented the enemies going downward yet, so let's not worry about that right now. Any tips?
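
    One way to keep the grid rigid is to move a single formation origin and give each enemy a fixed offset from it, so the turn-around is one shared decision per frame instead of one decision per enemy. A rough PyGame-flavored sketch (enemies are assumed to have offset_x/offset_y, a width, and a rect; all names are made up):

        class Squadron:
            """Moves one origin; enemies are drawn at origin + fixed offset."""
            def __init__(self, enemies, speed, screen_width):
                self.enemies = enemies
                self.x, self.y = 0.0, 0.0      # formation origin, in pixels
                self.vx = speed                # shared horizontal velocity
                self.screen_width = screen_width

            def update(self, dt, drop=16):
                self.x += self.vx * dt
                left = self.x + min(e.offset_x for e in self.enemies)
                right = self.x + max(e.offset_x + e.width for e in self.enemies)
                if left < 0 or right > self.screen_width:
                    self.vx = -self.vx         # everyone turns around together
                    self.y += drop             # and drops together
                for e in self.enemies:
                    e.rect.topleft = (self.x + e.offset_x, self.y + e.offset_y)

    Because the reversal happens once, in one place, the enemies can't drift out of alignment the way they can when each one reverses a shared class-scope vector on its own schedule.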


  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a separate database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products in a database, and within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
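
    The scoring idea is simple enough to sketch; here is a toy version in Python (the requirement names, choices, and scores are all illustrative, not a recommendation):

        # Choices as rows, requirements as columns, highest total wins.
        requirements = ["security isolation", "low admin overhead",
                        "cross-customer queries"]
        choices = {
            "database per customer": [5, 1, 1],
            "schema per customer":   [4, 2, 2],
            "shared tables + key":   [2, 5, 5],
        }
        print("scored against:", ", ".join(requirements))
        for name, scores in choices.items():
            print(f"{name}: {sum(scores)}")
        print("winner:", max(choices, key=lambda n: sum(choices[n])))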


  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user here - I just recently made the mistake of trying a different NVIDIA driver. I'd managed to get the last one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to cross over to nvidia-experimental-310, and this produced a system error. Swapping back and forth between the proprietary drivers now always causes an error, and I can't get any of them to work. Installing through the terminal, I get this error message every time:

        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information

    On rebooting, I end up with a crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the NVIDIA driver from installing?

    SOLUTION FOUND
    Okay - so I have a workaround. This is what worked:

    1. Upgrade to kernel 3.7.0 as detailed here
    2. Upgrade to the latest version of the NVIDIA drivers as detailed here

    No idea what was happening with kernel 3.5.0-19, but this seems to be better. Maybe a little slower on boot, but after days of messing around it's nice to have something that works.


  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine

    I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.

    Data-Centered Design Decision

    My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create, and remove data. The question is about the data manager.

    Whether to Use a Relational Database

    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This covers virtually all game content like items, characters, inventories, ... except meshes and textures, which are even more performance-related, so I will keep them in memory. Is a SQL database fast enough to use for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    The Advantages Would Be

    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query data on objects near the player needed for rendering, and I wouldn't have to take care of data on objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there would already be one place where the whole game state is stored.
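
    For a feel of what that looks like, here is a tiny sketch using SQLite via Python's sqlite3 module (the question's engine is C++, where the SQLite C API is the equivalent; the schema and numbers are made up). An in-memory database avoids disk I/O on the hot path:

        import sqlite3

        db = sqlite3.connect(":memory:")   # in-memory: no disk I/O per query
        db.execute("""CREATE TABLE entity (
            id   INTEGER PRIMARY KEY,
            kind TEXT,
            x REAL, y REAL, z REAL,
            hp   INTEGER)""")
        db.execute("INSERT INTO entity (kind, x, y, z, hp) "
                   "VALUES ('player', 0, 0, 0, 100)")

        # A movement update, as a physics component might issue it each tick:
        db.execute("UPDATE entity SET x = x + ? WHERE id = ?", (0.5, 1))

        # Fetch only entities near the player, e.g. for rendering:
        near = db.execute("SELECT id, kind, x, y, z FROM entity "
                          "WHERE abs(x - ?) < 50 AND abs(z - ?) < 50",
                          (0.0, 0.0)).fetchall()

    Persisting the state (the "no savegames" advantage) is then a matter of copying the in-memory database to a file, e.g. with SQLite's backup API.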

