Search Results

Search found 3867 results on 155 pages for 'cross forest'.

Page 103/155

  • Is error suppression acceptable as a logic mechanism?

    - by Rarst
    This came up in a code review at work, in the context of PHP and the @ operator. However, I want to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field that is not set results in an error message, and is commonly handled by the following logic (pseudocode):

        if field value is set
            output field value

    The code in question was doing it like this:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it is more compact and readable in this specific case. I feel that those benefits do not justify what is (IMO) a misuse of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice compared to explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for operators to cross the boundaries of their intended use (like, in this case, using error handling to control output)?

    Edit: I wanted to keep it generic, but the specific code being discussed was this:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    vs the following:

        echo '<li>' . @$array['field'] . '</li>';
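    For a language-neutral illustration of the two approaches, here is a minimal Python sketch (the dict and key names are hypothetical, not the PHP under review): the explicit check treats the missing field as part of the program's logic, while the suppression variant silences the lookup error instead.

        import contextlib

        record = {"name": "widget"}   # the 'field' key is deliberately absent

        # Explicit check: the missing key is handled as part of the logic.
        if "field" in record:
            print("<li>" + record["field"] + "</li>")

        # Suppression: the lookup error is swallowed, so nothing is printed and
        # any *other* KeyError raised inside the block would be silenced too.
        with contextlib.suppress(KeyError):
            print("<li>" + record["field"] + "</li>")

    The second form is shorter, but it hides every error of that type inside the block rather than only the one case the author anticipated.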

    Read the article

  • Windows Physical Direct Memory Mapping

    - by chrisjleaf
    I'm a bit disappointed that there is almost no discussion of this no matter where I look, so I guess I'll have to ask. I'm writing a cross-platform memory benchmarking application which requires direct physical address mapping rather than virtual addressing.

    Edit: The solution would look something like the Linux/Unix system calls:

        int fd = open("/dev/mem", O_RDONLY);
        mmap(NULL, len, PROT_READ, MAP_SHARED, fd, PHYSICAL_ADDRESS_OFFSET);

    which require the kernel to either give you a virtual page mapped to the desired physical address or report that it failed. This does require supervisor privileges, but that is OK. I have seen a lot of information about shared memory and memory-mapped files, but all of these reside on disk and are thus not really useful when I'm trying to make a system-dependent read. It is very similar to writing an I/O driver, although I do not need write permission to the physical address. This site gives an example of how to do it at the driver level using the Windows Driver Kit: NT Insider: Sharing Memory between drivers and applications. That solution would probably require Visual Studio, which I currently do not have access to. (I have downloaded the WDK API, but it complained about my use of GCC for Windows.) I'm traditionally a Linux programmer, so I'm hoping there might be something really simple I'm missing. Thanks in advance if you know something I don't!

    Read the article

  • Is there a checklist of things a small website should have?

    - by Mecon
    I am not a web developer - this would be my first foray. I can do HTML/CSS/JavaScript, but I have never created a website for a company. If anybody is creating a site for a small company (expecting some 10-15 static pages), what kind of things would it need to have? I am thinking of the following:

    1. Make the eventual owner buy, in his/her name, the domain name, web hosting package and email package. Q - Do web developers generally ask their clients to buy this stuff and then ask them to share their passwords? Or do web developers ship the source files to clients so that they can upload them?
    2. Create cross-browser compatible HTML+CSS+JavaScript pages.
    3. Add SEO stuff like meta tags and an XML sitemap for crawlers.
    4. Buy professional images from a stock website. Q - Is there a best practice for this step?
    5. Add copyright notices. Q - Any idea how to do this?
    6. Add Facebook widgets, so people can 'like' the website.
    7. Register the website somewhere so that it is searchable from multiple search engines / yellow pages. Q - Does such a thing exist?

    Please check my checklist :) and let me know what you think could be missing. Thanks!

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
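    For comparison, here is a minimal sketch of the simpler two-table design expressed through Python's sqlite3 module (the Details column is a hypothetical stand-in for the elided request fields):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")

        # The two-table layout the question prefers: Department lives on the user
        # row, and Request points back at User through a single foreign key.
        conn.executescript("""
            CREATE TABLE User (
                UserID     INTEGER PRIMARY KEY,
                FirstName  TEXT,
                LastName   TEXT,
                Department TEXT
            );
            CREATE TABLE Request (
                RequestID INTEGER PRIMARY KEY,
                Details   TEXT,
                UserID    INTEGER NOT NULL REFERENCES User(UserID)
            );
        """)

        conn.execute("INSERT INTO User VALUES (516, 'Jane', 'Doe', 'HR')")
        conn.execute("INSERT INTO Request VALUES (1, 'blah', 516)")

        # One join answers "who made this request and where do they work?"
        row = conn.execute("""
            SELECT r.RequestID, u.FirstName, u.Department
            FROM Request r JOIN User u ON u.UserID = r.UserID
        """).fetchone()
        print(row)   # (1, 'Jane', 'HR')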

    Read the article

  • R12 Diagnostic Script for Purchasing Encumbrance Issues

    - by Oracle_EBS
    Do you have a Release 12 Purchasing document with an accounting encumbrance error? Get all the relevant data in one step using the new diagnostic in Doc ID 1483743.1 - ‘R12: Diagnostic Script to help troubleshoot Purchasing Encumbrance Issues’. Avoid the back-and-forth with support for data collection. Query the document ID in My Oracle Support and add it to your Favorites using the star icon for quick access. The note explains when to use the script and how to use it. The script produces a user-friendly HTML output that contains information relevant to encumbrance issues, along with some data validation checks to identify common data corruption issues on your document. For example, this one diagnostic provides information on the following:

    - Cross Product Setup
    - Document Data Dump
    - Funds Availability
    - Subledger Accounting Information
    - GL and AP Invoice Data
    - Debug and Trace

    This output is ideal for self-service, as it provides known issues in the Data Validation section (related to the document) with links to key documentation. Or the report can be uploaded to support when logging a Service Request. To see more about the diagnostic, attend our September 11, 2012 webcast ‘Overview of Procurement Patching and New Tools for Issue Resolution’. Visit Doc ID 1479718.1 to sign up. Note: this topic will not be listed yet, as it has just been added.

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing the design for particular use cases (by the tool builder). Text editors are probably the most prominent example - a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample - some task, relevant to a programmer's usual workflow, that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data - not that git is better, etc. My guess is that such counterexamples exist, hence this question.

    Read the article

  • Oracle Partner Days and Oracle Days are coming to an EMEA city near you!

    - by Javier Puerta
    Oracle Partner Days

    A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from adoption of the entire Oracle stack and the latest product value propositions and related sales strategy, and you will be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting key contacts in sales and consultancy, delivering Oracle strategy, engaging with the several perspectives of the Oracle portfolio, and offering executive keynotes and deep-dive, business-content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you.

    Oracle Days

    Oracle Days will be hosted after Oracle OpenWorld across EMEA, throughout October and November. By attending an Oracle Day, customers and partners can:

    - Learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation and by following cross-stack solution tracks within the agenda
    - Discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges
    - Meet Oracle experts and watch live demos of new products
    - Get the latest news from Oracle OpenWorld

    See the full calendar and cities here.

    Read the article

  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration during which I check, on every frame, for the sword making contact with any enemy. But once an enemy is hit by that sword, I don't want him to keep getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple of different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile), and I'd like to avoid generating/resetting multiple array lists with every attack.

    1. Each time the sword swings it generates a unique id (maybe by just incrementing a global static long). Every enemy keeps a list of ids of swipes or projectiles that have already hit them, so the enemy knows not to get hurt by something multiple times. Downside: every enemy may have a big list to compare against, so projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search-and-remove on every enemy's array list. Seems kind of slow.

    2. Each sword swipe or projectile keeps its own list of enemies that it has already hit, so it knows not to apply damage again. Downsides: I have to generate a new list (probably pull one from a pool and clear it) every time a sword is swung or a projectile shot. Also, this breaks modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
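    As a rough sketch of the second idea without the two-way messaging it worries about (minimal Python with hypothetical Enemy/SwordSwing names, not the asker's engine), the per-attack bookkeeping can be a plain set owned by the swing, so nothing ever has to flow back from the enemy to the sword:

        class Enemy:
            def __init__(self, name, hp):
                self.name, self.hp = name, hp

        class SwordSwing:
            """One swing; remembers which enemies it has already damaged."""
            def __init__(self, damage):
                self.damage = damage
                self._already_hit = set()   # enemies hit by this swing only

            def try_hit(self, enemy):
                # Called every frame for every enemy overlapping the blade.
                if enemy in self._already_hit:
                    return False            # same swing, same enemy: ignore
                self._already_hit.add(enemy)
                enemy.hp -= self.damage
                return True

        # Usage: the swing keeps hitting *new* enemies but never the same one twice.
        orc, goblin = Enemy("orc", 10), Enemy("goblin", 10)
        swing = SwordSwing(damage=4)
        for _ in range(3):                  # three frames of blade overlap
            swing.try_hit(orc)
        swing.try_hit(goblin)
        print(orc.hp, goblin.hp)            # 6 6

    The set dies with the swing, so there is nothing to broadcast or clean up afterwards; pooling the set is an optional optimization, not a requirement.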

    Read the article

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):

    - Specified that the SessionContext object in a singleton session bean is thread-safe
    - Clarified that the EJB timer distribution and failover rules apply only to persistent timers
    - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
    - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
    - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
    - Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
    - JMS/MDB changes: added new standard activation properties and the unique identifier, and rearranged sections for easier navigation
    - Fixed unresolved cross-references
    - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
    - Synchronized the permissions in the table with the permissions listed for EJB components in Table EE.6-2 of the Java EE Platform Specification
    - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
    - Updated most of the referenced documents to their latest versions

    Happy reading!

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes built from 20-50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance. But the output was very much unexpected.

    Menu screen (150 tris - just to show the full-speed rendering performance of the two test computers):

    a) NVidia Quadro FX 3800 with 1GB: 1600 FPS DirectX and 2600 FPS OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 260 FPS OpenGL

    Inside the game, in a test level (160 dynamic objects, around 10K tris):

    a) NVidia Quadro FX 3800 with 1GB: 45 FPS DirectX and 50 FPS OpenGL
    b) Mac Mini with GeForce 9400M 256MB: 45 FPS OpenGL

    Obviously we don't have the option of static mesh batching, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we concluded that it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?
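    As a generic illustration of why per-object submission hurts (a hypothetical Python sketch, not Irrlicht API), grouping the render list by shared state shows how 160 objects built from a few dozen unique meshes can collapse into far fewer submissions when texture/mesh bindings, or hardware instancing, are shared per group:

        from collections import defaultdict

        # Hypothetical flat render list: (texture, mesh, transform) per object.
        render_list = [("atlas.png", "rock", (1, 2)), ("atlas.png", "ship", (4, 5)),
                       ("atlas.png", "rock", (7, 8)), ("hud.png",   "ship", (0, 0))]

        # Group by render state so each (texture, mesh) pair is bound once and all
        # instances are submitted together; with instancing this becomes one draw
        # call per group instead of one per object.
        groups = defaultdict(list)
        for texture, mesh, transform in render_list:
            groups[(texture, mesh)].append(transform)

        for (texture, mesh), transforms in groups.items():
            # bind_texture(texture); bind_mesh(mesh)   # engine-specific calls
            print(f"draw {mesh} with {texture}: {len(transforms)} instance(s)")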

    Read the article

  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow as this seems to be a more appropriate forum. I've looked around a little here and I did not find an answer, so I hope this is not a recurring question.) This is a question about 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything there is about a level. This is easiest with just a grid of objects, but there are thin walls and other objects that don't seem to fit into a traditional cell of a grid. I want to be able to fit all these together so I can streamline level design, so I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning a scenario where the game feels boxy and discrete; there is a contrast between the (relatively) smoothly moving characters and finite jumps in a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).
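    One common pattern, shown here as a small hypothetical Python sketch (the file format and field names are invented for illustration, not a Java recommendation), is to keep a coarse tile grid for the bulk of the level and a separate entity list for things like thin walls, whose endpoints are stored in exact pixel coordinates:

        import json

        # A hypothetical level file: a coarse tile grid plus free-form entities
        # whose coordinates are in pixels, independent of the grid.
        LEVEL = """
        {
          "tile_size": 32,
          "grid": ["....#",
                   "..#..",
                   "....."],
          "entities": [
            {"type": "thin_wall", "start": [40, 12], "end": [40, 90]},
            {"type": "spawn",     "pos":   [16, 80]}
          ]
        }
        """

        level = json.loads(LEVEL)
        tile = level["tile_size"]

        # Solid tiles become axis-aligned rectangles in world (pixel) coordinates...
        solids = [(x * tile, y * tile, tile, tile)
                  for y, row in enumerate(level["grid"])
                  for x, ch in enumerate(row) if ch == "#"]

        # ...while entities keep their exact pixel positions.
        walls = [e for e in level["entities"] if e["type"] == "thin_wall"]
        print(solids)
        print(walls)

    The grid stays quick to author, while movement and physics run entirely in continuous pixel coordinates, so nothing about gameplay has to feel grid-locked.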

    Read the article

  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies? Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school). Even Little Bo Peep had to keep a wary lookout.

    We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular, souls, and to take steps to avoid messy confrontations well in advance. After we'd apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it, translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA.

    It isn't always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity. A large number of the issues that occur in the enterprise happen just randomly, or even just once ever in a spurious manner, as in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion-row table together, and simultaneously stream the HD feed of five different sporting events, making network access slow just as the corporate online sale started. The decent DBA team, like the going, gets tough under such circumstances. They spring into action, check all of the sources of active information, observe that the issue is no longer happening, and figure that either it wasn't the database's fault or that the reboot of whatever device on the network fixed the problem. This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn't even your fault.

    When fires start raging within the corporate software forest, the DBA's instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are. Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance. Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires the skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem. You can't automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.

    To achieve this, the DBA will check the situation as it is currently happening, and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons. He'll use monitoring tools - built-in, commercial (not to be too crassly commercial or anything, but one such tool is SQL Monitor), and lots of homebrew monitoring tools - to watch for problems and changes in the server environment. You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that "response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn't the database server's fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal." And that is the best thing for any DBA to observe…

    Read the article

  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor: Amazon. What is remarkable is that Microsoft is no longer an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable Virtual Machines with high availability (cross-geographic replication), free Web Sites to try Azure, a MySQL database at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premise integration, a new partner ecosystem and much, much more. Amazon had an edge over Windows Azure in the IaaS (Infrastructure as a Service) space, until now. With the latest release of Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill… This is great news for everyone; it seems that cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each cloud provider are increasingly becoming the differentiator. With today's announcements, it is becoming clear that cloud providers are pushing hard to increase their service footprint and to lower typical barriers to entry such as support for open-source operating systems, free trial offers, higher availability, faster deployment times and simpler enterprise integration.

    Read the article

  • Space Invaders-type game: Keeping the enemies aligned with each other as they turn around?

    - by CorundumGames
    OK, so here's the lowdown on the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in an enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e. not in a grid as planned). I haven't implemented the enemies going downward yet, so let's not worry about that right now. Any tips?
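    One way to keep the grid intact is to make turning around a squadron-level decision rather than a per-Enemy one: check every enemy first, then apply a single shared velocity to all of them in the same frame, so no alien ever flips direction on its own. A minimal Python sketch (hypothetical names and constants, independent of PyGame and of the asker's enemysquadron module) might look like this:

        SCREEN_W, SPEED, DROP = 640, 2.0, 8.0

        class Enemy:
            def __init__(self, x, y):
                self.x, self.y = x, y
                self.w = 24                       # sprite width

        class Squadron:
            def __init__(self, enemies):
                self.enemies = enemies
                self.vx = SPEED                   # one velocity for the whole formation

            def update(self):
                # Decide first, move second: if ANY enemy would leave the screen,
                # the whole formation reverses and drops, so spacing is preserved.
                hits_edge = any(e.x + e.w + self.vx > SCREEN_W or e.x + self.vx < 0
                                for e in self.enemies)
                if hits_edge:
                    self.vx = -self.vx
                    for e in self.enemies:
                        e.y += DROP
                for e in self.enemies:
                    e.x += self.vx

        squad = Squadron([Enemy(30 * c, 20 * r) for r in range(3) for c in range(6)])
        for _ in range(300):                      # simulate a few hundred frames
            squad.update()

    Because every enemy receives exactly the same displacement on every frame, their relative positions never drift, even with continuous motion.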

    Read the article

  • Tracking form abandonment

    - by Alec Sanger
    I'm looking for a decent way to track form abandonment. Ideally, I would like to see how many people start filling out a form but do not complete it, as well as the last field that was filled out. The website is a fairly large WordPress site with quite a few forms. Some of these forms are to register for events, some are for donations, some are for information requests. My first attempt at this was adding a generic jQuery handler that bound functions to all forms on the site. When a form element was blurred, I would trigger a Google Analytics event with the name of the form, the name of the field, and whether or not it was filled. I expected to be able to go to the Event Flow section in Google Analytics and see the flow of these form events; however, since there are so many forms and other events occurring on the website, Google wouldn't let me break them out very well. The other issue is that Quform doesn't name its fields anything relevant, and it doesn't look like we can name them ourselves. This results in a lot of ugly form names that don't mean anything without cross-referencing the actual form. Does anybody have any suggestions on how I can achieve more usable form abandonment metrics in a scenario like this?

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a separate database for each customer's data. Without a LOT more information it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but also things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and - very important - what the security requirements are. From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead - it needs a certain amount of memory for locking and so on - but it has a very clean boundary: everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema, but keeping 15,000 schemas can be challenging too. My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix - perhaps this person could segment customers into larger regions or districts or products, each in a database, and within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
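    As a toy illustration of that scoring approach (a hypothetical Python sketch; the requirements and scores below are invented for the example, not the author's), the spreadsheet reduces to summing a number per choice and picking the highest total:

        # Hypothetical requirements and 1-5 scores for each segmentation choice.
        requirements = ["security isolation", "admin overhead", "cross-customer queries"]

        choices = {
            "one DB, multiple schemas":    [3, 2, 5],
            "one DB per customer":         [5, 1, 2],
            "one DB per region + schemas": [4, 4, 4],
        }

        totals = {name: sum(scores) for name, scores in choices.items()}
        for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{total:2d}  {name}")
        # The highest total wins; per-requirement weights could be added the same way.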

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine

    I am planning a computer game and its engine. There will be a 3-dimensional world with first-person view and it will be single-player for now. The programming language is C++ and it uses OpenGL.

    Data-Centered Design Decision

    My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to Use a Relational Database

    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This contains virtually all game content like items, characters, inventories, ... except meshes and textures, which are even more performance-related, so I will keep them in memory. Is a SQL database fast enough for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages Would Be

    The advantages of using a relational database like MySQL would be the data-oriented structure which allows fast computation. I would not need objects for representing entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to take care of data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy because there already is a place where the whole game state is stored.
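    To get a feel for what the relational route looks like, here is a minimal sketch using Python's sqlite3 module purely for illustration (the real engine would use SQLite's C/C++ API; the table and column names are hypothetical). An in-memory database holds the entities and answers the "objects near the player" query directly:

        import sqlite3

        # In-memory database standing in for the global data manager.
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE entity (
                          id INTEGER PRIMARY KEY,
                          kind TEXT, x REAL, y REAL, z REAL)""")
        db.executemany("INSERT INTO entity (kind, x, y, z) VALUES (?, ?, ?, ?)",
                       [("player", 0, 0, 0), ("orc", 3, 0, 1), ("chest", 250, 0, 9)])

        # "Objects near the player": a bounding-box query the renderer could use.
        px, py, pz, r = 0.0, 0.0, 0.0, 10.0
        near = db.execute("""SELECT id, kind FROM entity
                             WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?
                               AND z BETWEEN ? AND ?""",
                          (px - r, px + r, py - r, py + r, pz - r, pz + r)).fetchall()
        print(near)          # the chest at x=250 is filtered out

        # Position updates are plain writes; whether this is fast enough per frame
        # is exactly the open question, so it would need benchmarking in the engine.
        db.execute("UPDATE entity SET x = x + 0.1 WHERE kind = 'player'")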

    Read the article

  • monitor height differences & the mouse going off screen

    - by fastmultiplication
    In Ubuntu 10.10 I have a dual monitor setup. I have an nVidia graphics card and am using TwinView. One of the monitors is 1024 pixels high and the other is 900. In the monitor configuration screen, and in real life, I have them set up side by side, with the 1024-pixel one on the left. The result of this is that when I am at the bottom of the left monitor and move the mouse to the right, it goes into the hidden area below the right monitor's visible area. It seems like it would make a lot more sense for it to be bumped up to the bottom of the right monitor, since one almost never wants to move the mouse into an area of the screen that doesn't show up. And systems I have used before have been set up that way. How can I set this up? I am not interested in lists of window managers for Ubuntu; I would like to know the identity of a particular WM, or a set of steps I can take, to solve the particular problem I have outlined above. Thanks! EDIT: I changed to using two separate X screens, and set them up relatively positioned so that just the corners touch and the mouse can cross there, so the difference in heights doesn't matter.

    Read the article

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user - I recently made the mistake of trying a different nvidia driver. I'd managed to get the last one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to cross over to nvidia-experimental-310 and this produced a system error. Swapping back and forth between proprietary drivers now always causes an error and I can't get any of them to work. Installing through the terminal I get this error message every time:

        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information

    On rebooting, I end up with the crappy screen resolution and the thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the nvidia driver from installing?

    SOLUTION FOUND

    Okay - so I have a workaround. This is what has worked:

    1. Upgrade to kernel 3.7.0, as detailed here.
    2. Upgrade to the latest version of the nvidia drivers, as detailed here.

    No idea what was happening with kernel 3.5.0-19, but this seems to be better. Maybe a little slower on boot, but after days of messing around it's nice to have something that works.

    Read the article

  • Connecting two monitors at the corners

    - by fastmultiplication
    I am using two separate X screens on two monitors and I would like them to be connected at the lower right corner. That is, if you move to the lower right corner of screen0, the mouse should appear at the upper left of screen1. I do not want an entire edge of each monitor to be permeable to the mouse. I modified the xorg.conf file like so:

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            Screen      1  "Screen1" Relative "Screen0" 1200 1000
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

    (screen0's resolution is 1280x1024), so there is a bit of overlap. However, whenever I move the mouse to the bottom of screen0, the pointer appears at the upper left of screen1, and the entire top of screen1 is permeable to the mouse and brings it to the lower left corner of screen0. I have tried various numbers following the "Relative" statement - if I put 1280 1024 the mouse does not cross over at all; if I use 1280 1023 the entire right side of screen0 is permeable. I haven't been able to find any documentation about how to explicitly tell the mouse where to cross over - is there some? It seems that xorg is being really aggressive in guessing where the mouse crossover should take place. Does anyone know how to do this? Thanks!

    Read the article

  • How can I get non-programmer colleagues on board with bespoke software rather than Dynamics CRM + Sharepoint?

    - by Bendos
    I am working with a company which designs and builds one-off machines. They have been 'dabbling' with hosted Dynamics CRM and SharePoint (on different servers!) in an attempt to centralise their data and help colleagues collaborate more effectively across projects. They haven't used either system to its potential. Now we are looking at the engineering department, who already use a form of version control software for their various CAD files (Autodesk Vault). However, it is becoming increasingly necessary to implement a more generic file version control system, as they use many more files than can be managed in Vault (sometimes just photos or scans of paper documents), which is why they were looking at using SharePoint. However... as the 'programmer' of the bunch, I can see several scenarios which don't seem to fit well with the Dynamics + SharePoint approach: simple reports based on cross-table queries, exporting certain metrics as a spreadsheet, defining project hierarchies and many-to-many relationships, and so on. As such I have been pushing for an in-house developed 'ECM'/'ERP' software package (perhaps in .NET or PHP). Some colleagues seem to attach a greater value to the MS software (perhaps because it has a logo!) but don't see that it's just a framework, not a solution. Can anyone provide a good example of when custom software would actually be better than using Dynamics + SharePoint, and how do I relate that to non-technical staff?

    Read the article

  • How do I configure an Intel HD Graphics 4000?

    - by derabbink
    First off, please note that last night I already posted this question to a Launchpad mailing list, so this could be considered a cross-post. However, I think this is a better place to ask the same question.

    The question: how can I configure my Ubuntu 12.04, with an upgraded kernel (3.6), to use the Intel HD Graphics 4000 adapter? (Intel HD 4000 is the standard graphics adapter of 3rd-generation Intel Core i7 (Ivy Bridge) CPUs.)

    Some output:

        $ glxinfo
        name of display: :0
        X Error of failed request: BadRequest (invalid request code or no such operation)
        Major opcode of failed request: 154 (GLX)
        Minor opcode of failed request: 19 (X_GLXQueryServerString)
        Serial number of failed request: 12
        Current serial number in output stream: 12

    $ cat /etc/X11/xorg.conf (this is probably the farthest from what it should be):

        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
        EndSection

        Section "Module"
            Load "glx"
        EndSection

    $ lspci - I only listed the lines I think are relevant. If you want more info in order to help me, please comment :)

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        16:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series]
        16:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Turks HDMI Audio [Radeon HD 6000 Series]

    Read the article

  • Problem with gluOrtho2D()

    - by Shashwat
    I was trying to understand the gluOrtho2D function. I have drawn 4 lines originating from the center and reaching the 4 corners of the screen. You can follow the code below. osize is a variable used to set the parameters of gluOrtho2D; it creates a window of size 2*osize. It works fine when osize is 1: the lines reach the corners. But as I increase the value of osize, the length of the lines decreases (the cross becomes smaller and does not cover the whole screen), even though I think it should still reach the corners.

        void display()
        {
            glClear( GL_COLOR_BUFFER_BIT );
            //glViewport(0, 0, 100, 100);
            glMatrixMode (GL_PROJECTION);
            float osize = 1.2;
            //glOrtho(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0, -1.0, 1.0);
            gluOrtho2D(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0);
            glMatrixMode (GL_MODELVIEW);

            glBegin(GL_LINES);
                glColor3f(0.0, 0.0, 1.0);
                glVertex2f(0.0, 0.0);
                glVertex2f(-osize*1.0, -osize*1.0);
                glVertex2f(0.0, 0.0);
                glVertex2f(-osize*1.0, osize*1.0);
                glVertex2f(0.0, 0.0);
                glVertex2f(osize*1.0, -osize*1.0);
                glVertex2f(0.0, 0.0);
                glVertex2f(osize*1.0, osize*1.0);
            glEnd();

            glutSwapBuffers(); //includes glFlush();
        }

    What is the problem?

    Read the article

  • Choosing a JavaScript Asynch-Loader

    - by Prisoner ZERO
    I've been looking at various asynchronous resource loaders and I'm not sure which one to use yet. Where I work we have disparate group efforts whose class modules may use different versions of jQuery (etc). As such, nested dependencies may differ as well. I have no control over this, so this means I need to dynamically load resources which may use alternate versions of the same library. As such, here are my requirements:

    - Load JavaScript and CSS resource files asynchronously.
    - Manage dependency order and nested dependencies across versions.
    - Detect if a resource is already loaded.
    - Must allow for cross-domain loading (CDNs).
    - (optional) Allow us to unload a resource.

    I've been looking at:

    - Curl
    - RequireJS
    - JavaScriptMVC
    - LABjs

    I might be able to fake these requirements myself by loading versions into properly namespaced variables and using an array to track what is already loaded... but (hopefully) someone has already invented this. So my questions are: Which ones do you use, and why? Are there others that may satisfy my requirements fully? Which do you find most eloquent and easiest to work with, and why?

    Read the article

  • Developing an Interface to a Dynamic System

    - by radix07
    I work for a small company and have been designing a GUI to interface with our embedded system. The problem with this embedded system is that it is not a finished product (and may never be) and is constantly under development, being tweaked and updated for different customers and applications in small volumes. To deal with this I made a program that exports all the data from the spreadsheet where most of the embedded system variables are sourced, and throws it into a small database for the GUI application to use. This database program also spits out a cross-reference file for the embedded system, which allows the GUI to look up all the variables. This system works pretty well so far, and is even integrated with version control among the GUI, the database, and the embedded system. The big problem is that there is constant development on several projects that use this system, and it gets terribly tedious to keep the system up to date and bring in new changes. This has gotten to the point where I have had to code the GUI to dynamically (generically) generate all interfaces, since I am never guaranteed to find the same data the same way. I have not been able to come up with a good way to uniquely identify the data I import from Excel, since all fields are able to be changed (due to engineering stubbornness, code refactoring and/or Excel issues) and I cannot assign a fixed reference within the sheet itself. So, are there any good methods or ideas on how to handle the chaos?

    Read the article
