Search Results

Search found 1090 results on 44 pages for 'simon walker'.

Page 10/44 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • 2D management game [on hold]

    - by Simon Bull
    Very newbie question, but I have a game idea in mind. It will be 2D and data-centric, like Football Manager. However, I am struggling to find a platform that would suit. I am an experienced line-of-business developer, so I am happy to write code, but I would like a platform that does some of the legwork for me, so I was avoiding OpenGL. I would also like to be able to deploy to iOS, Android, Windows and OS X. What are the options? To be more clear, the game is not a normal platformer or shooter type of game, so GameMaker is likely to be way too basic and Unity seems a little over the top (though I am not sure if its GUI options would fit?). The majority of the game is more like business screens, just displaying data and having buttons to click. Are there options for this type of game (it may help to look at Football Manager)?

    Read the article

  • High resolution graphical representation of the Earth's surface

    - by Simon
    I've got a library, which I inherited, which presents a zoomable representation of the Earth. It's a Mercator projection and is constructed from triangles, the properties of which are stored in binary files. The surface is built up, for any given viewport, by drawing these triangles in an overlapping fashion to produce the image. The definition of each triangle is the lat/long of its vertices. It looks OK at low zoom levels but looks progressively more ragged as the user zooms in. The viewports are primarily referenced through a rectangle of lat/long co-ordinates. I'd like to replace it with a better quality approach. The problem is, I don't know where to begin researching the options, as I am familiar neither with the projections needed nor with the graphics techniques used to render them. For example, I imagine that I could acquire high resolution images - say Mercator projections, although I'm open to anything - break them into tiles and somehow wrap them onto a graphical representation of a sphere. I'm not asking "how do I"; rather, where should I begin to understand what might be involved and the techniques I will need to learn? I am most grateful for any "Earth rendering 101" pointers folks might have.
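    As a rough illustration of the maths a tiled approach usually rests on, here is a small Python sketch of the standard Web Mercator lat/long-to-tile mapping. This assumes the generic "slippy map" convention used by OpenStreetMap-style tile sets; it is not taken from the library described above:

        import math

        def latlon_to_tile(lat_deg, lon_deg, zoom):
            """Map a WGS84 lat/long to Web Mercator tile indices at a given zoom."""
            lat_rad = math.radians(lat_deg)
            n = 2 ** zoom                      # tiles along one axis at this zoom
            x = (lon_deg + 180.0) / 360.0 * n  # longitude maps linearly to x
            # Mercator stretches latitude towards the poles
            y = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n
            return int(x), int(y)

        # Example: which tile covers London at zoom level 10?
        print(latlon_to_tile(51.5074, -0.1278, 10))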

    Read the article

  • Best way to cache apt downloads on a LAN?

    - by Ken Simon
    I have multiple Ubuntu machines at home and a pretty slow internet connection, and sometimes multiple machines need to be updated at once (especially around new Ubuntu releases). Is there a way for only one of my machines to download the packages, with the other machines fetching the debs from it? Does it involve setting up my own local mirror? Or a proxy server? Or can it be made simpler?

    Read the article

  • How to use data mining principles in this project?

    - by Simon
    I'm taking a Data Mining class this semester and we are free to choose the final project. For a few months I've been working on procedural planet rendering (something like this: http://www.youtube.com/watch?v=rL8zDgTlXso). Do you have any idea which data mining principles I could use to keep working on this project? Maybe I could try to generate interesting terrains from a set of real maps? Any publications on that subject? Any other ideas?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality - in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what's going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of the other tools that Falcon can't orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use your data analysis tools to trace the paths from the raw data to the answer that the analysis provides.

    Read the article

  • Searching for an online shop accessible via API

    - by Simon A. Eugster
    I need an online shop with a custom interface (customizing items with Ajax, with a preview included). Writing it myself does not make too much sense (implementing all the payment options etc.), so I would like to use an existing open-source online shop. I would like to build my own UI which, for example, tells the shop to add an item to its cart - i.e. without using the online shop's native UI. More precisely, it should be an online gallery where the user can directly order an image if they like it. The final checkout/payment page can be native again. Is there a shop system that supports this? Or is it still faster to write it on my own? Or are there better options?

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        -----------------------------------------------------------------------

    And the same query on 9i:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        -----------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or, more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule-Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -----------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to subsidiary tables that fill in the gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements specific to the query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data that isn't part of the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g - and even, in some situations, slightly faster - with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
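    As a very loose illustration of the client-side join described above, here is a small Python sketch of the general technique - hash the base rows once on a shared key, then probe that same hash for each subsidiary result set. This is an assumed, simplified re-creation for illustration only, not the actual Schema Compare engine code:

        def client_side_left_join(base_rows, *subsidiary_row_sets):
            """Left-join several result sets onto a base set, hashing the base only once."""
            index = {}
            for row in base_rows:                        # hash the ALL_TABLES-style rows once
                index[(row["owner"], row["name"])] = dict(row)
            for rows in subsidiary_row_sets:             # probe the same hash for each join
                for row in rows:
                    match = index.get((row["owner"], row["name"]))
                    if match is not None:                # unmatched subsidiary rows are dropped,
                        match.update(row)                # matched ones fill in the gaps
            return list(index.values())

        tables = [{"owner": "HR", "name": "EMPLOYEES", "tablespace": "USERS"}]
        partitions = [{"owner": "HR", "name": "EMPLOYEES", "partitioning_type": "RANGE"}]
        print(client_side_left_join(tables, partitions))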

    Read the article

  • SEO on an existing platform

    - by Simon
    I've been given the task of increasing user visits and conversions for a recruitment website. Conversions would be interested job seekers submitting their CV. The manager would first like to improve the organic search results and optimize the website before starting with targeted campaigns. The problem is, they are using a proprietary recruitment software platform which I can barely make changes to. For example, the URLs all look like dynamic URLs without any semantic meaning, and the markup is almost completely built automatically by that platform. I'm also confident that the lack of submitted CVs is due to a bad user experience on the website (no incentives or clear CTA to register). Besides optimizing the static texts and page titles, is there anything I can do? Thanks

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.

    Handling exceptions

    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.

    Exception handler control flow

    try blocks

    You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.

    catch blocks

    The only way of entering a catch block is if it is run by the SEH. At the start of the block's execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:

        .try {
            // ...
            newobj instance void [mscorlib]System.Exception::.ctor()
            throw
        MidTry:
            // ...
            leave.s RestOfMethod
        } catch [mscorlib]System.Exception {
            // ...
            leave.s MidTry
        }
        RestOfMethod:
        // ...

    As far as I know, this mechanism is not exposed in C# or VB.

    finally/fault blocks

    The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.

    filter blocks

    I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.

    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.

    Read the article

  • Easter eggs as IP protection in software

    - by Simon
    I work in embedded software, and for some reason, management wants to hide an Easter egg as a means of IP protection. They call it a watermark, and since our software interacts with the video preview feed (the image displayed on screen before you take a photo), they want me to implement a trigger which will react to some unusual video input (a video Konami code of sorts: dark - bright - dark - bright - whatever). When this trigger fires, something strange happens (something outside the normal behavior of the software). The goal is to check whether our software is included in a device. Does this sound like a good idea? I have many arguments against this move:
    - What if the Konami code is too sensitive and a user triggers it by accident?
    - Does this kind of watermark have any legal value?
    - What if this "feature" is discovered by the client?
    - The performance penalty should be very small, since the software runs on small devices.
    - I am the one developing this trigger. If things go wrong, what is my responsibility?
    What is your opinion of this method? I can't find a link, but I remember seeing an answer on this site suggesting that putting in Easter eggs for protection purposes was a good idea. Has anyone tried it with good results?

    Read the article

  • Weird system freeze. Nothing works keyboard/mouse/reset button - Ubuntu 12.04 64bits [closed]

    - by Simon
    I have a fresh PC:
    - i5 3570K with Intel HD 4000 onboard
    - ASRock Z77 Extreme4-M
    - 8GB RAM - Adata 1333MHz
    - 1TB Seagate drive, 7200rpm
    I also have fresh systems - Ubuntu/Win7 - and today something strange happened. Ubuntu just froze, twice. Everything stopped; even the keyboard and mouse weren't responding. Even the RESET button didn't work. Right now memtest is running, but I'd like to know where else I can look for the cause. Can it be a software fault if even reset isn't working? Only a long press of the reset button rebooted the PC... I'm a bit confused. Or should I test the components - CPU, motherboard, disk? Which logs in Ubuntu should I check to diagnose the cause? Ehm, I've had a few adventures with this PC already. The shipped motherboard was broken (ASRock Z68 Extreme3) and had an old BIOS, so I had to contact the reseller, replace it, and in the end decided on the Z77, but everything took 3 weeks, so I have a bad feeling... Edit: Both freezes happened while editing something in gedit (it could be coincidence) and after a few updates today - when memtest finishes I'll check what was updated.

    Read the article

  • In-memory datastore in Haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • Compiling on the desktop!! no?

    - by simon
    So I have compiled my first program today, with the help of the Ask Ubuntu members..... thanks so much!!! ;) This is what I have compiled: https://github.com/treeder/logitech_unifier But now, I have some questions: 1 - I compiled my file on the desktop; I thought it would be easier at first, but I never thought it would create a file on my desktop...... so what do you guys do with the file created by the compilation? I don't think I need it anymore.... so do I delete it? Or do I keep it? Is there a folder I should specifically use for compiling? Thanks for answering these newbie questions.

    Read the article

  • Getting graduates up to speed?

    - by Simon
    This question got me thinking about how companies deal with newly-hired graduates. Do experienced programmers expect CS graduates to write clean code (by clean I mean code easily understandable by others; maybe that is too much to expect)? Or does a significant portion of graduates at your place (if any) just end up testing and fixing small bugs in existing applications? And even if they do bug fixes, do you end up spending double the amount of time just checking that they did not end up breaking anything and creating new bugs? How do you deal with such scenarios when pair programming and code reviews are not available options (for reasons such as personal deadlines), and what techniques have you found to get fresh graduates up to speed? Some suggestions would be great.

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. It is one of the few screens that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND or INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table where a separate change forced a table rebuild. Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL column to a NOT NULL column. To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle, as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
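    To illustrate the general idea of splicing user-supplied conversion expressions into a generated rebuild script, here is a hypothetical Python sketch. It is not the actual engine code, and the table, column and expression names are invented; only NUMTODSINTERVAL itself is a real Oracle function:

        def build_rebuild_select(table, columns, conversions):
            """Build the SELECT list for a table rebuild, substituting a user-supplied
            conversion expression wherever a plain column copy is not possible."""
            parts = []
            for col in columns:
                expr = conversions.get(col, col)   # default: copy the column as-is
                parts.append(f"{expr} AS {col}")
            return f"SELECT {', '.join(parts)} FROM {table}"

        # e.g. converting a NUMBER column to INTERVAL DAY TO SECOND needs an
        # interval_unit that only the user can supply:
        print(build_rebuild_select(
            "tbl1",
            ["id", "duration"],
            {"duration": "NUMTODSINTERVAL(duration, 'MINUTE')"},
        ))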

    Read the article

  • Best Resources for learning SQL? [closed]

    - by Simon
    Possible Duplicate: Good Books and videos for absolute beginner to SQL

    I have landed a role as a product engineer for a web-based product. A big part of the product is allowing its users to create SQL queries to pull in business information from their back-end databases. I know the very basics of SQL and need to spend some time getting a better grasp of it. I have the w3schools tutorial on my to-do list, but was hoping to get some answers that point me to good resources for learning SQL. I have no preference - I can buy a book (SQL For Dummies?), or use online resources, online videos, audio, etc.

    Read the article

  • How should I track multi-valued page attributes (e.g. tags) using custom variables?

    - by Simon
    Our pages can each have many tags, e.g. 'football', 'sms', 'nsfw', etc., which we would like to track in Google Analytics. We're already tracking things like category using Google Analytics custom variables, and we've used three of the five available slots so far. How can we track tags the same way? If we just mush them all together - e.g. 'football, sms, nsfw' - can we still track the pages that are tagged 'football'? What's the right way to track multi-valued page attributes using custom variables?

    Read the article

  • How well does the Intel HD 3000 work on Ubuntu?

    - by Simon
    Right now I have a notebook with an Nvidia 8400M GS (I know, it's not a good card) and it's impossible to work normally when I plug in an external monitor (1920x1080). Windows 7 can deal with it without problems (1440x900 on the notebook + 1920x1080 external); on Ubuntu I have to choose one screen and turn off the second one. Even with only one screen, Ubuntu (Unity or even GNOME 3) sometimes hangs for a while. I've not found a solution for this yet, but never mind - it's probably because of my card and/or Nvidia's drivers. I'm going to buy a new PC, for now with only the integrated Intel HD 3000, and my question is: should I expect similar problems with this card? I've found a link to Intel's webpage about drivers - "only the community develops them" - and I'm a bit concerned. I'll then use only one monitor (the bigger one), but how well do those drivers work? Are there any performance tests?

    Read the article

  • Simpler Times

    - by Simon Moon
    Does anyone else out there long for the simpler days, when you needed to move a jumper in the jumper block to set your modem card to use IRQ7 so it would not conflict with the interrupts used by other boards in your PC, and your modem card came with a 78-page manual telling you everything you would need to know to write your own driver for the board, including a full schematic along with the board layout showing every chip, capacitor, and resistor? Ahhhhh, the simplicity!

    I am wrestling with UserPnp issues for a USB software licensing dongle that is needed by some third-party software in one of our production applications. Of course, every machine in production is virtual, so it could be anything in the chain, from the software application library, to the device driver running on the VM, to the configuration of the simulated USB port, to the implementation of the USB connection and transport in the virtual host, to the physical electrical connections in the USB port on the hypervisor.

    If only there were a virtual analog of a set of needle-nose pliers to move a virtual jumper. Come to think of it, I always used to drop those damn things such that they would land in an irretrievable position under the motherboard anyway.

    Read the article

  • How can I open binary image files? (.img)

    - by Simon Cahill
    I'm a Windows/Mac/Ubuntu and Android user, so I know what I'm talking about when I say: how do I open binary image files (.img)? They just won't open, on any OS... I'm an Android dev... I'm currently working on a ROM (I also program, using Windows), but I need to extract files from .img files. I've converted them to .ext4.img but they just aren't recognized by Linux (definitely not by Android), by Mac OS or by Windows. In other words, I can't open, extract or mount them. Can anyone help me? I'm kinda confused...

    Read the article

  • I think "/lib/modules/$(uname -r)/build" points to an incorrect folder

    - by Simón
    I compile/create my own deb packages of the kernel with:

        make-kpkg --rootcmd fakeroot --initrd --append-to-version=$version --revision=1 kernel_image kernel_headers

    But when I install both packages, in /lib/modules/(*name_kernel_compiled*) it creates two links, sources and build, both pointing to the folder with the sources I compiled from. The sources link is correct, but build should point to /usr/src/linux-(kernel version), don't you think?

    Read the article

  • What method do I use to manage an app-specific background process?

    - by Simon Dubois
    I am developing an application with different behavior depending on the arguments: "-config" starts a Gtk window to change options, and to start and close the daemon; "-daemon" starts a background process that does something every X minutes. I already know how to use fork/system/exec etc., but I would like to know the main logic of such an application: how to restart or refresh the daemon when the configuration changes, and how to keep only one instance of the daemon running. I have read that killing the daemon to restart it is not a clean way to do it. How do other applications do it (Ubuntu One, weather forecast, RSS feeds working with the notification area)? Thanks for your help. PS: I don't want to create a system-wide daemon, just a user application with a background process.
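    One common pattern for the "single instance plus clean reload" part is a lock file combined with a SIGHUP handler: the daemon holds an exclusive lock so a second copy exits immediately, and the configuration GUI sends SIGHUP instead of killing the process. The Python sketch below is only an assumed illustration of that pattern; the file path, config format and work function are invented placeholders:

        import fcntl, os, signal, sys, time

        LOCK_PATH = "/tmp/mydaemon.lock"      # hypothetical per-user lock file
        reload_requested = False

        def on_sighup(signum, frame):
            global reload_requested           # the -config GUI can send SIGHUP
            reload_requested = True           # instead of killing the daemon

        def load_config():
            return {"interval_minutes": 5}    # placeholder for real config parsing

        def do_periodic_work(config):
            pass                              # placeholder for the real task

        def main():
            global reload_requested
            lock = open(LOCK_PATH, "w")
            try:
                fcntl.lockf(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)   # only one instance
            except OSError:
                sys.exit("daemon already running")
            lock.write(str(os.getpid()))
            lock.flush()
            signal.signal(signal.SIGHUP, on_sighup)
            config = load_config()
            while True:
                if reload_requested:
                    config = load_config()    # refresh settings without restarting
                    reload_requested = False
                do_periodic_work(config)
                time.sleep(60 * config.get("interval_minutes", 5))

        if __name__ == "__main__":
            main()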

    Read the article

  • Can I show a table of one custom variable against another?

    - by Simon
    We have a number of custom variables set up in Google Analytics. We'd like to show a table of event occurrences broken down by two custom variables, e.g. if variable one can be A, B, or C and variable two can be J, K or L:

        Events | A   | B   | C   |
        -------+-----+-----+-----+
        J      | 345 | 65  | 12  |
        K      | 234 | 43  | 7   |
        L      | 123 | 21  | 4   |
        -------+-----+-----+-----+

    How do I get the information in that format?
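    If the cross-tab can't be produced directly in the reporting interface, one fallback is to export the raw event rows (for example via the Analytics reporting API) and pivot them yourself. A hypothetical Python sketch of that pivot, assuming you already have one record per event carrying both variable values:

        from collections import defaultdict

        # Each record carries the values of both custom variables for one event.
        events = [
            {"var1": "A", "var2": "J"},
            {"var1": "A", "var2": "J"},
            {"var1": "B", "var2": "K"},
        ]

        counts = defaultdict(int)
        for e in events:
            counts[(e["var2"], e["var1"])] += 1   # row = variable two, column = variable one

        for row in ["J", "K", "L"]:
            print(row, [counts[(row, col)] for col in ["A", "B", "C"]])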

    Read the article

  • Use of list-unsubscribe to improve inbox delivery

    - by Jeffrey Simon
    To overcome email being classified as spam by Gmail, Google recommends a number of steps, which we have implemented (namely SPF, DKIM, and Precedence: bulk). One additional measure they recommend at https://support.google.com/mail/bin/answer.py?hl=en&answer=81126#authentication reads as follows:

    Because Gmail can help users automatically unsubscribe from your email, we strongly recommend the following: Provide a 'List-Unsubscribe' header which points to an email address where the user can unsubscribe easily from future mailings (Note: This is not a substitute method for unsubscribing).

    Documentation for List-Unsubscribe is found at http://www.list-unsubscribe.com/. From this documentation I expect a button to be provided by a supported mail client. I have tested the 'List-Unsubscribe' header and it does not appear to produce the button. I have tested in both Gmail and OS X Mail, with an http address alone and with both an email address and an http address. The format of the header is as follows:

        List-Unsubscribe: <mailto:[email protected]>, <http://domain.com/member/unsubscribe/[email protected]?id=12345N>

    No button appears in any test. My questions: How widely is List-Unsubscribe supported? Should a button be appearing somewhere, or does something else have to be present? I have seen a comment that even if the button is not present, services like Gmail, Yahoo, and Hotmail/Windows Live give higher regard to email carrying the header, so it might be worthwhile for this aspect alone. Please note that our standard email footer already contains instructions and a link to allow unsubscribing from our email. Finally, is it worthwhile to implement this header? (That is, are there any downsides?)
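    For what it's worth, List-Unsubscribe is just an ordinary message header (defined in RFC 2369), so adding it from application code is straightforward. A minimal sketch using Python's standard library email package; the addresses below are placeholders, not the real ones from the message above:

        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "news@example.com"
        msg["To"] = "member@example.com"
        msg["Subject"] = "Newsletter"
        # Both forms can be supplied, comma-separated: a mailto: and an http(s): URI.
        msg["List-Unsubscribe"] = (
            "<mailto:unsubscribe@example.com>, "
            "<https://example.com/unsubscribe?id=12345>"
        )
        msg.set_content("Hello!")
        print(msg.as_string())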

    Read the article

  • Why do programming languages allow shadowing/hiding of variables and functions?

    - by Simon
    Many of the most popular programming languages (such as C++, Java, and Python) have the concept of hiding or shadowing variables or functions. When I've encountered hiding or shadowing, it has been the cause of hard-to-find bugs, and I've never seen a case where I found it necessary to use these features of the languages. To me it would seem better to disallow hiding and shadowing. Does anybody know of a good use of these concepts?
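    For readers unfamiliar with the term, here is a tiny illustrative Python example of the kind of shadowing being described (purely an illustration of the concept, not an example from the question):

        limit = 10          # module-level variable

        def count_items(items):
            limit = 5       # shadows the module-level 'limit' inside this function
            return [x for x in items if x < limit]

        print(count_items([1, 4, 8, 12]))   # filtering uses 5, not 10 - easy to misread
        print(limit)                        # the outer 'limit' is untouched: 10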

    Read the article
