Search Results

Search found 299 results on 12 pages for 'jeroen bart engelen'.

Page 3/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • .NET Reflector 7.2 Early Access Build 1 Released

    - by Bart Read
    I've just posted up full details of this release on the .NET Reflector blog at http://www.reflector.net/2011/05/life-is-a-journey-not-a-destination-net-reflector-7-2-ea-1-has-been-released/ and, breaking with previous tradition, this includes a fairly extensive changelog. You can download this EA build from the .NET Reflector homepage at http://www.reflector.net/. Enjoy! (And please don't forget to tell us what you think on the forum, http://forums.reflector.net/, using the tags #7.2 and #eap.)...(read more)

    Read the article

  • What is controlling the desktop display?

    - by Bart Silverstrim
    I have two Ubuntu systems, and in the course of changing configurations something has become muddled. I disabled Unity in favor of gnome shell, the older style display of the desktop, and then installed xfce 4. It seemed everything would work okay, and for the most part it does, except I noticed that on one system there's something else controlling settings. On system one, if I right-click the desktop, I get a menu with these options:

    - open in new window
    - create launcher...
    - create url link...
    - create folder...
    - create from template ->
    - open terminal here
    - paste
    - desktop settings...
    - properties...
    - applications ->

    On system two, right-clicking brings up this menu:

    - Create new folder
    - Create new document ->
    - organize desktop by name
    - keep aligned
    - paste
    - Change Desktop Background

    Additionally, even though I set the background with the xfce settings manager, on system two that background appears for a few seconds before it's replaced by something that looks like a background from Ubuntu's original desktop. And it's being controlled by whatever comes up with "Change Desktop Background" when right-clicking, which isn't the xfce settings manager. On the first system, that right-click does bring up the xfce settings tool. In short, something is controlling/overriding the xfce settings on machine two, but I can't find what file or configuration tool is doing it. How can I get system two to behave like system one, giving control of settings and configuration of X to XFCE's tools?

    Read the article

  • ReSharper C# Live Template for Declaring Routed Event

    - by Bart Read
    Here's another WPF ReSharper Live Template for you. This one is for declaring standalone routed events of any type. Again, it's pretty simple:

        #region $EVENTNAME$ Routed Event

        public static readonly RoutedEvent $EVENTNAME$Event = EventManager.RegisterRoutedEvent(
            "$EVENTNAME$",
            RoutingStrategy.$ROUTINGSTRATEGY$,
            typeof( $EVENTHANDLERDELEGATE$ ),
            typeof( $DECLARINGTYPE$ ) );

        public event $EVENTHANDLERDELEGATE$ $EVENTNAME$
        {
            add { AddHandler( $EVENTNAME$Event, value ); }
            remove { RemoveHandler( $EVENTNAME$Event, value ); }
        }

        protected virtual void On$EVENTNAME$()
        {
            RaiseEvent( new $EVENTARGSTYPE$( $EVENTNAME$Event, this ) );
            $END$
        }

        #endregion

    Here are my previous posts along the same lines:

    - ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate
    - ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    Enjoy!

    Technorati Tags: resharper,live template,c#,routed event,wpf,boilerplate,code generation
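
    As a concrete illustration, here's roughly what the template expands to for a hypothetical Tapped event on a hypothetical MyButton control (the event, delegate, and type names are invented for this example; the shape of the code is simply the template above filled in):

        using System.Windows;
        using System.Windows.Controls;

        public class MyButton : Button
        {
            #region Tapped Routed Event

            public static readonly RoutedEvent TappedEvent = EventManager.RegisterRoutedEvent(
                "Tapped",
                RoutingStrategy.Bubble,
                typeof( RoutedEventHandler ),
                typeof( MyButton ) );

            public event RoutedEventHandler Tapped
            {
                add { AddHandler( TappedEvent, value ); }
                remove { RemoveHandler( TappedEvent, value ); }
            }

            // Called by the control itself whenever it decides a "tap" has happened.
            protected virtual void OnTapped()
            {
                RaiseEvent( new RoutedEventArgs( TappedEvent, this ) );
            }

            #endregion
        }

    Consumers can then subscribe with myButton.Tapped += handler (or a XAML event attribute) exactly as they would for any built-in routed event, and the event bubbles up the element tree because of the RoutingStrategy.Bubble registration.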

    Read the article

  • Looking for mass cropping software

    - by Bart van Heukelom
    I'm looking for a tool that runs on Ubuntu and lets me:

    1. Open an image in a folder which has thousands of them
    2. Crop and rotate it
    3. Save it as a copy, automatically named (not manually), with one click. Preferably with something in the name that I can later use to filter these cropped copies in Nautilus (unless it saves to another directory, which would be even better).
    4. Move to the next image and repeat

    Does it exist?

    Read the article

  • Better drivers for SiS 650/740 integrated video?

    - by Bart van Heukelom
    I installed Xubuntu 10.10 on an old box today and the graphical performance is horrid. According to lspci, the video card is this:

        01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter (prog-if 00 [VGA controller])
            Subsystem: ASUSTeK Computer Inc. Device 8081
            Flags: 66MHz, medium devsel, IRQ 11
            BIST result: 00
            Memory at f0000000 (32-bit, prefetchable) [size=128M]
            Memory at e7800000 (32-bit, non-prefetchable) [size=128K]
            I/O ports at d800 [size=128]
            Expansion ROM at <unassigned> [disabled]
            Capabilities: <access denied>
            Kernel modules: sisfb

    Is there a way to make it faster? Alternative drivers? The additional drivers tool shows nothing. I'm specifically interested in improving Java's Java2D rendering speed, because I'll be running a "stat screen" written in that language on it.

    Read the article

  • SSL Certificate Works in Monit - But Not in Keystore

    - by Bart Silverstrim
    I have a situation where there's a keystore file with the various root/intermediate certificates stored in it in a way that seems to work for most browsers. The problem is that when mobile browsers hit the site, there's a break in the chain and they complain. I used the SSL checker at http://www.sslshopper.com/ssl-checker.html and it states that "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate." So I'm assuming the desktop browsers already have the intermediate certs and can make the chain connections, while the mobile browsers can't. The thing is that I had used Portecle to export certificates from the keystore and cobble them together into a .PEM certificate to run the Monit utility. When I check that application with the SSL checker, it works fine! The person who originally created the keystore said he couldn't follow the SSL provider's directions because he had created the CSR using openssl, so the cert and private key had to be converted to DER format and imported with importkey to get it to work; according to the directions he found online, importkey seemed to use only a fixed keystore file, and it would erase anything already in that file if it existed. So is there a way to take the certificate I created for Monit and create a working keystore for the Tomcat website? What would be causing the chain to be broken in the current keystore but work for Monit? I have the SSL cert provider's intermediate and cross certificates, and the website's certificate, but what else would I need to create a working chain of certs for a keystore?
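
    For what it's worth, a commonly used route for this kind of situation - offered only as a sketch with placeholder file names, not as the thread's accepted answer - is to bundle the private key, the site certificate, and the full intermediate/cross certificate chain into a PKCS#12 file with openssl, then convert that into a Java keystore with keytool, so the keystore's private key entry carries the whole chain rather than just the leaf certificate:

        # Bundle key + site cert + intermediate/cross certs (placeholder file names)
        openssl pkcs12 -export \
            -inkey site.key \
            -in site.crt \
            -certfile intermediates.pem \
            -name tomcat \
            -out site.p12

        # Convert the PKCS#12 bundle into a Java keystore for Tomcat
        keytool -importkeystore \
            -srckeystore site.p12 -srcstoretype PKCS12 \
            -srcalias tomcat \
            -destkeystore keystore.jks -deststoretype JKS

    Tomcat's HTTPS connector would then point at keystore.jks via its keystoreFile and keystorePass attributes; mobile browsers complain precisely when the served chain is incomplete, so the key question is whether the intermediates end up inside that keystore entry.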

    Read the article

  • ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    - by Bart Read
    I don't know about you but it took me about 5 seconds to get royally fed up of typing the boilerplate code necessary for creating WPF (and Silverlight) dependency properties and, if you want them, their associated property change routed events. Being a ReSharper user, I wondered if there was any live template for doing this. It turns out there's nothing built in, but there are many examples of templates for creating dependency properties out there on the web, such as this excellent one from Roy...(read more)

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us).

    I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.)

    On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously).

    And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

    - Smart
    - Gets things done (thanks for these two Joel)
    - Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)

    *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.

    We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:

    - Excellent coder

    For test engineers, the non-exhaustive meaning of "gets things done" is:

    - Good at finding problems in software
    - Competent coder

    Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with.

    My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well.
    5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.

    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out.

    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this:

    The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.)

    No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN!

    OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

    - How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    - How many of them submit a good assessment? (More/less than at present?)
    - How much overhead is there for us in dealing with these assessments compared to now?
    - What are the success and failure rates at each interview stage compared to now?
    - How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:

    - It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    - Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    - Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    - In general, only the better candidates will complete and submit the assessment.
    - Marking assessments is much less work so we'll be able to deal with any increase that we see (hopefully we will see).

    There are obviously other questions as well:

    - Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism?
    - How do we assess candidates' education and experience? What about their ability to communicate in writing?
    - Do we still want them to submit a CV afterwards if they pass assessment? Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    - How does this affect our relationship with recruitment agencies we might use to hire for these roles?

    So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

    *I'm going to play dirty by offering them beer and chocolate during meetings.

    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

    The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvas for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site: I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"

    Technorati Tags: recruitment,hr,developers,testers,red gate,cv,resume,cover letter,assessment,sea change

    Read the article

  • Why can we recognize game engines?

    - by Bart van Heukelom
    About many games you can say "oh that's the Unreal engine for sure", "this was made by upgrading GTA 4", etc. We can often recognize the engine used for a game just by looking at its graphics (disregarding menus and such). I'm wondering, why is this? All game engines use the same 3D rendering technology that we all use, and the different games usually have a distinct art style, so what's left to recognize?

    Read the article

  • How to move Ubuntu to an SSD

    - by Bart van Heukelom
    My current situation is: one hard disk, dual-booting Ubuntu 11.04 and Windows 7, with these partitions:

    - 100MB Windows System thingy
    - 144GB Main Windows
    - 160GB Ubuntu
    - 4GB Swap
    - 12GB System Restore stuff

    Now I want to install an 80GB SSD and move Ubuntu to it. AFAIK I need to:

    1. Shrink the 160GB Ubuntu partition to 80GB
    2. Copy it over to the SSD
    3. Change fstab to mount the SSD as /

    How do I do the second? And what do I need to do about Grub?
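
    Not from the original thread, but as a minimal sketch of what step 3 usually looks like once the copy is done - assuming the new SSD partition is ext4 and its UUID has been looked up with blkid (the UUID below is a placeholder):

        # /etc/fstab on the copied system: mount the SSD partition as /
        # (replace the placeholder UUID with the value reported by: sudo blkid)
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  errors=remount-ro  0  1

    GRUB then needs to be pointed at the new disk as well, typically by reinstalling it to the SSD's boot record with grub-install and regenerating its configuration with update-grub from the copied system (via a chroot or a live CD).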

    Read the article

  • Getting Wacom Bamboo Pen + Touch pressure sensitivy in GIMP

    - by Bart van Heukelom
    I've installed my Wacom Bamboo Pen + Touch (CTH-460) in Ubuntu (at least on one system, not another) and the pen works well for controlling the mouse cursor. However, I can't get pressure sensitivity to work in GIMP. I have 4 extra devices in the input devices settings screen now, two of which are the pen and eraser. I've set them both from Disabled to Screen, and left the default settings intact. However, after saving I still don't see any pressure options in the brush tool's options. I've also tried setting the mode to Window instead, but it makes no difference regarding pressure sensitivity. There are no other modes. The pressure works out-of-the-box in Blender (grease pen) so it must be something in GIMP. What can be wrong? Why don't the options appear? How can I debug this?

    Read the article

  • WPF: How to get the bounds of a control in an automatic layout container, in the container coordinate space

    - by Bart Read
    Googling this the other day, I started to get the impression that this might be annoyingly tricky. You might wonder why this is necessary at all, given that WPF implements layout for you in most cases (except for containers such as Canvas), but trust me, if you're developing custom elements, at some point you're probably going to need it. If you're adding controls to a Canvas you can always use the attached Left, Right, etc., properties to get the bounds. Fortunately it's really no more difficult...(read more)
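
    The excerpt cuts off before the code, but the usual approach - a sketch from general WPF knowledge, not necessarily the exact code in the full article - is to ask WPF for the transform between the element and its container and push the element's laid-out bounds through it:

        using System.Windows;
        using System.Windows.Media;

        static class BoundsHelper
        {
            // Returns child's bounds expressed in container's coordinate space.
            // Assumes child has already been measured/arranged and container is an ancestor.
            public static Rect GetBoundsIn( FrameworkElement child, Visual container )
            {
                GeneralTransform transform = child.TransformToAncestor( container );
                var localBounds = new Rect( 0, 0, child.ActualWidth, child.ActualHeight );
                return transform.TransformBounds( localBounds );
            }
        }

    TransformToAncestor works for automatic layout containers (Grid, StackPanel, and so on) as well as Canvas, because it reads the transforms WPF itself applied during layout.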

    Read the article

  • Smartassembly 5: it lives! Early Access builds now available

    - by Bart Read
    I'm pleased to announce that, late last week, we put out the first early access build for Smartassembly 5, Red Gate's fantastic code protection and error reporting tool, which we acquired last September. You can download it via: http://www.red-gate.com/messageboard/viewforum.php?f=116 It's obviously pretty early days, so please do not try to use this to protect a production application, but we've already done a lot of work in some key areas: We're simplifying and streamlining the licensing model (you won't see this yet, but a lot of the work on this has already been done). We've improved usability of the product, with a better menu, reordering of project settings, and better defaults. We've also fixed a load of bugs, which I'll let Alex blog about in more detail. On a slightly more trivial level, the curly braces are also no more. Over the coming weeks, we'll be adding more improvements, and starting usability tests. If you're interested in getting involved in the latter, please drop an email to [email protected].

    Read the article

  • Permanently mounting Windows' NTFS partition, fully enabled

    - by Bart van Heukelom
    I'm transforming a Windows 7 PC into a dual-boot system with Ubuntu 10.10. Following other questions on this site, I've mounted my Windows drive by adding this to fstab:

        UUID=blabla /windows ntfs users,defaults,umask=000 0 0

    It mostly works well - I can read and write - but it still seems a bit crippled. When I tried to update an SVN working copy with RabbitVCS, it complained that it couldn't write to a temporary file inside the working copy, even though the permissions are all 0777 inside /windows (by default; I haven't set that manually). It even corrupted that working copy :( It works when I use the command-line SVN client with sudo, but that's hardly user friendly.
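
    One detail worth checking - offered as a hedged sketch rather than as the accepted fix for this particular corruption - is that with only umask=000 the files on an ntfs-3g mount are typically still owned by root, and some tools care about ownership as well as mode bits. Mapping the mount to the desktop user's uid/gid is a common variation:

        # /etc/fstab - present NTFS files as owned by the first desktop user
        # (uid/gid 1000 on a default Ubuntu install)
        UUID=blabla  /windows  ntfs-3g  users,defaults,uid=1000,gid=1000,umask=000  0  0

    This only changes how ownership is presented; whether it cures the RabbitVCS/SVN problem specifically is an open question.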

    Read the article

  • Protect and Improve your Software with SmartAssembly 5

    - by Bart Read
    SmartAssembly 5 has been released. You can download a 14-day fully-functional free trial from: http://www.red-gate.com/products/smartassembly/index.htm This is the first major release since Red Gate acquired the tool last year, and our focus has mainly been on improving the quality of an already great tool. We've also simplified the licensing model so that there are now only three editions: Standard - bullet-proof protection at a bargain price, Pro - includes the SDK & custom web server...(read more)

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property.

        #region $PROPNAME$ Read-Only Property and Property Change Routed Event

        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =
            DependencyProperty.RegisterReadOnly(
                "$PROPNAME$", typeof ( $PROPTYPE$ ), typeof ( $DECLARING_TYPE$ ),
                new PropertyMetadata( $DEF_VALUE$ , On$PROPNAME$Changed ) );

        public static readonly DependencyProperty $PROPNAME$Property =
            $PROPNAME$PropertyKey.DependencyProperty;

        public $PROPTYPE$ $PROPNAME$
        {
            get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }
            private set { SetValue( $PROPNAME$PropertyKey, value ); }
        }

        public static readonly RoutedEvent $PROPNAME$ChangedEvent =
            EventManager.RegisterRoutedEvent(
                "$PROPNAME$Changed",
                RoutingStrategy.$ROUTINGSTRATEGY$,
                typeof( RoutedPropertyChangedEventHandler< $PROPTYPE$ > ),
                typeof( $DECLARING_TYPE$ ) );

        public event RoutedPropertyChangedEventHandler< $PROPTYPE$ > $PROPNAME$Changed
        {
            add { AddHandler( $PROPNAME$ChangedEvent, value ); }
            remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }
        }

        private static void On$PROPNAME$Changed(
            DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;

            var args = new RoutedPropertyChangedEventArgs< $PROPTYPE$ >(
                ( $PROPTYPE$ ) e.OldValue,
                ( $PROPTYPE$ ) e.NewValue );
            args.RoutedEvent = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;

            $DECLARING_TYPE_var$.RaiseEvent( args );$END$
        }

        #endregion

    The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so.

    Technorati Tags: resharper,live template,c#,dependency property,read-only,routed events,property change,boilerplate,wpf
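
    By way of illustration, here's roughly how the template might expand for a hypothetical read-only Progress property on a hypothetical ProgressPanel control (all the concrete names and the default value are invented for this example; the structure is just the template above filled in):

        using System.Windows;
        using System.Windows.Controls;

        public class ProgressPanel : Control
        {
            #region Progress Read-Only Property and Property Change Routed Event

            private static readonly DependencyPropertyKey ProgressPropertyKey =
                DependencyProperty.RegisterReadOnly(
                    "Progress", typeof( double ), typeof( ProgressPanel ),
                    new PropertyMetadata( 0.0, OnProgressChanged ) );

            public static readonly DependencyProperty ProgressProperty =
                ProgressPropertyKey.DependencyProperty;

            public double Progress
            {
                get { return ( double ) GetValue( ProgressProperty ); }
                private set { SetValue( ProgressPropertyKey, value ); }
            }

            public static readonly RoutedEvent ProgressChangedEvent =
                EventManager.RegisterRoutedEvent(
                    "ProgressChanged",
                    RoutingStrategy.Bubble,
                    typeof( RoutedPropertyChangedEventHandler<double> ),
                    typeof( ProgressPanel ) );

            public event RoutedPropertyChangedEventHandler<double> ProgressChanged
            {
                add { AddHandler( ProgressChangedEvent, value ); }
                remove { RemoveHandler( ProgressChangedEvent, value ); }
            }

            private static void OnProgressChanged(
                DependencyObject d, DependencyPropertyChangedEventArgs e )
            {
                // Re-raise the dependency property change as a routed event so ancestors can listen.
                var panel = (ProgressPanel) d;
                var args = new RoutedPropertyChangedEventArgs<double>(
                    ( double ) e.OldValue,
                    ( double ) e.NewValue );
                args.RoutedEvent = ProgressChangedEvent;
                panel.RaiseEvent( args );
            }

            #endregion
        }

    Inside the control, the private setter (Progress = someValue) updates the value and triggers ProgressChanged; consumers of ProgressPanel can only read the property and subscribe to the event.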

    Read the article

  • Reverse proxying only a specific URL

    - by Bart Silverstrim
    I have a web server at www.ourcompany.com running Apache2. Using the proxy modules, I am able to (for example) make 172.16.0.5, an internal IP device, accessible at www.ourcompany.com/device. The trouble is that anyone can play with or explore the device using strings sent to www.ourcompany.com/device/change/settings/here.html. I'd like the reverse proxy to work only for a specific URL, www.ourcompany.com/device/you/must/use/this, while any other request is rejected. Is there a setting that can be used to do this, or is it a simple rewrite condition placed in the virtualhost for the site under sites-enabled? What is the simplest, most maintainable way to sanitize requests to the internal device through the reverse proxy? Running Apache2 on Ubuntu.
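
    One straightforward way to get this effect - sketched here with the hostname and path from the question, not as a vetted answer from the thread - is to proxy only the one path and simply not map the rest of /device, so every other URL is handled (or 404'd) by Apache itself instead of reaching the device:

        # Inside the VirtualHost for www.ourcompany.com (mod_proxy and mod_proxy_http enabled).
        # Remove any blanket "ProxyPass /device ..." mapping and forward only this one path:
        ProxyRequests Off
        ProxyPass        /device/you/must/use/this  http://172.16.0.5/you/must/use/this
        ProxyPassReverse /device/you/must/use/this  http://172.16.0.5/you/must/use/this

    Because ProxyPass forwards only the prefixes you list, nothing else under /device is ever passed to the internal device, which is usually easier to maintain than a rewrite-rule blacklist.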

    Read the article

  • Must double-tap Windows key to open Dash

    - by Bart van Heukelom
    I'm experiencing some strange behaviour of the Unity Dash and the Windows/Super keyboard key. As far as I know, normal behaviour is:

    - Tap: Open Dash
    - Hold: Show keyboard shortcut overlay

    However, the behaviour I'm experiencing is:

    - Tap: Show keyboard shortcut overlay (after a short delay)
    - Double tap: Open Dash
    - Hold: Show keyboard shortcut overlay

    What could cause this, and how do I fix it? I'm on a fresh 12.10 (Quantal) installation.

    Read the article

  • Extension GLX missing...on a desktop PC

    - by Bart van Heukelom
    I just installed Ubuntu 12.10 on a new PC with an Nvidia GTX 560 graphics card, but after installing the Nvidia proprietary drivers (either -current or -current-updates), Unity won't start. When trying to start it manually I get the message "extension GLX missing". I've searched around and found results like this question which point out it's a problem with Nvidia Optimus laptops. However, I don't have this problem on a laptop, but on a desktop PC. lshw output for the graphics card:

        *-display
             description: VGA compatible controller
             product: GF114 [GeForce GTX 560 SE]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:f4000000-f5ffffff memory:e0000000-e7ffffff memory:e8000000-ebffffff ioport:e000(size=128) memory:f6000000-f607ffff

    and CPU:

        *-cpu
             description: CPU
             product: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             vendor: Intel Corp.
             physical id: 40
             bus info: cpu@0
             version: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             slot: SOCKET 0
             size: 1600MHz
             capacity: 3800MHz
             width: 64 bits
             clock: 100MHz
             capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms cpufreq
             configuration: cores=4 enabledcores=4 threads=4

    Read the article

  • Expiring timed actions a good idea?

    - by Bart van Heukelom
    We have an online game where players sometimes have to wait a while (say 30 minutes) before a process they initiated completes. This encourages them to come back later. An example of this is growing crops in Farmville, or basically any action in the Sims Play4Free. Now, however, there's an idea to let these processes expire, so if the player doesn't 'reap' them in time (e.g. within 4 hours) they are aborted. I'm a bit sceptical about this. How will this make players come back more often? Isn't the reward of reaping the process enough for that? Can we expect players to fit their daily schedule around our game, maybe even set the alarm clock at night? Won't this just cause players to give up on starting these processes in the first place? I realise this may be too subjective for this site, so I'll end with a concrete question: do (m)any other online free-to-play games employ this technique?

    Read the article

  • System resets to login screen spontaneously

    - by Bart van Heukelom
    Now I'm experiencing spontaneous resets of the desktop environment (when logged in and working normally), where the screen will go black, then reset to the login screen. The resets appear to be random, though there is a chance it occurs more when using Nautilus. The screen goes black instantly, e.g. there is nothing like a disappearing launcher or window borders. Where can I begin finding the cause? (Note: I've recently upgraded from 11.04 to 11.10, then 12.04.)

    Read the article

  • Atheros AR5BWB222 Wireless Intermittent Connectivity

    - by Bart
    So I recently installed Ubuntu on my Acer Aspire V3-551 laptop. I have an Atheros AR5BWB222 wireless adapter. Everything works fine except for the wireless. I can sometimes connect, but most of the time it keeps attempting to connect and never succeeds. Or it will connect, but only stay connected for about 10 seconds before getting disconnected again. All the other drivers updated through System Settings > Additional Drivers are fine, even the Ethernet; it's just the wireless that's the problem. I've tried a power management setting and looked for additional updates, but nothing fixes my problem. Is there any solution for my particular wireless card?

    Read the article

  • Easy road from DisplayObject to Molehill?

    - by Bart van Heukelom
    I have a finished Flash game which is rendered using the built-in display tree, i.e. Bitmaps contained in Sprites (plus some text here and there, a few vector graphics, and one bitmap-filled shape). For extra performance, I'd like it to use Molehill for rendering, but that's not possible out of the box. What's the easiest way to make this game use Molehill when available, but fall back to the current method if it's not available?

    Read the article

  • Rendering only a part of the screen in high detail

    - by Bart van Heukelom
    If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes, you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today?

    Read the article
