Search Results

Search found 12144 results on 486 pages for 'old ixfoxleigh'.

  • Five Holiday Gaming Tips for an Active Game Table

    - by Jason Fitzpatrick
    Getting together for the holidays represents a great opportunity to introduce new players to the fun of tabletop gaming. Make sure to introduce them right with these five handy tips. Courtesy of GeekDad, we find five tips for introducing new players to the fun of tabletop games old and new over the holidays. Tip number one: Start short. Not everyone is ready for a multi-hour game session right after a big holiday dinner. Post-prandial drowsiness doesn’t go well with a game that takes twenty minutes to set up and another fifteen to explain, so don’t lose your audience before you get to the good stuff. Pick something speedy that gets people into the game with little downtime. If possible, get them laughing; I hear it causes the release of endorphins, which makes them feel better, which will lead to more gaming. (We’ll work on the dopamine receptors later, when you get them hooked on learning new games.) Games like Zombie Dice and Spot It! are easy to teach and can handle a pile of players. FlowerFall and Ca$h ‘n’ Gun$ are guaranteed to make people gravitate to the game table to see what’s going on.

    Read the article

  • Hardware settings reviewing

    - by dino99
    Some hardware-related errors are being logged in dmesg:

    oem@dub:~$ dmesg | grep ata10
    [ 1.007989] ata10: PATA max UDMA/133 cmd 0xa800 ctl 0xa480 bmdma 0xa408 irq 18
    [ 1.691664] ata10: prereset failed (errno=-19)
    [ 1.691670] ata10: reset failed, giving up
    oem@dub:~$ dmesg | grep ata2
    [ 0.990290] ata2: SATA max UDMA/133 abar m1024@0xfebfb800 port 0xfebfb980 irq 45
    [ 1.688011] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 1.688055] ata2.00: unsupported device, disabling
    [ 1.688057] ata2.00: disabled

    As I understand it, this is related to my old Seagate SATA HDD and the PATA CDROM. These errors are quite new, so I suspect their settings (DMA, write-cache, ...) have been modified by some upgrades. I've already used hdparm to set write-cache off on the HDDs, but it seems I need to review some other setting(s) too. With older distros it was easy to find out about the hardware settings, but on Quantal/Precise they are deeply hidden from the average user. So I would like to know how to view/modify these settings. About the CDROM reader, the problem is different: the system doesn't identify it with a UUID, only with ATAPI or by-id:

    oem@dub:~$ dmesg | grep 'ATAPI'
    [ 1.308611] ata3.00: ATAPI: TSSTcorp CDDVDW SH-S203D, SB00, max UDMA/100
    oem@dub:~$ ls -l /dev/disk/by-id/
    ......
    lrwxrwxrwx 1 root root 9 oct. 1 06:42 ata-TSSTcorp_CDDVDW_SH-S203D -> ../../sr0
    .....
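
    A hedged starting point for reviewing those settings: hdparm can read back what the drive is actually configured to do. A minimal sketch, assuming the disk is /dev/sda (adjust the device node to match your system):

    # Show the short settings summary (readonly, write-caching, ...)
    sudo hdparm /dev/sda
    # Query the detailed feature flags, filtered to caching and DMA
    sudo hdparm -I /dev/sda | grep -i -E 'write cache|dma'
    # Example: re-enable write caching (-W0 disables it again)
    sudo hdparm -W1 /dev/sda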

    Read the article

  • Missing Wireless access point

    - by nvcnvn
    I have:
    - Ubuntu 12.04 with the proprietary Broadcom STA wireless driver installed
    - a BCM43224 802.11a/b/g/n wireless card
    - a TP-LINK TD-W8101G 54Mbps Wireless ADSL2+ Modem Router
    Yesterday before I went to sleep I could still connect to my wireless access point; this morning when I started my laptop I don't see it in the list - some neighbors' networks are listed, but not mine. The WLAN light is green, and with my old Nokia E72 I can still see my access point at 100% signal. I have tried restarting my laptop and turning the wireless off/on with the hardware switch, but no help. I have read the WirelessTroubleShootingGuide but I couldn't get anywhere. Here is some of my card info:

    *-network
       description: Wireless interface
       product: BCM43224 802.11a/b/g/n
       vendor: Broadcom Corporation
       physical id: 0
       bus info: pci@0000:04:00.0
       logical name: eth1
       version: 01
       serial: c0:cb:38:06:5d:53
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11abgn
       resources: irq:17 memory:f0500000-f0503fff
    *-network
       description: Ethernet interface
       product: RTL8111/8168B PCI Express Gigabit Ethernet controller
       vendor: Realtek Semiconductor Co., Ltd.
       physical id: 0
       bus info: pci@0000:09:00.0
       logical name: eth0
       version: 03
       serial: f0:4d:a2:42:ab:e4
       size: 100Mbit/s
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.1.3 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
       resources: irq:42 ioport:5000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff memory:f0420000-f043ffff
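
    A hedged first check, assuming the wireless interface really is eth1 as lshw reports: confirm the radio isn't blocked, force a rescan, and reload the STA module.

    # Make sure nothing is soft- or hard-blocked
    rfkill list
    # Rescan and look for your SSID in the results
    sudo iwlist eth1 scan | grep -i essid
    # Reload the proprietary Broadcom STA driver
    sudo modprobe -r wl && sudo modprobe wl

    If the scan stays empty while the phone still sees the router, one common culprit is the router sitting on channel 12 or 13, which some regulatory-domain settings filter out; pinning the router to a channel between 1 and 11 is worth a try.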

    Read the article

  • intel atom getting too hot

    - by user59565
    I own an Asus 1215N which is getting very hot: Intel Atom D525 dual core, ION 2 (GeForce 220 + GMA 3150), 4 GB RAM, Ubuntu 12.04. It hits 86 C at idle. Sometimes (under load, or after being turned on for an hour) it shuts down due to heat; in Windows 7 it runs idle at 49 C. I tried an ACPI call to shut down the nVidia chip, which is cooled together with the Atom chip. That didn't solve the problem. To check whether it really turned off, I measured how much power the laptop consumed; it only went from 1400 mW to 860 mW, with no change in heat. I also tried reapplying the standard thermal paste; the old paste made it run at 97 C (it couldn't even get through a usable install of Ubuntu). This really annoys me, as Ubuntu is my OS of choice. Should I try compiling the kernel? Is it true that compiling for a P4 is the better choice for the Atom when building the kernel for this processor architecture? I have now tried compiling the kernel for the Atom. The temperature is now 83 C (I think the drop has more to do with ambient temperature than with the customized kernel). Help appreciated.
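
    Before blaming the kernel, a hedged way to get trustworthy readings, assuming lm-sensors is in the repositories:

    # Install and configure the sensor framework
    sudo apt-get install lm-sensors
    sudo sensors-detect        # the safe default answers are fine
    # Read per-core temperatures
    sensors
    # Watch whether the CPU clocks down when it gets hot
    watch -n1 "grep MHz /proc/cpuinfo"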

    Read the article

  • From physics to Java programmer?

    - by inovaovao
    I'm a physics PhD with little actual programming experience. I've always liked programming and played around with Basic and Pascal (also VB and Delphi) as a teen, but the largest actual project I completed was an assignment for the introductory computer science class in university, where I wrote a nice little program (about 1500 lines of Pascal) to display functions of 2 variables in 3D. I've also had a couple of other projects in the few-hundred-lines range, but during my PhD I didn't have (or take) the time to program more (string theory is hard, guys!), besides playing around with Ruby. Now I've decided that I'm more interested in programming than in physics, and I have started to learn Java (hoping to pass the certification exam next week) and OO design. Still, I have trouble deciding what to focus on next (Java EE? Web development? Algorithms and C programming?) in order to maximize my employment chances. Bear in mind that I'm aiming (mostly) at the Swedish job market and that I'm 30 years old. So, the questions: Do I have any chance to start and make a career in IT and programming coming from physics? What would be the best strategy to maximize my value in the field? Do you have suggestions as to where my physics background might be useful?

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post, or from Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn: data warehouse architecture and justification; Agile methodology; dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions; best practices, naming conventions, and lessons learned; loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load); data warehouses and Big Data (Hadoop); unit testing; and tracking historical changes and managing large sizes. With all the Self-Service BI hype, the data warehouse is becoming more and more central every day: if everyone is able to analyze data using self-service tools, they had better be relying on correct, uniform, and coherent data. Fifty people have already registered for the workshop and seats are limited, so don't miss this opportunity to attend a workshop that is a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Recently Google made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). Here is what we're getting from the Crawl errors. What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the 1st and 2 for the 2nd). Based on the Crawl stats we can see that the number of requests made by Google bots doesn't increase. Therefore I believe the number of errors reported is cumulative, and that an error detected on 1 day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or until you manually mark the error as fixed), because if you don't make more requests to a website, there is no way you can check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?
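
    As for the underlying cause, the missing redirects, a minimal sketch of the kind of 301 rule that would make those errors drop off over time, assuming an Apache host and hypothetical old/new paths:

    # .htaccess: permanently redirect a moved page so crawlers
    # (and the error report) eventually drop the old URL
    RewriteEngine On
    RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]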

    Read the article

  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused. The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features degrade gracefully and thereby remain accessible? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation? For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site seemed to generally work okay (I wasn't able to vote up any questions). I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. OTOH, I wonder how many users have been alienated by this approach. We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.

    Read the article

  • links for 2010-06-16

    - by Bob Rhubart
    Automating Enterprise Reporting with SOA and Oracle Business Intelligence Publisher: In the latest article in the Enterprise Solution Cookbook series, authors John Chung and Harish Gaur take you step-by-step through the development of an automated reporting platform using Oracle's SOA Suite, WebCenter, and Business Intelligence Publisher. (tags: soa enterprise2.0 architect entarch bpm oracle otn)
    @ORACLENERD: Job: Infrastructure Technical Architect: Oracle ACE Chet "ORACLENERD" Justice shares the 411 on a great new gig for the right architect. (tags: jobs employment infrastructure architect oracleace)
    Andrew Ness: Building a training environment for RAC, ASM and Dataguard on OEL 5.4: "In all the environments I've worked in where Oracle DBAs are involved," says Ness, "they would have chewed my arm off to have this level of control over where their data lives." (tags: oracle grid database dba)
    Chris Quenelle: Virtualization terms: UNIXy Goodness blogger Chris Quenelle dives into Wikipedia to compile this short but valuable glossary of virtualization terms. (tags: solaris hypervisor virtualization)
    William Vambenepe: CMDB in the Cloud: not your father's CMDB: "Most [customers] will be dealing with a mix of old-style and Cloud applications and they’ll be looking for a unified management approach. This helps CMDB incumbents. If you doubt the power of continuity, take a minute to realize that the entire value proposition of hypervisor-style virtualization is centered around it." -- William Vambenepe (tags: oracle otn cloud virtualization)
    Merv Adrian: Oracle Exadata: a Data Management Tipping Point: "In this second version of its newest platform, Oracle not only provides the latest technology in each part of the data-management architecture, but also integrates them under the full control of one vendor, with a unified approach to leveraging the full stack." -- Merv Adrian (tags: oracle exadata database)

    Read the article

  • Certifications in the new Certify

    - by richard.miller
    The most up-to-date certifications are now available in Certify - New Additions Feb 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know. NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help! Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications. Certifications viewable in the new Certify Search: Oracle Database; Oracle Database Options; Oracle Database Clients (they apply to both 32-bit and 64-bit); Oracle Enterprise Manager; Oracle Beehive; Oracle Collaboration Suite; Oracle E-Business Suite, now with Release 11i & 12; Oracle Siebel Customer Relationship Management (CRM); Oracle Governance, Risk, and Compliance Management; Oracle Financial Services; Oracle Healthcare; Oracle Life Sciences; Oracle Enterprise Taxation Management; Oracle Retail; Oracle Utilities; Oracle Cross Applications; Oracle Primavera; Oracle Agile; Oracle Transportation Management (G-L); Oracle Value Chain Planning; Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+; Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2). Certifications viewable in Classic Certify: Classic Certify is the "old" user interface. Clicking the "Classic Certify" link from Certifications QuickLinks will take you there. Enterprise PeopleTools Release 8.49, Release 8.50, and Release 8.51. Other Resources: See the Tips and Tricks for the new Certify. Watch the 4-minute introduction to the new Certify, or learn how to get the most out of Certify with the advanced searching and features demo.

    Read the article

  • How to check Early Z efficiency on AMD GPU with Windows 7

    - by Suma
    I have a game using DirectX 9, and a development station using Win 7 x64. I still have access to another station with Vista x64, dual booted with WinXP x86. I wanted to check early Z efficiency in the game, and to my sadness all the tools I have tried seem unable to perform this task:
    AMD GPUPerfStudio:
    - AMD GPUPerfStudio 2 does not support DirectX 9 at all.
    - AMD GPUPerfStudio 1.2 does not install correctly on Windows 7. When I tweaked the MSI package (a simple OS version check adjustment was needed), it complained that the drivers I have do not provide the needed instrumentation. Drivers old enough to support GPUPerfStudio would most likely not work with my Radeon 5750 card (though I am not 100% sure about this; I did not attempt any older drivers, not knowing which to look for).
    PIX:
    - PIX does not seem to contain any counters like this. It offers some ATI-specific counters, but when I try to activate them, PIX reports "PIX encountered a problem while attaching to the target program."
    I do not want to upgrade to DX 10/11 just to be able to profile the game, but it seems that without that step I am somewhat locked into a toolset which is no longer supported. I see only one obvious option which would probably work: using the WinXP (or, with a little bit of luck, Vista) station, perhaps with an older AMD card, to make sure GPUPerfStudio 1.2 works. Other than that, can anyone recommend other options for checking GPU HW counters (HiZ/EarlyZ in particular, but others enabled as well would be a nice bonus) for a DirectX 9 game on Windows 7, preferably on an AMD GPU? (If that is not possible, I would definitely prefer switching GPU over switching the OS, but before I do so I would like to know whether I will hit the same problem with nVidia again.)

    Read the article

  • Gmail Doesn't Like My Cron

    - by Robery Stackhouse
    You might wonder what *nix administration has to do with maker culture. Plenty, if *nix is your automation platform of choice. I am using Ubuntu, Exim, and Ruby as the supporting cast of characters for my reminder service. Being able to send yourself an email with some data or a link at a pre-selected time is a pretty handy thing to be able to do indeed. Works great for jogging my less-than-great memory.

    http://www.linuxsa.org.au/tips/time.html
    http://forum.slicehost.com/comments.php?DiscussionID=402&page=1
    http://articles.slicehost.com/2007/10/24/creating-a-reverse-dns-record
    http://forum.slicehost.com/comments.php?DiscussionID=1900
    http://www.cyberciti.biz/faq/how-to-test-or-check-reverse-dns/

    After going on a huge round-the-world wild-goose chase, I was finally told by someone on the exim-users list that my IP was in a range blocked by the Spamhaus PBL. Google said I needed an SPF record:

    http://articles.slicehost.com/2008/8/8/email-setting-a-sender-policy-framework-spf-record
    http://old.openspf.org/wizard.html
    http://www.kitterman.com/spf/validate.html

    The version of Exim that I could get from the Ubuntu package manager didn't support DKIM, so I uninstalled Exim and installed Postfix.

    https://help.ubuntu.com/community/UbuntuTime
    http://www.sendmail.org/dkim/checker
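
    For reference, a minimal sketch of the kind of SPF record those links walk you through, with example.com and the address standing in as hypothetical values:

    ; DNS zone entry: declare which hosts may send mail for the domain
    example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 a mx -all"

    # Check what resolvers actually see
    dig +short TXT example.com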

    Read the article

  • Full Circle

    - by capgpilk
    Things have been a little bit hectic these past 6 months, hence the lack of posts. My excuse is a good one though: my wife gave birth to our first son, Tom, back in September, and it has been one hell of a rollercoaster ride since then. Things have settled back down now, thank heavens. My last development gig didn't quite work out, so I have now taken the plunge and started contracting. It turns out my first contract is with the NHS trust where I started my development career, which seems a bit weird as that was 10 years ago. A lot has changed in the techniques and tools the NHS now uses to develop with; there is a lot more .NET, with a slant towards the web side of the spectrum (at least in this NHS trust). They are really getting to grips with the MVC platform, so you will hopefully see some MVC posts coming up. The really surprising thing is that the intranet I developed back in 2001 (classic ASP migrated to .NET 1.0) is still up and running and will finally be phased out in the coming weeks (to SharePoint). It is like seeing an old friend all grown up.

    Read the article

  • Crafty.js multiplayer platform game, keeping players in sync

    - by johnwards
    I'm using Crafty.js to create a very simple platform game. It doesn't need to prevent cheating - it's really just about seeing other players move around - and it doesn't need collision detection between players. They are "shadows". How I've gone about it so far is to use http://pubnub.com to send messages between clients. These messages are simple: the first is a new-player arrival, the second is a key down, and the third is a key up. The code is here: https://github.com/whiteoctober/craftyconcept However, I've hit the old chestnut of keeping everything in sync. At the moment I'm letting each of the clients decide where to place the other players based on the received key events, and I also only move "you" once I get a key event back from PubNub. My thinking here is to try and keep things in sync! However it isn't perfect (http://www.whiteoctober.co.uk/john/gametest/); things can get out of sync very easily, key presses arrive in the wrong order, etc. Are there any simple solutions to this? I would like to keep it all client side (with PubNub) and not have a central server with positions etc. if possible.

    Read the article

  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it based on real studies? For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really do much to improve the code. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall at some point if the legacy software didn't get upgraded sometime in the future. To me, that kind of risk warrants research and analysis to ensure a successful path. My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies. Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.

    Read the article

  • Chrome Time Track Is a Simple Task Time Tracker

    - by ETC
    If you don’t need advanced project tracking but still want to track how long it takes you to finish certain tasks or projects, Chrome Time Track is a free Chrome extension with a dead simple interface. Install the extension and a Time Track icon appears in your toolbar. Click it to create new tasks, delete old tasks, and start, stop, and reset your task timers. When the timer is running the icon changes to indicate you’re on the clock; pause the timer and it toggles back to the default icon. Hit up the link below to grab a copy. Chrome Time Track is free and works wherever Google Chrome does. Chrome Time Track [Google Chrome Extensions]

    Read the article

  • SQL SERVER 2008 – 2011 – Declare and Assign Variable in Single Statement

    - by pinaldave
    Many of us tend to overlook simple things even when we are capable of doing complex work. In SQL Server 2008, inline variable assignment is available. This feature has existed for the last 3 years, but I hardly see it used. One of the common arguments is that as a project is migrated from an earlier version, the old style comes along with it. I totally accept and acknowledge this argument. However, my point is that this new feature should be used in all new coding - what is your opinion? The code we used in SQL Server 2005 and earlier versions is as follows:

    DECLARE @iVariable INT, @vVariable VARCHAR(100), @dDateTime DATETIME
    SET @iVariable = 1
    SET @vVariable = 'myvar'
    SET @dDateTime = GETDATE()
    SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
    GO

    The same should be re-written as follows:

    DECLARE @iVariable INT = 1, @vVariable VARCHAR(100) = 'myvar', @dDateTime DATETIME = GETDATE()
    SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
    GO

    I have started to use this new method to assign variables, as I personally find it very easy to read as well as write. Do you still use the earlier method to declare and assign variables? If yes, is there any particular reason, or is it just an old routine? I am interested to hear about this. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Code maintenance: keeping a bad pattern when extending new code for being consistent or not ?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision added after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other devs' feedback. I don't want to refactor because I don't have time. And even with time, it would be hard to justify that a whole, perfectly working module needs refactoring; the refactoring cost would be heavier than its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring code in your place? The downside of the latter is that you'll have to learn some stuff, and you may have to maintain the easy stupid boring code too, until a full refactoring is done.

    Read the article

  • ASUS EPC 1015CX won't boot with display after installing additional drivers

    - by Dinesh
    I have an ASUS EPC 1015CX with Ubuntu 12.04 LTS installed using Wubi alongside Windows 7. It worked perfectly after installing, except that the supported screen resolution was only 800*600 instead of 1024*600. The specifications of the 1015CX can be found here. I tried installing additional drivers, which were Broadcom for wireless and cedarview-drm. After this, there was no display coming up after boot. I could hear a voice, and the letters I typed were pronounced correctly, so I guessed the issue had to be with the graphics. I tried "Repair broken packages" in the recovery menu, booted, and the display was then at 1024*600. But if I restart normally, the problem continues as before. As per what I found while searching, I opened Additional Drivers again and installed the cedarview-graphics-driver. I tried restarting, hoping it would work, but again the result was the same: no display, but the system was working. I did more searching and followed the instructions found here, but still the same result. I am keen to switch to Linux from Windows after hearing and learning great things about it. I know this community is good and always helpful, and I want to thank you all for the help in advance. Please let me know what I am doing wrong and help me switch to Linux from the same old Windows.
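
    One hedged recovery path, assuming you can reach a root shell from the same recovery menu where "Repair broken packages" lives: remove the Cedarview packages again, since they are the last change before each broken boot. The package names below are the ones Additional Drivers typically installs on 12.04; verify them with dpkg first.

    # Confirm the exact package names on your system
    dpkg -l | grep -i cedarview
    # Remove them and fall back to the stock graphics stack
    sudo apt-get purge cedarview-drm cedarview-graphics-drivers
    sudo reboot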

    Read the article

  • What is the best approach for inline code comments?

    - by d1egoaz
    We are doing some refactoring of a 20-year-old legacy codebase, and I'm having a discussion with my colleague about the comment format in the code (PL/SQL, Java). There is no default format for comments, but in most cases people do something like this in the comment: // date (year, year-month, yyyy-mm-dd, dd/mm/yyyy), (author id, author name, author nickname) and comment. The format I propose for future and past comments is: // {yyyy-mm-dd}, unique_author_company_id, comment. My colleague says that we only need the comment, and must reformat all past and future comments to this format: // comment. My arguments: For maintenance reasons, it's important to know when and who made a change (even though this information is in the SCM). The code is living, and for that reason has a history. Without the change dates it's impossible to know when a change was introduced without opening the SCM tool and searching through the long object history. The author is very important: a change by the original author is more credible than one by someone unfamiliar with the code. Agility reasons: no need to open and navigate through the SCM tool. People would be more afraid to change something that someone did 15 years ago than something that was recently created or changed. Etc. My colleague's arguments: The history is in the SCM. Developers shouldn't need to see the history of the code directly in the code. Packages get 15k lines long, and unstructured comments make these packages harder to understand. What do you think is the best approach? Or do you have a better approach to solve this problem?

    Read the article

  • Computer Bugs - Etymology and Entomology

    - by PointsToShare
    Whatever bugs you. My wife and I used to take some of our summer vacation in a cabin on the shore of Lake Atsion in NJ. It is a delightful place in the Wharton forest, with brown yet fresh water, where we would canoe, swim, and enjoy true rest. Alas, in the last few years yellow flies also discovered the area's pastoral delights and came in hordes to bug us. So much so that we had to give up. As a computer programmer, I abhor bugs. The bugs that bug me (except the pesky yellow flies) are program bugs, a specific variety of computer bugs. You can find an excellent take on the etymology of the word "bug" in this delightful monograph: http://www.jamesshuggins.com/h/tek1/first_computer_bug.htm In my youth, I worked on Burroughs computers. Unlike their IBM brethren, the Burroughs used a 96-column card. The cards were much smaller than the 80-column IBM cards. We wrote our programs on coding sheets and then a key-punch operator transcribed them into punched cards. These were fed into a card reader and compiled. The compiler would notify us of compiler errors, or bugs, but it was not always easy to get the meaning of the message. My friend Mark Wildt, also a Burroughs veteran, gave me an old punched card from one of his programs. Obviously a bug!! Here It Is!! That's All Folks!

    Read the article

  • Cannot install openjdk on Hardy Heron

    - by infaustus
    I know that Hardy Heron is very old, but don't ask why Hardy... I've tried:

    root@vz10931:/etc/apt# apt-get install openjdk-6-jre
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run `apt-get -f install' to correct these:
    The following packages have unmet dependencies:
      openjdk-6-jre: Depends: libasound2 (> 1.0.14) but it is not going to be installed
                     Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                     Depends: libxtst6 but it is not going to be installed
                     Depends: openjdk-6-jre-headless (>= 6b18-1.8.3-0ubuntu1~8.04.2) but it is not going to be installed
      vim: Depends: vim-common (= 1:7.1-138+1ubuntu3.1) but 2:7.3.154+hg~74503f6ee649-2ubuntu3 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    My sources.list:

    deb http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
    deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
    deb http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
    deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
    deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

    And:

    root@vz10931:/etc/apt# ls -l sources.list.d/
    total 0

    Please help. When I tried apt-get install -f before, I had to install a new system because everything crashed. Edit: I checked that I have Java installed:

    root@vz10931:/var/www/mailer# dpkg --list | grep java
    iU  sun-java6-bin  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar
    iU  sun-java6-jdk  6.24-1build0.8.04.1  Sun Java(TM) Development Kit (JDK) 6
    iU  sun-java6-jre  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar

    But when I try to start a Java program with java -jar program.jar, an error appears: -bash: java: command not found
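
    A hedged untangling, assuming a root shell on the box: the "iU" state means the sun-java6 packages were unpacked but never configured, and the vim-common version from a much newer release hints that packages from a non-Hardy source were mixed in at some point. The usual sequence, as a sketch:

    # Finish configuring the half-installed packages
    dpkg --configure -a
    # Let apt try to resolve the broken dependency chain
    apt-get -f install
    # If java is installed but not on PATH, point the alternatives system at it
    update-alternatives --config java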

    Read the article

  • Files backup utility with incremental backups that would keep backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:
    - full backup: not an option to use frequently
    - incremental backup: seems right, but there's one thing about it
    An incremental backup builds on the base of a full backup, backing up only those files that were created or changed. The thing is that after some time you've got a lot of unwanted files from the old backups bloating your backup device. Also, if you accidentally deleted your full (first) backup file, the differential backups would be corrupted (you wouldn't be able to restore them). The thing I'm looking for is a program that would back up files simply by copying them. It would check whether the backup device contains the file (unchanged):
    - if yes, it would proceed to the next file (we've got the current version backed up)
    - if no, it would copy the file to the backup device
    - if the device contains a file that is no longer on our disk, it would delete it from the backup device
    Is there any such utility that works this way? If not, do you have any hints on how to back up fairly big amounts of data (around 20 GB) quite frequently, with incremental backups, without being exposed to those unwanted effects of the backup size puffing up?
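
    What's described is exactly a mirror-style sync, and rsync behaves this way out of the box. A minimal sketch, with the source and destination paths as hypothetical values:

    # Copy new and changed files, skip unchanged ones, and delete
    # files on the backup that no longer exist on the source
    rsync -av --delete /home/user/data/ /media/backup/data/

    Note that trailing slashes matter to rsync: /data/ syncs the directory's contents, while /data would create a data directory inside the destination.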

    Read the article

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "Micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to introduce at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL. Perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you don't have to write SQL; it can be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

    var connection = new SqlConnection(); // setup here...
    var person = connection.Query<Person>("select * from people where PersonId = @personId", new { PersonId = 42 });

    How is that any different from using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters, and, I suppose, map the entity back using a builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here, where a Micro ORM makes sense to use? I'm not really seeing how it saves anything over the "old" way of using ADO.NET except maybe a few lines of code - you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).

    Read the article

  • Ubuntu 14.04, only wallpaper and a mouse cursor is showing

    - by Abhishek Verma
    I upgraded to Ubuntu 14.04 weeks ago; that was a fresh install. As usual I installed Unity Tweak Tool, some themes, Chrome, Eclipse, etc. Today, while playing with Unity Tweak Tool, I mistakenly switched Window Spread to off - and no problem yet. After realizing what I had done, I switched it back on, and then the title bars of all windows, the status bar (guessing it's called that), and the launcher disappeared. I was left with a mouse cursor and the Unity Tweak Tool window, without its title bar. I thought a restart was needed, but sadly there was no shutdown option to click on; moreover, the keyboard didn't work, and neither did the hardware shutdown button. So I pressed the reset button. After a mundane restart, I was welcomed by the wallpaper and the mouse cursor (which moves), and that's it. I searched everywhere for a working solution and found some on Ask Ubuntu itself, but they were all old and required the terminal to be used - and since my keyboard isn't working, I have no access to the terminal. So, of course, the question isn't a duplicate, at least IMHO. Also, I was working on an Android project and have no backups, so I won't be going for a clean install, or else I will start crying watching weeks of work being deleted. Any solutions, please?
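
    If a virtual console can be reached (Ctrl+Alt+F1 sometimes still works when the desktop session ignores the keyboard, since it is handled outside the broken session), a commonly suggested recovery is to reset the Compiz/Unity settings that Unity Tweak Tool changed. A hedged sketch:

    # Reset every Compiz setting (this is what Unity Tweak Tool modifies)
    dconf reset -f /org/compiz/
    # Restart Unity detached from this console
    setsid unity
    # Then switch back to the graphical session (usually Ctrl+Alt+F7)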

    Read the article
