Search Results

Search found 5304 results on 213 pages for 'money saving'.

  • How to Speed Up Any Android Phone By Disabling Animations

    - by Chris Hoffman
    Android phones — and tablets, too — display animations when moving between apps and screens. These animations look very slick, but they waste time — especially on fast phones, which could switch between apps instantly if not for the animations. Disabling these animations will speed up navigating between different apps and interface screens on your phone, saving you time. You can also speed up the animations if you’d rather see them. Access the Developer Options Menu First, we’ll need to access the Developer Options menu. It’s hidden by default so Android users won’t stumble across it unless they’re actually looking for it. To access the Developer Options menu, open the Settings screen, scroll down to the bottom of the list, and tap the About phone or About tablet option. Scroll down to the Build number field and tap it repeatedly. Eventually, you’ll see a message appear saying “You are now a developer!”. The Developer options submenu now appears on the Settings screen. You’ll find it near the bottom of the list, just above the About phone or About tablet option. Disable Interface Animations Open the Developer Options screen and slide the switch at the top of the screen to On. This allows you to change the hidden options on this screen. If you ever want to re-enable the animations and revert your changes, all you have to do is slide the Developer Options switch back to Off. Scroll down to the Drawing section. You’ll find the three options we want here — Window animation scale, Transition animation scale, and Animator duration scale. Tap each option and set it to Animation off to disable the associated animations. If you’d like to speed up the animations without disabling them entirely, select the Animation .5x option instead. If you’re feeling really crazy, you can even select longer animation durations. You can make the animations take as much as ten times longer with the Animation 10x setting. The Animator duration scale option applies to the transition animation that appears when you tap the app drawer button on your home screen.  Your change here won’t take effect immediately — you’ll have to restart Android’s launcher after changing the Animator duration scale setting. To restart Android’s launcher, open the Settings screen, tap Apps, swipe over to the All category, scroll down, and tap the Launcher app. Tap the Force stop button to forcibly close the launcher, then tap your device’s home button to re-launch the launcher. Your app drawer will now open immediately, too. Now whenever you open an app or transition to a new screen, it will pop up as quickly as possible — no waiting for animations and wasting processing power rendering them. How much of a speed improvement you’ll see here depends on your Android device and how fast it is. On our Nexus 4, this change makes many apps appear and become usable instantly if they’re running in the background. If you have a slower device, you may have to wait a moment for apps to be usable. That’s one of the big reasons why Android and other operating systems use animations. Animations help paper over delays that can occur while the operating system loads the app.     
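    For readers comfortable with a command line, the same three settings can also be changed over adb instead of tapping through Developer Options; they live in Android's global settings table. The short sketch below is illustrative only: it assumes adb is installed, USB debugging is enabled, and an Android version recent enough to expose the settings shell command (the helper name is made up for the example).

        import subprocess

        # Global settings keys behind the three Developer Options entries.
        ANIMATION_KEYS = [
            "window_animation_scale",      # Window animation scale
            "transition_animation_scale",  # Transition animation scale
            "animator_duration_scale",     # Animator duration scale
        ]

        def set_animation_scale(scale):
            """Set all three scales: 0 turns animations off, 0.5 halves them, 1 is stock."""
            for key in ANIMATION_KEYS:
                subprocess.run(
                    ["adb", "shell", "settings", "put", "global", key, str(scale)],
                    check=True,
                )

        if __name__ == "__main__":
            set_animation_scale(0)  # same effect as picking "Animation off" in the menu

    Setting the scale back to 1 restores the stock behaviour, much like flipping the Developer options switch back to Off.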

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – hence the 'When One Size Does Not Fit All' title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it's your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found in the Oracle Maximum Availability Architecture webcast and the Maximize Availability with Oracle Database 12c technical white paper.

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted around the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success shows that 72% of users hadn’t used PFM and worse, 58% of them were not kicked about using it. Of the rest who had used it, only half did on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expense, it is simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and Cash Flow is important but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget over runs and spend analysis are post facto and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight based decisions. So what can be done to resuscitate PFM? Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to Spendable Balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions. Personal Financial Guidance - Banks need to think ground level and see if their PFM offering is really helping customers achieve self actualisation. Banks need to recognise that most customers out there are non-proficient about making the best value of their money. Customers return when they know that they are being guided rather than being just informed on their finance. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending thereby encouraging sound savings habits. Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM on to devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device in pushing real time notifications to make pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals real time and push them pre-event on to the device. So next time, I should know that I have over run my eating out budget before walking into that burger joint and not after. Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. 
Integrating social media into PFM engagement will let customers share and solicit their financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits with those of the peer group and increase stickiness. While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practices lie in profiling and segmenting customers, being where they are and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • The Latest Major Release of AutoVue is Now Available!

    - by Pam Petropoulos
    Click here to read the full press release. To learn more about AutoVue 20.2, check out the What's New in AutoVue 20.2 Datasheet. AutoVue 20.2 continues to set the standard for enterprise level visualization with Augmented Business Visualization, a new paradigm which reconciles information and business data from multiple sources into a single view, providing rich and actionable visual decision-making environments. The release also includes: capabilities that enhance end-to-end approval workflow; solutions to visually enable the mobile workforce; and support for the latest manufacturing and high tech formats. New capabilities in release 20.2 include:
      • Enhancements to the Augmented Business Visualization framework: creation of 2D hotspots has been extended in 2D drawings, PDF and image files and hotspots can now be defined as regional boxes rather than just text strings; new 3D hotspot links in models and drawings, where parts or components of 3D models can be selected to create hotspot links
      • Enhanced end-to-end approval workflows with digital stamping and batch stamping improvements
      • Solutions that visually enable the mobile workforce and extend enterprise visualization to mobile devices, including iPads through OVDI (Oracle Virtual Desktop Infrastructure)
      • Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle's Software Security Assurance standards
      • Timely support for new MCAD, ECAD, and Office formats
      • New 20.2 versions of AutoVue Document Print Services and Integration SDK (iSDK)
      • New Dutch language availability
    The press release also contains terrific supporting quotes from AutoVue customers and partners. "AutoVue's stamping enhancements will greatly benefit our building permit management processes," said Ties Kremer, Information Manager, Noordenveld Municipality, Netherlands. "The ability to batch stamp documents will speed up our approval processes, enable us to save time and money, and help us meet our regulatory compliance obligations." "AutoVue provides our non-technical teams in marketing and sales with access to customer order requirements and supporting CAD documents and drawings," said James Lim, Regional Technical Systems Manager at Molex Incorporated. "AutoVue 20.2 has enabled us to refine our quotation process, and reduce order errors." "We are excited about our use of AutoVue's Augmented Business Visualization framework, which will offer Meridian users enhanced access to related technical documentation," said Edwin van Dijk, Director of Product Management, BlueCielo. "By including AutoVue's new regional hotspot capabilities within BlueCielo Meridian Enterprise, the context of engineering information is carried over into the visual representation of complex assets, thereby helping us to improve productivity and operational excellence."

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year.  Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle.  Well, nothing could be further from the truth.  I cheered loudly when I first heard that the 2013 Summit would be outside Seattle.  I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).  And there is a funny dynamic that takes place when the conference is local.  If you do as I have done the last several years and saved my company money by not getting a hotel, but rather just commuting from home, then both family and coworkers tend to act like you’re just on a normal schedule.  For example, I have a young family, and my wife and kids really wanted to still see me come home “after work”, but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year.  Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous summits or have spoken to or heard of, from Twitter, blogs, and forums.  Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel.  So I’m just saying, there are really nice aspects of having the conference 2500 miles away. Beyond that, the training was fantastic as usual.  The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen.  The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great.  This year I was particularly impressed with how well attended were the Professional Development sessions.  Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms.  I would say this is a testament to both the maturity of the attendees realizing how important these topics are to career success, as well as to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people.  If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings. Overall, it was a great time as these events always are.  It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships).  We’re back in Seattle next year (it is a release year, after all) but I would say that with the success of this year’s event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule.  I’ll even go so far as to suggest standardizing on an alternating Seattle – Charlotte schedule, or something like that. If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Easy Made Easier - Networking

    - by dragonfly
        In my last post, I highlighted the feature of the Appliance Manager Configurator to auto-fill some fields based on previous field values, including host names based on System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.     The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file, and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.     The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light, which is almost halfway through the 24 total steps, and can be frustrating, whether it was a typo, DNS mis-configuration or IP address already in use. This is why I recommend as a best practice taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.     Why? Not only do you get the benefit of being able to double check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped, if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check to make sure that the host names you entered are in DNS and do resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites). A rough sketch of these two checks appears at the end of this excerpt.     After you have completed the validation, as seen in the screen shot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance, or if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.     I hope you are finding this series of posts useful.
My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a direct connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.
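    As a rough illustration of the kind of pre-checks that validation step performs (a forward DNS lookup plus a reachability test), here is a small stand-alone sketch. It is not the Appliance Manager Configurator's actual logic, and the host name/IP pairs are invented placeholders; the real tool does considerably more.

        import socket
        import subprocess

        # Hypothetical host-name/IP pairs you plan to enter in the configurator.
        PLANNED = {
            "odanode1.example.com": "192.0.2.11",
            "odanode2.example.com": "192.0.2.12",
        }

        for host, planned_ip in PLANNED.items():
            # 1. Is the host name in DNS, and does it resolve to the IP you intend to use?
            try:
                resolved = socket.gethostbyname(host)
            except socket.gaierror:
                print(f"{host}: not found in DNS")
                continue
            if resolved != planned_ip:
                print(f"{host}: DNS resolves to {resolved}, but {planned_ip} was planned")

            # 2. Is something already answering on that address? (It should still be free.)
            reply = subprocess.run(
                ["ping", "-c", "1", "-W", "1", planned_ip],
                stdout=subprocess.DEVNULL,
            )
            if reply.returncode == 0:
                print(f"{planned_ip}: already responding to ping - possible address conflict")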

    Read the article

  • Auto update not working

    - by Mifas
    When I get new updates, I can't install them. When I try to install I get the following error message. installArchives() failed: Preconfiguring packages ... Preconfiguring packages ... Preconfiguring packages ... Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 191976 files and directories currently installed.) Preparing to replace resolvconf 1.63ubuntu11 (using .../resolvconf_1.63ubuntu14_all.deb) ... Unpacking replacement resolvconf ... Preparing to replace libutouch-geis1 2.2.9-0ubuntu2 (using .../libutouch-geis1_2.2.9-0ubuntu3_i386.deb) ... Unpacking replacement libutouch-geis1 ... Preparing to replace vino 3.4.1-0ubuntu1 (using .../vino_3.4.2-0ubuntu1_i386.deb) ... Unpacking replacement vino ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for man-db ... Processing triggers for gconf2 ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for libglib2.0-0 ... Setting up oracle-java7-installer (7u3-0~eugenesan~precise4) ... Downloading... --2012-05-23 19:40:37-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz Resolving download.oracle.com (download.oracle.com)... 223.224.12.144, 223.224.12.146 Connecting to download.oracle.com (download.oracle.com)|223.224.12.144|:80... connected. HTTP request sent, awaiting response... 302 Moved Temporarily Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz [following] --2012-05-23 19:40:38-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz Resolving edelivery.oracle.com (edelivery.oracle.com)... 173.223.2.174 Connecting to edelivery.oracle.com (edelivery.oracle.com)|173.223.2.174|:443... connected. HTTP request sent, awaiting response... 302 Moved Temporarily Location: http://download.oracle.com/errors/download-fail-1505220.html [following] --2012-05-23 19:40:41-- http://download.oracle.com/errors/download-fail-1505220.html Connecting to download.oracle.com (download.oracle.com)|223.224.12.144|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 5307 (5.2K) [text/html] Saving to: `./jdk-7u3-linux-i586.tar.gz' 0K ..... 100%% 52.9K=0.1s 2012-05-23 19:40:41 (52.9 KB/s) - `./jdk-7u3-linux-i586.tar.gz' saved [5307/5307] Download done. sha256sum mismatch jdk-7u3-linux-i586.tar.gz Oracle JDK 7 is NOT installed. dpkg: error processing oracle-java7-installer (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Setting up resolvconf (1.63ubuntu14) ... Setting up libutouch-geis1 (2.2.9-0ubuntu3) ... Setting up vino (3.4.2-0ubuntu1) ... Processing triggers for resolvconf ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Errors were encountered while processing: oracle-java7-installer Error in function:

    Read the article

  • Virtualized data centre – Part three: Architecture

    - by marc dekeyser
    Having the basics (as discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time researching. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :). Processors As a rule of thumb you want VT-D (virtualization) technology built into the processor allowing you to have 64-bit machines running on your host. Memory The more the better! If you are building a home lab don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it! Motherboard Depends on what you are going to do with storage: If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one. DELL 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a dedicated card just adds more heat, power usage and possible points of failure. Networking Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or a server which uses iSCSI storage tends to have more than one, teamed up :). Storage The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just have a bunch of disks sitting in a server and manually distribute the virtual machines over the disks. Platform I'm a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented towards Server 2012 Hyper-V (aka 3.0)! What did I use? Before someone asks me this in the comments I'll give you a quick rundown of what I am using. - Intel 2.4 quad core processors (i something something) - 24 GB DDR3 Memory - Single disk in each server (might look at this as I move the servers to 2012) - Synology DS1812+ NAS - 3 network interfaces where possible - HP1800 procurve managed switch I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each, so I can imagine you can build the same or better for a lower price. The next article will be diagramming what I want to achieve and starting a build on the Hyper-V 3.0 cluster!

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination and doesn't apply to me. So anyway, here goes. When I resume from either power-saving mode, I'll get graphics problems anywhere on the range from a few scattered random-colored pixels that won't change; all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, partial black screen, or both) always end in me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine what severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues. I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments. Hardware Info See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.) Screenshots These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it: See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question. Situations Tested Maverick (10.10) Suspend Seems to suspend nicely with nothing running Seems to suspend nicely with flash drive plugged in On resume from suspend with no flash drive, Terminal and gedit running: Funky graphics on top of log output, then blank screen with pixelated cursor; no response to power button (normally will shutdown 60 seconds later) Hibernate Seems to hibernate nicely with nothing running Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running Seems to not hibernate when flash drive plugged in Seems to not hibernate when System Monitor is running Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff Natty LiveCD (11.04_2010-12-22) When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the Unlock screen. It did that four times before I gave up and just tested suspend from the Unlock screen. Suspend Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with above-described random pixels and mouse pointer. 
No apparent response to input from mouse (clicking randomly). Keyboard and touchpad unrecognized.

    Read the article

  • Any tips on getting hired as a software project manager straight out of college?

    - by MHarrison
    I graduated with a BS in compsci last September, and I've been trying (unsuccessfully) to find a job as a project manager ever since. I fell in love with software engineering (the formal practice behind it all, not just coding) in school, and I've dedicated the last 3-4 years of my life to learning everything I can about project management and gaining experience. I've managed several projects (with teams around 12 people) while in school, and I worked with my university's software engineering research lab. My résumé is also decent - I worked as a programmer before I went to school (I'm 27 now), and I did Google Summer of Code for 3 summers. I also have general "people management" experience via working as the photo editor for my university's newspaper for 2 years. My first problem with the job hunt is not getting enough interviews. I use careers.stackoverflow.com, which is awesome because I usually get contacted by non-HR people who know what they're talking about, but there's just not enough companies using it for me to get interviews on a regular basis. I've also tried sites like monster.com, and in a fit of desperation, I sent out no less than 60 applications to project management positions. I've gotten 3 automated rejection letters and that's it. At least careers.stackoverflow gets me a phone interview with 8/10 places I apply to. But the main (and extremely frustrating) problem is the matter of experience. I've successfully managed projects from start to finish (in my software engineering classes we had real customers come in with a real software need and we built it for them), but I've never had to deal with budgets and money (I know this is why HR people immediately turn me away). Most of these positions require 5+ years PM experience, and I've seen absurd things like 12+ years required. Interviews are also maddening. I've had so many places who absolutely loved me and I made it to the final round of interviews, and I left thinking things went extremely well and they'd consider me. However, when I check in with them a week later, they tell me "We really liked you and your qualifications are excellent, but we're hoping to find someone with more experience." The bad interviews I can understand - like the PM position that would have had me managing developers both locally and overseas - I had 3 interviews with them and the ENTIRE interview process was them asking me CS brainteasers and having me waste time on things like writing quicksort on paper or writing binary search trees. Even when I tried steering the discussion towards more relevant PM stuff, they gave me some vague generic replies and went back to the "We want to be Google/MS" crap. But when I have a GOOD interview, they say my "qualifications are excellent" but they want "more experience"...that makes me want to tear my hair out. What else can I DO? While I'm aiming for technically-involved PM positions (not just crunching budget numbers), I really don't want a straight development job because I like creating software from the very high-level vs. spending a lot of time debugging memory leaks. In fact, I can't even GET development positions that I'm qualified for because I make the mistake of telling them that my future career goals are as PM (which usually results in them saying something like "Well we already have PMs and this position isn't really set up to get you there." 
- which I take to mean "No, that's my job, stay away.") My apologies on the long rant, but I'm seriously hellbent on getting hired as a PM since it's both my career goal and the passion that keeps me awake at night. Any suggestions on what the heck else I can do? I'm currently writing a blog where I talk about my philosophies about software engineering, and I'm writing up specs for an iOS app which I will design, code, and show employers, but this takes an awful lot of time that I don't have.

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, however nearly all posters get good results. I'll start with hardware: Metabox W350ST notebook Intel Core i7 4700 16GB RAM GTX 765M (with Optimus) 128GB SSD 1TB SSHD My initial error output when trying to optirun a game is: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [133.973920] [ERROR]Aborting because fallback start is disabled. If anything else is needed to troubleshoot this just let me know. Adding bumblebee.conf: # Configuration file for Bumblebee. Values should **not** be put between quotes ## Server options. Any change made in this section will need a server restart # to take effect. [bumblebeed] # The secondary Xorg server DISPLAY number VirtualDisplay=:8 # Should the unused Xorg server be kept running? Set this to true if waiting # for X to be ready is too long and don't need power management at all. KeepUnusedXServer=false # The name of the Bumbleblee server group name (GID name) ServerGroup=bumblebee # Card power state at exit. Set to false if the card shoud be ON when Bumblebee # server exits. TurnCardOffAtExit=false # The default behavior of '-f' option on optirun. If set to "true", '-f' will # be ignored. NoEcoModeOverride=false # The Driver used by Bumblebee server. If this value is not set (or empty), # auto-detection is performed. The available drivers are nvidia and nouveau # (See also the driver-specific sections below) Driver=nvidia # Directory with a dummy config file to pass as a -configdir to secondary X XorgConfDir=/etc/bumblebee/xorg.conf.d ## Client options. Will take effect on the next optirun executed. [optirun] # Acceleration/ rendering bridge, possible values are auto, virtualgl and # primus. Bridge=auto # The method used for VirtualGL to transport frames between X servers. # Possible values are proxy, jpeg, rgb, xv and yuv. VGLTransport=proxy # List of paths which are searched for the primus libGL.so.1 when using # the primus bridge PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus # Should the program run under optirun even if Bumblebee server or nvidia card # is not available? AllowFallbackToIGC=false # Driver-specific settings are grouped under [driver-NAME]. The sections are # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto- # detection resolves to NAME). 
# PMMethod: method to use for saving power by disabling the nvidia card, valid # values are: auto - automatically detect which PM method to use # bbswitch - new in BB 3, recommended if available # switcheroo - vga_switcheroo method, use at your own risk # none - disable PM completely # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods ## Section with nvidia driver specific options, only parsed if Driver=nvidia [driver-nvidia] # Module name to load, defaults to Driver if empty or unset KernelDriver=nvidia PMMethod=auto # colon-separated path to the nvidia libraries LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current # comma-separated path of the directory containing nvidia_drv.so and the # default Xorg modules path XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules XorgConfFile=/etc/bumblebee/xorg.conf.nvidia ## Section with nouveau driver specific options, only parsed if Driver=nouveau [driver-nouveau] KernelDriver=nouveau PMMethod=auto XorgConfFile=/etc/bumblebee/xorg.conf.nouveau DRIVER VERSION - Output of jockey-text -l: nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years. In these 4 years I have spent my time on the internet learning things. My current skillset comprises HTML, CSS, PHP, MySQL and jQuery (I would not say JS but rather jQuery, because I am good at using jQuery and bad with plain JavaScript). The above seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things on my own using the web and references. Design My first option is towards design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash and Flex? Design along with my previous skills looks like a money maker to me, as the two are closely related where web design is concerned. And it's easier to learn the first two, and I hope I can get tutorials for the last two as well. Marketing A lot of my existing clients asked me if I do SEO. So this looked like a good field to me as well. I cannot estimate the scope of SEO but I assume it has a long future. Since I am business-minded as well and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC or whatever it consists of? Software Development The most complex and (perhaps) hardest thing, but the easiest way to find a decent job in my location. If I go for software development, which platform should I ideally be going after? Should it be C# for Windows development, ASP.NET (which once again enhances my skill set), J2EE (there are a lot of jobs for J2EE developers here) or plain C and C++? Also, I think it is difficult to learn programming languages right from Hello World using the internet. I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether that is because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it also possible to learn software development right from Hello World using the web? Database / Server (Linux) / Network Administration Seems like a job with decent pay, but there are fewer jobs and it is a bit harder to learn online (not sure). Which is the right track for me to move ahead on? P.S. - Age is not a constraint for me as I am between 20 and 21, and I come from an IT background. I know a few basics of C (up to structures), C++ (up to objects; I was not able to understand templates), Core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to do this long back) and UNIX (a bunch of commands like who, finger, chmod, ls and a bit of bash). Or is there anything else that I left out? I need you guys to please give me feedback and the reason why I should select a particular field.

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it. Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel. When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of the idea of checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just like it states in the Azure Portal, so in effect there is a cost for the service and then a cost to use it, like water or power or any other utility. Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier? As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do. By the way, you don't have to cancel your whole account not to be billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.
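    To make the "hotel room you are not sleeping in" point concrete, here is a back-of-the-envelope calculation; the hourly rate is a made-up placeholder for illustration, not a quote from any price list.

        # An instance left deployed but idle still accrues compute hours.
        hourly_rate = 0.12          # assumed example rate in USD - check the Azure Portal for real pricing
        hours_in_month = 24 * 30    # roughly one month of wall-clock time
        idle_bill = hourly_rate * hours_in_month
        print("Idle instance for a month: about $%.2f" % idle_bill)  # about $86.40

    Deleting the instance from the portal stops the meter, which is exactly the advice in the post above.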

    Read the article

  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015?  According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PCs or other wired devices.  There's no doubt that mobile devices are a critical means of communication today, and are on track to become increasingly more important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever-connected mobile world. So what are some of the things organizations should look for in such a solution? Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery.  By automatically detecting a site visitor's device configuration, the selected web content can be sized and formatted for optimal display on that particular device. (A simple sketch of this detection idea appears at the end of this excerpt.) This can save tremendous time involved in building, formatting, and maintaining individual websites or mobile applications for different mobile devices. It's not enough to simply support the thousands of different mobile device types that are out there. It's also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization's web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Powerful capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels. Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors.
Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more – all critical for meeting the needs of those on the go.To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.
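    As a toy illustration of the device-detection idea described above (not how Oracle WebCenter Sites or its Mobility Server is implemented), a web tier can inspect the incoming User-Agent header and choose a rendering of the same content accordingly. The patterns and template names here are invented for the example.

        import re

        # Minimal, illustrative mapping from User-Agent patterns to template variants.
        DEVICE_RULES = [
            (re.compile(r"iPad|Android(?!.*Mobile)", re.I), "tablet"),
            (re.compile(r"iPhone|Android.*Mobile|Windows Phone", re.I), "phone"),
        ]

        def pick_template(user_agent):
            """Return which rendering of the same content to serve to this visitor."""
            for pattern, variant in DEVICE_RULES:
                if pattern.search(user_agent):
                    return "article.%s.html" % variant
            return "article.desktop.html"

        print(pick_template("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"))
        # -> article.phone.html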

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like "Watch the pennies and the pounds/dollars will take care of themselves", meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago and I have been exploring the SQL Prompt 6 EAP since then. One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example I can type "ssf" and then press the tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let's look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this: The first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens.   Yep, we didn't destroy the temporary table so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this.   Nothing too technical here, but you will see that in the Code section there is $CURSOR$; this isn't a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the Code is pasted into the SSMS Editor. I just place my cursor above the CREATE statement and type "ifobj" – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat and it's been very useful in saving me lots of time over many years.   The news for SQL Prompt 6 is that Red Gate have added a new Snippet Command of $PASTE$. Let's alter our snippet to the following and try it out.   Once again, we will type "ifobj" in the SSMS Editor but first of all, highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don't need to type the table name into the code snippet, it's already there and I am seeing a fully functioning piece of TSQL ready to run. This means it is even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.
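    As a toy illustration of what the $CURSOR$ and $PASTE$ placeholders do (the idea only, not SQL Prompt's implementation), expanding a snippet amounts to splicing in the clipboard text and remembering where the caret should land. The snippet body below is a guess at the "ifobj" definition shown in the post's screenshots, included only to make the example self-contained.

        def expand_snippet(template, clipboard):
            """Return the expanded snippet text and the index where the cursor should land."""
            text = template.replace("$PASTE$", clipboard)
            cursor = text.find("$CURSOR$")
            if cursor == -1:
                cursor = len(text)
            return text.replace("$CURSOR$", ""), cursor

        # A plausible body for the "ifobj" snippet described in the post.
        snippet = "IF OBJECT_ID('tempdb..$PASTE$') IS NOT NULL\n    DROP TABLE $PASTE$\n$CURSOR$"

        expanded, caret = expand_snippet(snippet, "#TestTable")
        print(expanded)  # a ready-to-run guard that drops #TestTable if it already exists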

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago using PHP. At that time it didn't use any framework, but the code isn't a mess. Nowadays the project has grown and I'm now working with 3 more programmers. Often they ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I'm a small business and I don't have the time or money to stop and spend a whole year building everything from scratch. I need to live and pay the bills. So I've read a lot about this matter, and I decided that refactoring is the best way to do it. Also, I'm not so sure that a framework will make things easier. Business goals are: make the code easier for newly hired programmers; separate the "view", in order to release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view) and to release different versions to fit mobile/tablet; make different types of this product, selling packages as if they were plugins; develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application). Code goals: introduce best practices and standards for everyone; try to build my own MVC structure; improve validation of data/forms (today they are mixed in both ajax and classes); create automated testing routines for quality assurance. My current project structure: class\ extra\ hd\ logs\ public_html\ public_html\includes\ public_html\css|js|images\ class\ There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces. 1. class.Something.php Connects to the database using specific methods, e.g. Costumer->list(). It uses "class.Db.php", which is an abstraction of mysql used in every method. 2. class.SomethingProc.php Does things that "join" things that come from "class.Something.php", like IF/ELSE and math operations. 3. class.SomethingHTML.php The classes with the "HTML" suffix implement only static methods and HTML code only. A real life example: all the programmers need to use $cSomething ($c for class) and $arrSomething ($arr for array). Costumer.php (view) <?php $cCostumer = new Costumer(); $arrCostumer = $cCostumer->list(); echo CostumerHTML::table($arrCostumer); ?> Extra\ Stores 3rd-party projects/classes from others, such as MPDF, PHPMailer, etc. Hd\ Stores users' files outside the wwwroot dir. Logs\ Stores PHP logs and the system's own logs (we have a static Log::error() method that we call in every method of every class). Public_html\ Stores the files that people use. Public_html\includes\ Stores the main "config.php" file and all files that do "ajax things" (ajax.Costumer.php, for example). Help is needed ;) So, as you can see, we have some standards, and also for database things. But I want to write a manual of our rules, something that I can give to any new programmer at my company so he can get going. This is not totally a mess, but it could be better given current practices. What could I do to separate this as MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc. would be nice.

    Read the article

  • Seeking advice on tools and technology for my new game [closed]

    - by k.k. slider
    I'm a C# developer who has been programming a game in my spare time using XNA and Visual Studio. The game's logic is mostly done and I've completed a prototype that has most of the functionality of (what I envision to be) the final game. However, having heard about the uncertain future and (possibly) limited audience for XNA games, I'm looking to switch platforms... but I don't know what technology would best suit my needs. Below are some specifics about my game and what exactly I'm looking for, if you're interested: The game is a 2D turn-based tactical RPG (strategy game) for two players. It is a basic sprite and tile based game with animations and sound. 3D capabilities are not necessary. I'd like to allow players to compete with others online, and have a basic ranking/matchmaking system. I will probably need something that can interact with a server and a database (the game is turn-based and has no RNG, so cheating would be easy to detect even if most computation is done client-side and minimal data is sent to the server). Ideally, I would be able to release an early version of the game and have people give feedback as I develop additional features (similar to Minecraft). I'd prefer to have a way to release periodic updates to the game instead of releasing an absolute final product. To reach the widest possible audience, I'd prefer technology that allows me to release on PC, Android, iOS, and (maybe) Mac. This is a game with simple mouse inputs which can fit on a mobile touch screen. The game should be monetizable. If I find success with this game, then I may consider becoming a full-time indie game developer. I have several other game ideas and have learned quite a bit from my first attempt at game development. My first thought was an F2P/microtransaction model, but I'm open to other suggestions. Language isn't a primary concern of mine, since I have a decent amount of experience using several languages to program large projects. I'm willing to spend money (e.g. on a developer's license), but the more expensive it gets, the more hesitant I am to use it. I've looked into the following solutions... there are a LOT of tools out there... if anyone has experience with any of these and would like to recommend/reject any of them, it would be helpful. C#/.NET (XNA/MonoGame/SDL/SlimDX/Xamarin/ExEn/ANX?) HTML5/JS (AppMobi/PhoneGap/Marmalade/FlashCanvas/Cordova/libRocket?) Python (Pyglet/Pygame/Kivy?) Java (JavaFX/libGDX?) Unity/Construct 2/Cocos2D/NME/Corona/other game creation software? I'd like something that can do 2D and isn't limited by being too high-level. Other languages (Lua/LOVE? Moai?) Thanks for answering this rather long and tedious question...

    Read the article

  • How can a developer realize the full value of his work [closed]

    - by Jubbat
I, honestly, don't want to work as a developer in a company anymore after all I have seen. I want to continue developing software, yes, but not in the way I see it all around me. And I'm in London, a city where lots of great developers from all over the world congregate, so it shouldn't be a problem of location. So, what are my concerns? First of all, best case scenario: you are paying the manager's salary out of yours. You are consistently underpaid, making up for the average manager's negative net return plus his whole salary. Typical scenario: I am a reasonably good developer with common sense who cares about readable code and attention to basic principles. I have found, way too often, overconfident and arrogant developers with a severe lack of common sense. Personally, I don't want to follow TDD or Agile practices like all the cool kids nowadays. I would read about them, form my own opinion and take what I feel is useful, but not follow them sheepishly. I want to work with people who understand that you have to design good interfaces, that you absolutely have to document your code, and that readability is at the top of your priorities. Also people who don't have a cargo-cult mentality. For instance, the same person who asked me about design patterns in a job interview later told me that something like a List of Map of Vector of Map of Set (in Java) is very readable. Why would someone ask me about design patterns if they can't even grasp encapsulation? These kinds of things are the norm. I've seen many examples. I've seen worse than that too, from very well paid senior devs, by the way. Every second that you spend working with people with such a lack of common sense and clear thinking, you are effectively losing money by being terribly inefficient with your time. Yet, with all these inefficiencies, the average developer earns a high salary. So I tried working on my own, although I don't like the idea. I prefer a healthy exchange of opinions and ideas and a division of tasks. I then did a bit of online freelancing for a while, but I think working in a sweatshop might be more enjoyable. Also, I studied computer engineering, yet you are in an environment in which your client will presume you don't have any formal education because there is no way to prove it. Again, you are undervalued. You could try building a product, yes. But, of course, luck is a big factor. I wonder if there is a way to work in something you can do well, software development, and be valued for the quality of your work and be paid accordingly, where you and only you get fairly paid for the value you generate. I know that what I have written seems somewhat unlikely, but I strongly feel this way. Hopefully someone will understand me and has already figured this out. I don't think I'm alone in this kind of feeling.

    Read the article

  • No Wireless Networks, BCM4313 [duplicate]

    - by TalonPlz
This question already has an answer here: How to Install Broadcom Wireless Drivers (BCM43xx) 38 answers Just bought this little Asus 10 inch laptop that came with Ubuntu 12.04. Everything at my home was fine: wireless identified and connected. As soon as I went to my girlfriend's house the trouble started. I couldn't connect to wireless (authentication... times out and asks for authentication). I started searching the internet and tried a few solutions posted online using terminal commands. No solutions. I decided to upgrade to 12.10-13.04 and that left me with a worse problem: I can no longer see ANY networks whatsoever. The wireless card is ON, without a doubt. The wired connection works. I have been fumbling with driver versions to no avail, and have no idea which driver I am currently running. I have a vague idea of what terminal lines to run: lshw: resources: irq:17 memory:f7d00000-f7dfffff *-network description: Wireless interface product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth2 version: 01 serial: dc:85:de:56:c4:ea width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.20.155.1 (r326264) latency=0 multicast=yes wireless=IEEE 802.11abg resources: irq:17 memory:f7d00000-f7d03fff iwconfig: eth1 no wireless extensions. eth2 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off lo no wireless extensions. I am new and excited to start my Ubuntu and Linux life and this is only the first of my few hiccups, I am sure! :) Thanks all UPDATE: Report from 2nd answer talon@Black1015E:~$ sudo apt-get remove --purge bcmwl-kernel-source [sudo] password for talon: Reading package lists... Done Building dependency tree Reading state information... Done Package 'bcmwl-kernel-source' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. talon@Black1015E:~$ wget http://us.archive.ubuntu.com/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb --2013-10-22 18:50:32-- http://us.archive.ubuntu.com/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 2001:67c:1562::15, 2001:67c:1562::13, 2001:67c:1562::14, ... Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|2001:67c:1562::15|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1181334 (1.1M) [application/x-debian-package] Saving to: ‘bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb’ 100%[======================================>] 1,181,334 3.37MB/s in 0.3s 2013-10-22 18:50:33 (3.37 MB/s) - ‘bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb’ saved [1181334/1181334] talon@Black1015E:~$ arvh No command 'arvh' found, did you mean: Command 'arch' from package 'coreutils' (main) arvh: command not found talon@Black1015E:~$ arch x86_64 talon@Black1015E:~$ sudo dpkg -i bcmwl*.deb Selecting previously unselected package bcmwl-kernel-source. (Reading database ... 171895 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ... Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ... Loading new bcmwl-5.100.82.112+bdcom DKMS files... First Installation: checking all kernels...
Building only for 3.8.0-32-generic Building for architecture x86_64 Building initial module for 3.8.0-32-generic Done. wl: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.8.0-32-generic/updates/dkms/ depmod........ DKMS: install completed. Error: Module b43 is not currently loaded Error: Module b43legacy is not currently loaded Error: Module ssb is not currently loaded Error: Module bcm43xx is not currently loaded Error: Module brcm80211 is not currently loaded Error: Module brcmfmac is not currently loaded update-initramfs: deferring update (trigger activated) Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.8.0-32-generic rebooting now
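A common follow-up to the steps above, offered here as a hedged sketch rather than a guaranteed fix for this particular laptop, is to make sure the conflicting open-source drivers are unloaded and the freshly built wl module is actually loaded before rebooting:

    sudo modprobe -r b43 b43legacy ssb brcmsmac bcma   # unload drivers that conflict with wl
    sudo modprobe wl                                   # load the Broadcom STA driver built by DKMS
    rfkill list                                        # confirm the radio is not soft/hard blocked

If wl loads cleanly and rfkill reports no blocks, the list of visible networks should come back after a reboot or a restart of network-manager.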

    Read the article

  • Why won't my code work in Ubuntu Server 11.10? Is it because of gd library?

    - by Derrick
    I get this error when running the following code: No such file found at "widgets/104-text.png" I know that the code works because it works on my other non-ubuntu server. I do not know if it is gd library or what. I tried both the bundled version and the non-bundled and both do not make this code work. $con = mysql_connect("localhost","user","abc123"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("satabase_name", $con); $productid2 = $this->product->id; $thename = mysql_query("SELECT * FROM pshop_product_lang WHERE id_product = '$productid2' LIMIT 1"); $thename2 = mysql_fetch_array($thename); $string2 = $thename2['name']; $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2; $font = 4; $width = imagefontwidth($font) * strlen($string); $height = imagefontheight($font); $image = imagecreatetruecolor ($width,$height); $white = imagecolorallocate ($image,255,255,255); $black = imagecolorallocate ($image,0,0,0); imagefill($image,0,0,$white); imagestring($image,$font,0,0,$string, $black); imagepng($image, 'widgets/' . $productid2 . '-text.png'); $getimg110 = mysql_query("SELECT * FROM pshop_image WHERE id_product = '$productid2'"); $gotimg110 = mysql_fetch_array($getimg110); $slash110 = addcslashes($gotimg110[id_image], '\0..\999999999999999999999'); $str110 = str_replace('\\', '/', $slash110); $newimg110 = '<img src="img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg" />'; include("conf.inc.php"); include('ImageWorkshop.php'); // Initialization of layer you need $pinguLayer = new ImageWorkshop(array( 'imageFromPath' => 'widgets/background.png', )); $pinguLayer2 = new ImageWorkshop(array( 'imageFromPath' => 'img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg', )); $pinguLayer3 = new ImageWorkshop(array( 'imageFromPath' => 'widgets/' . $productid2 . '-text.png', )); // resize pingu layer $thumbWidth2 = 150; // px $thumbHeight2 = 150; $thumbWidth = 400; // px $thumbHeight = 200; $pinguLayer2->resizeInPixel($thumbWidth2, $thumbHeight2); $pinguLayer->resizeInPixel($thumbWidth, $thumbHeight); // Add 2 layers on pingu layer $pinguLayer->addLayerOnTop($pinguLayer2, null, null, 'LM'); $pinguLayer->addLayerOnTop($pinguLayer3, 70, 25, 'MM'); // Saving the result in a folder $pinguLayer->save("widgets/", $productid2 . ".gif", true, null, 95); The file path is correct, however this part of the code is not creating the image as it is supposed to: $thename2 = mysql_fetch_array($thename); $string2 = $thename2['name']; $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2; $font = 4; $width = imagefontwidth($font) * strlen($string); $height = imagefontheight($font); $image = imagecreatetruecolor ($width,$height); $white = imagecolorallocate ($image,255,255,255); $black = imagecolorallocate ($image,0,0,0); imagefill($image,0,0,$white); imagestring($image,$font,0,0,$string, $black); imagepng($image, 'widgets/' . $productid2 . '-text.png');
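For illustration, a hedged pre-flight check for the save step is sketched below; the $widgetsDir name is made up for this sketch, and $image and $productid2 come from the code above. The idea it demonstrates is that a "No such file found" from the later imageFromPath call usually means the relative widgets/ path did not resolve, or was not writable, from the directory PHP runs in on the new server:

    <?php
    // Hypothetical check to run just before the existing imagepng() call.
    $widgetsDir = __DIR__ . '/widgets';        // assumed location of the widgets folder

    if (!is_dir($widgetsDir)) {
        mkdir($widgetsDir, 0775, true);        // create the folder if it is missing
    }
    if (!is_writable($widgetsDir)) {
        die("Cannot write to $widgetsDir - check ownership/permissions for the web server user.");
    }

    // Use an absolute path so the working directory no longer matters.
    imagepng($image, $widgetsDir . '/' . $productid2 . '-text.png');

If the directory check fails on the Ubuntu box but not on the other server, the problem is permissions or the working directory rather than the GD library itself.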

    Read the article

  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
Digital World, Digital Locks One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably one of the most important of these assets is passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetric keys, private keys, etc., etc. Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them? Good-Ol’ Post-It Notes Under Your Keyboard If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: You should keep your passwords in an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations. Now I have to break that statement down for you: - Encrypted: what’s the point of saving your passwords in a file that any Windows admin with enough privileges can read? - Access controlled: This one is pretty much self-explanatory. - Version controlled: Passwords change (and I’m really hoping you do change them) and version control would allow you to track what a previous password was if the utility you’ve chosen doesn’t handle that for you. - Backed up: You want a safe copy of the password list to be kept offline, preferably in long-term storage, with relative ease of restoring. - Well-known shared location: This is critical for teams: what good is a password list if only one person on the team knows where it is? I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions like cell encryption for this. I have found such implementations to be impractical, for the most part. Enter The World Of Utilities There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, Sharepoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before worth mentioning include PasswordSafe and 1Password, with the latter being a paid solution – but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers that I work with don't need that level of protection of their digital assets, and something like a KeePass database on Sharepoint suits them very well. What are you doing to safeguard your passwords? Leave a comment below, and join the discussion! Cheers, -Argenis

    Read the article

  • Top Three Reasons to Move to the Cloud Before Your Next Upgrade

    - by yaldahhakim
1) Reduced Cost - During major upgrades, most organizations typically need to replace or invest in extra hardware and other IT resources to support the upgrade. With the Cloud, this can become more of an op-ex discussion. The flexibility and scalability of the cloud also allow new business solutions to be set up more quickly, with the ability to scale IT resources to closely map to changing business requirements. This enables more and faster innovation, because you are spending money on core business initiatives instead of setting up complex environments. 2) Reduced Risk - This is especially true when you are working with a cloud provider that possesses substantial in-house expertise. Oracle Managed Cloud Services has been hosting and managing customers’ business applications for over a decade and has helped hundreds of customers upgrade and adopt new technologies faster and better. Customers have access to over 15,000 Oracle experts in operations centers around the world who can work around the clock and have direct access to Oracle Development to optimize our customers’ upgrade experience. 3) Reduced Downtime - Whether a customer is looking to upgrade their E-Business Suite, PeopleSoft, JD Edwards, or Fusion applications, we’ve developed standardized best practices and tools across the technology stack to accelerate the upgrade and migration with substantially reduced timelines and risk. And because the process is repeatable, customers stay more current on the latest releases, continuously taking advantage of the newest innovations – without the headache. By leveraging the economies and expertise of scale that belong to Oracle, you can sleep better at night knowing that your next major application upgrade is taken care of. Check out the video of this Managed Cloud Services customer to learn more about their experience.

    Read the article

  • What are they buying &ndash; work or value?

    - by Jamie Kurtz
When was the last time you ordered a pizza like this: “I want the high school kid in the back to do the following… make a big circle with some dough, curl up the edges, then put some sauce on it using a small ladle, then I want him to take a handful of shredded cheese from the metal container and spread it over the circle and sauce, then finally I want the kid to place 36 pieces of pepperoni over the top of the cheese”?? Probably never. My typical pizza order usually goes more like this: “I want a large pepperoni pizza”. In the world of software development, we try so hard to be all things agile. We: write lots of unit tests; refactor our code, then refactor it some more; avoid writing lengthy requirements documents; try to keep processes to a minimum and give developers freedom; and we are proud of our constantly shifting focus (i.e. we’re “responding to change”). Yet, after all this, we fail to really lean on and capitalize on one of agile’s main differentiators (from the twelve principles behind the Agile Manifesto): “Working software is the primary measure of progress.” That is, we foolishly commit to delivering tasks instead of features and bug fixes. Like my pizza example above, we fall into the trap of signing contracts that bind us to doing tasks – rather than delivering working software. And the biggest problem here… by far the most troubling outcome… is that we don’t let working software be a major force in all the work we do. When teams manage to ruthlessly focus on the end product, it puts them on the path of true agile. It doesn’t let them accidentally write too much documentation, or spend lots of time and money on processes and fancy tools. It forces early testing that reveals problems in the feature or bug fix. And it forces lots and lots of customer interaction. Without that focus on the end product as your deliverable… by committing to a list of tasks instead of a list of features and bug fixes… you are doomed to NOT be agile. You will end up just doing stuff, spending time on the keyboard, burning time on timesheets. Doing tasks doesn’t force you to minimize documentation. It makes it much harder to respond to change. And it will eventually force you and the client into contract haggling. Because the customer isn’t really paying you to do stuff. He’s ultimately paying for features and bug fixes. And when the customer doesn’t get what they want, responding with “well, look at the contract - we did all the tasks we committed to” doesn’t typically generate referrals or callbacks. In short, if you’re trying to deliver real value to the customer by going agile, you will most certainly fail if all you commit to is a list of things you’re going to do. Give agile what it needs by committing to features and bug fixes – not a list of ToDo items. So the next time you are writing up a contract, remember that the customer should be buying the finished pizza – not the list of steps for the kid in the back.

    Read the article

  • Why Apple’s New SDK Limitation is So Offensive

    - by TStewartDev
    I am not an Apple fanboy, nor have I ever been. However, I have owned a Mac, an iPod, and an iPhone in my lifetime, and for more than a decade, I have defended Apple against the untruths that the haters so enjoy spewing. I encouraged my wife to buy a MacBook when she needed a new laptop two years ago, and I often recommend them to my friends and relatives. I have proudly and happily used my first generation iPhone for nearly three years. Now, for the first time in well over ten years, I find myself ready to swear off Apple and encourage everyone I know to do the same. I was disappointed when Apple wouldn't allow native apps, but I still bought the iPhone. I've stomached their ambiguous app approval process even though it's apparent that Steve may just reject your app because he doesn't like you or feels threatened by you (I'm still lamenting the rejection of the Google Voice app). But, as a developer, I can no longer tolerate Apple's terms and the kind of totalitarian control they indicate Apple wants. In case you are not already familiar, Apple has dictated in their OS 4.0 SDK license agreement (the now infamous Section 3.3.1) that all apps developed for the iPhone must be coded in C, C++, or Objective C, and moreover, that using any cross-compiling platforms is a violation of the agreement. For those of you who aren't developers, let me try to illustrate why this angers those of us who are. Imagine you're a professional writer. You've had articles published in some journals and magazines, and you've got a couple popular books out there, too. You've got an idea for a new book, and so you take it to your publisher. Your publisher agrees that it's a good idea. "But," says the publisher, "we want to hold our books to a tighter standard so that our readers get the experience we want them to have. Therefore, from now on, all our writers may only use words from this list of the 10,000 most common English words. Furthermore, if you cite any other works or quote anyone, they must comply with that same list, or you'll have to rewrite the entire work as well in case our readers want to look up your citation." What do you do? If your work is a children's book, this probably isn't a big deal to you. If it's an autobiography, textbook, or even a novel, though, you're going to have a lot of trouble describing your content with only common words. It's going to take you longer to complete your book, too, since you'll be looking up less common words frequently to see if you can use them. You could always go to another publisher, but this one has the best ability to distribute your book. The next largest distributor can only do a quarter as much. You could abandon the project altogether, but then everyone loses. Isn't this a silly scenario? Who would put such a limitation on writers? Yet this is very much what Apple is doing. They are using their dominant position in the market to coerce developers to write their apps exclusively for the iPhone OS by making it too expensive to write for multiple platforms. It is at least a threefold attack, striking at Adobe who is set to release a compiler that lets Flash source be compiled to iPhone binaries; striking at Google whose Android platform stands the best chance at the moment of providing serious competition to the iPhone; and reinforcing their own strong position by keeping popular apps exclusively to iPhone. 
And while developers are already very upset about this, the sad fact is that most of us will cave and give in to Apple because consumers don't know any better. They will continue to buy Apple's toys, forcing developers to play Apple's maniacal game in order to make any money, at least until Steve Jobs decides he doesn't like them or intends to release a competing application (bye-bye, OpenFeint). Apple has been kept in check on the desktop front by a very dominant Microsoft, but I'm afraid that their success with iPods, iTunes, and iPhones has created a monster that we may have to bear until it is slain by an anti-trust suit or dies with the retirement of Steve Jobs.

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >