Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • SPARC Solaris Momentum

    - by Mike Mulkey-Oracle
    Following up on the Oracle Solaris 11.2 launch on April 29th: if you were able to watch the launch event, you saw Mark Hurd state that Oracle will be No. 1 in high-end computing systems "in a reasonable time frame." "This is not a 3-year vision," he continued. Well, according to IDC's latest 1QCY14 Tracker, Oracle has regained the #1 UNIX shipments market share! You can see the report and read about it here: Oracle regains the #1 UNIX Shipments Marketshare. Suffice it to say that SPARC Solaris is making strong gains on the competition. If you have seen Oracle's public roadmap through 2019, with its commitment to continue to deliver on this technology, you can see that Mark Hurd's comment was not to be taken lightly. We feel the systems tide turning in Oracle's direction and are working hard to show our partner community the value of being a part of the SPARC Solaris momentum.

    We are now planning for the Solaris 11.2 GA in late summer (the 11.2 beta is available now), as well as doing early preparations for Oracle OpenWorld 2014 on September 28th. Stay tuned!

    Here is a sampling of the coverage highlights around the Oracle Solaris 11.2 launch:

    "Solaris is still one of the most advanced platforms in the enterprise." – ITBusinessEdge
    "Oracle is serious about clouds now, just as its customers are, whether they are building them in their own datacenters or planning to use public clouds." – EnterpriseTech
    "Solaris is more about a layer of an integrated system than an operating system." — ZDNet

    Read the article

  • How to get httpd to forward to multiple Tomcat instances for different URLs, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports, and I also have Apache httpd listening on port 8090 (because I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config:

        <VirtualHost *:8090>
            JkMount /preprod* preprod
            JkMount /demo* demo
        </VirtualHost>

    But I also want the "root" address to map to another Tomcat instance, which will become live/production; i.e. I want mydomain.com:8090/ to map to a third Tomcat instance. At the moment nothing happens or changes if I just add this line to the above config:

        JkMount /* rootwar

    If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to redirect the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it would effectively mean "if it's not one of the other environments, then direct to root/production". Just to be clear, I'm trying to set up the following:

        mydomain.com:8090/preprod --> myApp running in tomcat1
        mydomain.com:8090/demo    --> myApp running in tomcat2
        mydomain.com:8090         --> myApp running in tomcat3
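    For what it's worth, here is a minimal sketch of the vhost layout the question describes, with all three mounts in one place. It assumes preprod, demo and rootwar are all defined as workers in workers.properties and that Apache is reloaded after the change; if I remember the mod_jk matching rules correctly, the longest matching prefix wins, so the specific mounts still route ahead of /*. This is a sketch of the intended setup, not a verified fix:

        <VirtualHost *:8090>
            # Specific prefixes; mod_jk should pick the longest matching pattern
            JkMount /preprod* preprod
            JkMount /demo*    demo

            # Everything else, including "/", goes to the production instance
            JkMount /* rootwar
        </VirtualHost>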

    Read the article

  • Geekswithblogs.net Influencer Program

    - by Staff of Geeks
    Recently, @StaffOfGeeks announced the Geekswithblogs.net Influencer Program to a select group of bloggers. Here is a little detail about the program.

    Description (from the Influencer page): Geekswithblogs.net is a community of bloggers passionate about contributing information to the world of developers and IT professionals. Our bloggers are some of the best in the world and regularly receive honors from outside companies (such as the Microsoft© MVP Program). The Geekswithblogs.net Influencer Program is our way, as Staff of Geeks, to show our appreciation for those bloggers who have the greatest influence on the site. Each influencer in the program is rewarded based on the number of posts, views, and comments they received on their posts in the previous quarter. Each quarter, we select the top 25 bloggers of influence and give them special access to products and services we have obtained through our network of partners.

    Here is how it works. Each quarter we select the top 25 bloggers based on the number of posts they created in that quarter, and apply points for the views and comments those posts receive. The selection is based purely on the numbers; we do not select anyone on any other basis. In fact, in the first round, several of our key bloggers did not qualify in the top 25. Though they are still loved dearly, we wanted a program that anyone could be a part of if they put in the hard work.

    That said, the first quarter ends at the end of March and we will have another round of influencers joining us. Keep the posts rolling and maybe you will be selected as an influencer! Visit the influencers page to see who has the greatest influence on Geekswithblogs right now.

    Technorati Tags: Geekswithblogs

    Read the article

  • Agile Data Book from O'Reilly Media

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/07/01/153309.aspx

    As part of my ongoing self-education, and with some free time approaching (yes, both are a must for every IT person and geek!), I have carefully examined the latest trends in the Computersphere with whatever tools I had at my disposal (nothing really fancy was used) and came to the conclusion that for a database pro the *hottest* topic today is undoubtedly #BigData and the rapidly growing and spawning ecosystem around it. Having recently immersed myself in the NoSQL world (let me say right away that NoSQL means Not Only SQL), one book really stood out from the crowd.

    Book site: http://shop.oreilly.com/product/0636920025054.do

    Despite being a new book, I am sure it will end up on the tables of many Big Data generalists. In a few dozen words, that is primarily for two reasons: 1) the author understands that a typical business today cannot wait too long for a Data Scientist to deliver results, demanding as usual a very quick return on investment (ROI), and 2) the book covers all the needed and proven modern brick-and-mortar offerings to get the job done by a relative newcomer to the Big Data world. It certainly enables such a professional to grow and expand based on the acquired knowledge, and one can truly do it very fast.

    Read the article

  • How to view/mount other partitions on your hard drive

    - by Preston Zacharias
    Recently I installed Ubuntu 12.04 Beta 2 from a USB flash drive onto an old external HDD, which I have taken out of its casing and successfully mounted in my desktop computer. There is no other operating system besides the newly installed Ubuntu. However, there is about 500 GB of data on the drive. This is why I used partitioning software on my Windows 7 netbook to partition the hard drive, setting aside 1 TB for files, 350 GB of space for Linux and the remaining 650 GB for Vista, which I plan on installing soon. But this is where the problem sets in: when installing Ubuntu, it does not recognize that the drive is partitioned at all; it's just one big open block of space. So I used the installer's built-in partitioning feature to set aside 300 GB for the main Ubuntu install and 50 GB for swap space. I set both of these partitions to be created at the "end" so that it wouldn't delete or write over my data. And this is where I am really lost: when booting into Ubuntu I am able to use it perfectly fine, got on the internet, etc., but I have NO CLUE as to how I can view the files that were previously on the drive (all of the data that I had prior to the install). How can I mount/be able to view the other partition so that I can have access to my data? Thank you ahead of time! I REALLY appreciate any help or advice! ~Preston

    Read the article

  • Safe zone implementation in Asteroids

    - by Moaz
    I would like to implement a safe zone for asteroids, so that when the ship gets destroyed it shouldn't respawn there unless the zone is safe from other asteroids. I tried to check the distance between each asteroid and the ship, and if it is above a threshold, it sets a flag on the ship saying the zone is safe; but sometimes it works and sometimes it doesn't. What am I doing wrong? Here's my code:

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin(); itr_astroid != asteroids.end(); )
        {
            if (currentShip.m_state == Ship::Ship_Dead)
            {
                float distance = itr_astroid->getCenter().distance(Vec2f(getWindowWidth()/2, getWindowHeight()/2));
                if (distance > 200)
                {
                    currentShip.m_saveField = true;
                    break;
                }
                else
                {
                    currentShip.m_saveField = false;
                    itr_astroid++;
                }
            }
            else
            {
                itr_astroid++;
            }
        }

    At the ship's death:

        if (m_state == Ship_Dead && m_saveField == true)
        {
            --m_lifeSpan;
        }
        if (m_lifeSpan <= 0 && m_saveField == true)
        {
            m_state = Ship_Alive;
            m_Vel = Vec2f(0, 0);
            m_Pos.x = app::getWindowWidth()/2;
            m_Pos.y = app::getWindowHeight()/2;
            m_lifeSpan = 100;
        }
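    For comparison, here is a minimal sketch of the check inverted so the zone only counts as safe when every asteroid is outside the radius; the loop above marks it safe as soon as it finds one far-away asteroid and then stops. The names and the 200-pixel threshold are taken from the snippet above:

        // Sketch: assume the ship respawns at the window centre. Start from
        // "safe" and flip to "unsafe" as soon as any asteroid is too close.
        bool safe = true;
        Vec2f respawn( getWindowWidth()/2, getWindowHeight()/2 );

        for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
             itr_astroid != asteroids.end(); ++itr_astroid)
        {
            if (itr_astroid->getCenter().distance(respawn) < 200.0f)
            {
                safe = false;   // one nearby asteroid makes the whole zone unsafe
                break;
            }
        }
        currentShip.m_saveField = safe;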

    Read the article

  • Recent update killed unity 3d launcher

    - by Steve
    I am scratching my head on this one, a lot of things are still new to me. I updated 126 packages just now through the update manager, and upon reboot everything works fine except the unity launcher. It's just a dark space. The dash still works, as does the top panel and docky. When I try:

        unity --replace

    I end up with this and then an indefinite hang:

        (compiz:3689): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        WARN 2012-09-23 02:18:29 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubiquity-gtkui.desktop'
        WARN 2012-09-23 02:18:30 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubuntuone-installer.desktop'
        ERROR 2012-09-23 02:18:30 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-writer.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-calc.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-impress.desktop' is using a deprecated format for its actions that will be dropped soon.
        Setting Update "main_menu_key"
        Setting Update "run_key"

    Unfortunately I cannot make heads or tails of this. Anyone, please help?

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    A quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side. The code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally we want to do this processing on the server side of the app; for example, what if a user supplies a URL but has JS disabled, or a browser that isn't standards compliant, etc.? Ideally I'd like to be able to process these URLs in Ruby on the back end (in asynchronous background jobs, perhaps) using the same logic that our JS parsers use WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend, like execjs, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business-logic duplication for apps that need to do both client-side and server-side processing of similar data?

    Read the article

  • Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support.  Without these patches, customers risk not being able to apply any fixes for issues they encounter as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development.  Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP generally is a pre-requisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind: Compliance: Manage the people in your organization within the requirements of a specific country. Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you. Reliability: Reliable code with multiple customer downloads and comprehensive testing. For the listing of Mandatory Rollup Patches for Oracle Payroll please view: Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll. For the listing of Mandatory Patches for the HRMS Suite please view: Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List. For information on the latest Rollup Patches (RUPs) for the HRMS Suite please view: Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

    Read the article

  • Checking preconditions or not

    - by Robert Dailey
    I've been wanting to find a solid answer to the question of whether or not to have runtime checks that validate input, for the purposes of ensuring a client has stuck to their end of the agreement in design by contract. For example, consider a simple class constructor:

        class Foo
        {
        public:
            Foo( BarHandle bar )
            {
                FooHandle handle = GetFooHandle( bar );
                if( handle == NULL )
                {
                    throw std::runtime_error( "invalid FooHandle" );
                }
            }
        };

    I would argue in this case that a user should not attempt to construct a Foo without a valid BarHandle. It doesn't seem right to verify that bar is valid inside Foo's constructor. If I simply document that Foo's constructor requires a valid BarHandle, isn't that enough? Is this a proper way to enforce my precondition in design by contract? So far, everything I've read has mixed opinions on this. It seems like 50% of people would say to verify that bar is valid, and the other 50% would say that I shouldn't do it; for example, consider a case where the user verifies their BarHandle is correct, but a second (and unnecessary) check is also being done inside Foo's constructor.
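    One possible middle ground, sketched here rather than prescribed by the question, is to keep the documented precondition but check it only in debug builds with assert, so callers who honour the contract pay nothing in release builds while violations are still caught during development:

        #include <cassert>

        class Foo
        {
        public:
            explicit Foo( BarHandle bar )
            {
                FooHandle handle = GetFooHandle( bar );
                // Contract: callers must supply a valid BarHandle.
                // Checked in debug builds only; compiled out when NDEBUG is defined.
                assert( handle != NULL && "Foo requires a valid BarHandle" );
                // ... use handle ...
            }
        };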

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
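    To make the "hold the service level constant, vary the resources" idea concrete, here is a toy sketch of that feedback loop. measure_response_time(), cpu_wait_fraction() and set_cpu_shares() are purely hypothetical stand-ins for the DTrace probes and Solaris resource controls described above, not the patented implementation:

        #include <chrono>
        #include <string>
        #include <thread>

        // Hypothetical probes and controls -- stand-ins only.
        double measure_response_time( const std::string& workload );   // e.g. derived from DTrace transaction probes
        double cpu_wait_fraction( const std::string& workload );       // share of latency spent waiting on CPU
        void   set_cpu_shares( const std::string& workload, int shares );

        // Hold the objective constant; vary the resource allocation.
        void manage_workload( const std::string& workload, double objective_ms )
        {
            int shares = 100;
            for (;;) {
                double observed  = measure_response_time( workload );
                bool   cpu_bound = cpu_wait_fraction( workload ) > 0.5;

                if (observed > objective_ms && cpu_bound)
                    shares += 10;                        // relieve the actual bottleneck
                else if (observed < 0.8 * objective_ms && shares > 10)
                    shares -= 10;                        // hand back cycles that aren't needed

                set_cpu_shares( workload, shares );
                std::this_thread::sleep_for( std::chrono::seconds( 10 ) );
            }
        }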

    Read the article

  • What Counts for a DBA: Humility

    - by drsql
    In football (the American sort, naturally) there is a select group of players who really hope never to have their names called during the game. They are members of the offensive line, and their job is to protect other players so they can deliver the ball to the goal to score points. When you do hear their name called, it is usually because they made a mistake and the player they were supposed to protect ended up flat on his back admiring the clouds in the sky instead of advancing towards the goal to score points. Even on the rare occasion their name is called for a good reason, it is usually because they were making up for a teammate who had made a mistake and they covered for him. The role of offensive lineman is a very good analogy for the role of the admin DBA. As a DBA, you are called on to be barely visible and rarely heard, protecting the company's data assets tenaciously, even though the enemies of our craft surround us on all sides:

    Developers: Cries of 'foul!' often ensue when the DBA says that they want data integrity to be stringently enforced and that documentation is needed so they can support systems, mostly because every error occurrence in the enterprise will be initially blamed on the database and fall to the DBA to troubleshoot. Insisting too loudly may bring those cries of 'foul' that somewhat remind you of when your 2-year-old daughter didn't want to go to bed. The result of this petulance is that the next "enemy" gets involved.

    Managers: The concerns that motivate DBAs to argue will not excite the kind of manager who gets his technical knowledge from a glossy magazine filled with buzzwords, charts, and pretty pictures. However, the other programmers in the organization will constantly tickle the buzzword void with a stream of new-sounding ideas and technologies, along with warnings that if we did care about data integrity and documented things, the budget would explode! In contrast, the arguments for integrity of data and supportability tend to be about as exciting as watching grass grow, and far too many manager types seem to prefer to smoke it rather than watch it.

    Packaged Applications: The DBA is rarely given a chance to review a new application that is being demonstrated for the enterprise, and rarer still is the DBA who gets a veto of an application because the database it uses has clearly been created by an architect that won't read a data modeling book because he is already married. More often than not this leads to hours of work for the DBA trying to performance-tune a database with a menagerie of rules that must be followed to stay within the application support agreement, such as no changing indexes on a third-party schema even though there are 10 billion rows instead of the 10 thousand when the system was last optimized.

    Hardware Failures: Physical disks, networking devices, memory, and backup devices all come with a measure known as 'mean time before failure', and it is never listed in centuries or eons. More like years, and the term 'mean' indicates that half of the devices are expected to fail before that, which by my calendar means any hour of any day that it wants to fail, it will.

    But the DBA sucks it up and does the task at hand with a humility that makes them nearly invisible to all but the most observant person in the organization. The best DBAs I know are so proactive in their relentless pursuit of perfection that they detect many of the bugs (which they seldom caused) in the system well before they become a problem.

    In the end the DBA gets noticed for one of the same two reasons as the offensive lineman: they make a mistake, like dropping a critical production database that had never been backed up; or a system crashes for any reason whatsoever and they are on the spot with troubleshooting and system restoration plans that have been well thought out, tested, and tested again. Not because there is any glory in it, but because it is what they do.

    Note: The characteristics of the professions referred to in this blog are meant to be overstated stereotypes for humorous effect, and even some DBAs aren't quite this perfect. If you are reading this far and haven't hand-written a 10-page flaming comment about how you are a _______ and you aren't like this, that is awesome. Not every situation applies to everyone, but if you have never worked with a bad packaged app, a magazine-trained manager, programmers that aren't team players, or hardware that occasionally failed, relax and go have a unicorn sandwich before you wake up.

    Read the article

  • Cleaning Up Online Games with Positive Enforcement

    - by Jason Fitzpatrick
    Anyone who has played online multiplayer games, especially those focused on combat, can attest to how caustic other players can be. League of Legends creators are fighting that, rather successfully, with a positive-reinforcement honor system. The Mary Sue reports: Here’s the background: Six months ago, Riot established Team Player Behavior — affectionately called Team PB&J — a group of experts in psychology, neuroscience, and statistics (already, I am impressed). At the helm is Jeffrey Lin, better known as Dr. Lyte, Riot’s lead designer of social systems. As quoted in a recent article at Polygon: We want to show other companies and other games that it is possible to tackle player behavior, and with certain systems and game design tools, we can shape players to be more positive. Which brings us to the Honor system. Honor is a way for players to reward each other for good behavior. This is divvied up into four categories: Friendly, Helpful, Teamwork, and Honorable Opponent. At the end of a match, players can hand out points to those they deem worthy. These points are reflected on players’ profiles, but do not result in any in-game bonuses or rewards (though this may change in the future). All Honor does is show that you played nicely.

    Read the article

  • Customer Experience in the Year Ahead

    - by Christina McKeon
    With 2012 coming to an end soon, we find ourselves reflecting on the year behind us and the year ahead. Now is a good time for reflection on your customer experience initiatives to see how far you have come and where you need to go. Looking back on your customer experience efforts this year, were you able to accomplish the following?

    - Customer journey mapping
    - Align processes across the entire customer lifecycle (buying and owning)
    - Connect all functional areas to the same customer data
    - Deliver consistent and personal experiences across all customer touchpoints
    - Make it easy and rewarding to be your customer
    - Hire and develop talent that drives better customer experiences
    - Tie key performance indicators (KPIs) to each of your customer experience objectives

    This is by no means a complete checklist for your customer experience strategy, but it does help you determine if you have moved in the right direction for delivering great customer experiences. If you are just getting started with customer experience planning, or were not able to get to everything on your list this year, consider focusing on customer journey mapping in 2013. This exercise really helps your organization put your customer in the center and understand how everything you do affects that customer. At Oracle, we see organizations in various stages of customer experience maturity all learn a lot when they go through journey mapping. Companies just starting out with customer experience get a complete understanding of what it is like to be a customer and how everything they do affects that customer. And organizations that are further along with customer experience often find journey mapping helps provide perspective when revisiting their customer experience strategy. Happy holidays and best wishes for delivering great customer journeys in 2013!

    Read the article

  • The MSc gray zone: How to deal with the "too inexperienced for engineering / too under-qualified for research" situation?

    - by Hunter2
    Last year I got an MSc degree in CS. At the beginning of the MSc course, I was keen on moving on with research and going for a PhD. However, as the months passed, I started to feel the urge to write software that people would, well, actually use. The programming bug had bitten me, again. So I decided that before deciding on getting a PhD degree, I would spend some time in the "real world", working as a software developer. Sadly, most companies here in Brazil are "services" companies that seem to be stuck in the 80s when it comes to software development. I have to fend off pushy managers, less-than-competent coworkers and outrageous software requirements (why does everyone seem to need a 50k Oracle license and a behemoth WebSphere AS for their CRUD applications?) on a daily basis, and even though I still love software development, the situation is starting to touch a nerve. And, mind you, I'm already lucky to have a job at a place that isn't a plain software sweatshop. Sure, there are better places around here, or I could always try my luck abroad, but then I hit the proverbial brick wall:

    Sorry, you're too inexperienced as a developer and too under-qualified as a researcher

    I've already heard this, and variations of it, multiple times. Research position recruiters look for die-hard, publication-ridden, rockstar PhDs, while development position recruiters look for die-hard, experience-ridden, rockstar programmers. To most, my MSc degree seems like a minor bump on my CV (and an outright waste of time to some). Applying for positions abroad is even harder, since the employer would have to deal with the hassle of a visa process, which I understand is sometimes too much. Now I feel I've reached a dead end. I'm certain that development (and not research) is my thing, so should I just dismiss my MSc (or play it as a "trump card") and play the "big fish in a small pond" role while I gather some experience and contribute to some open-source projects as a plus? Is there a better way to handle this?

    Read the article

  • Boot ubuntu 12.04 in 3D - nomodeset quiet splash install

    - by rahi
    I would like to enable 3D in Ubuntu 12.04. I recently tried to install Ubuntu on a new computer. When the installation was complete and I rebooted the machine, I could only see a blank screen. After some searching, I followed this tutorial, which instructed me to boot with "nomodeset" enabled. I chose this on the USB I was installing Ubuntu 12.04 from. Fortunately, the Ubuntu installation on the new computer was successful. When I tried to change the size of the Unity launcher icons, I did not see that option (as I do on my other computer running Ubuntu 12.04). I installed MyUnity and it told me that the computer I had just installed 12.04 on was running in 2D. To my knowledge, all the software is up to date (I ran the Software Updater). In addition, when I look for Additional Drivers, I see a message that says "No proprietary drivers are in use on this system". Under System Details > Graphics, I see the driver listed as "VESA: Intel Sandybridge/Ivybridge Graphics". When I hold the Shift key as my machine boots and press "e" at the GRUB menu, I see the following towards the end: "nomodeset quiet splash $vt_handoff". Does this have anything to do with the plain 2D Ubuntu 12.04 experience? Again, what I'd like to do now is get the 3D experience on my new machine running 12.04. Please let me know if you need more information.
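    If the leftover nomodeset really is what is forcing the VESA fallback, one common approach (a sketch, not guaranteed for this particular hardware) is to remove it from the kernel command line in /etc/default/grub and regenerate the GRUB configuration:

        # /etc/default/grub -- drop nomodeset so kernel mode setting
        # (and with it the Intel 3D driver) can load at boot
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

        # then rebuild the GRUB config and reboot
        sudo update-grub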

    Read the article

  • Why do programmers write closed source applications and then make them free? [closed]

    - by Ken
    As an entrepreneur/programmer who makes a good living from writing and selling software, I'm dumbfounded as to why developers write applications and then put them up on the Internet for free. You've found yourself in one of the most lucrative fields in the world. A business with 99% profit margin, where you have no physical product but can name your price; a business where you can ship a buggy product and the customer will still buy it. Occasionally some of our software will get a free competitor, and I think, this guy is crazy. He could be making a good living off of this but instead chose to make it free. Do you not like giant piles of money? Are you not confident that people would pay for it? Are you afraid of having to support it? It's bad for the business of programming because now customers expect to be able to find a free solution to every problem. (I see tweets like "is there any good FREE software for XYZ? or do I need to pay $20 for that".) It's also bad for customers because the free solutions eventually break (because of a new OS or what have you) and since it's free, the developer has no reason to fix it. Customers end up with free but stale software that no longer works and never gets updated. Customer cries. Developer still working day job cries in their cubicle. What gives? PS: I'm not looking to start an open-source/software should be free kind of debate. I'm talking about when developers make a closed source application and make it free.

    Read the article

  • Best practices for App Idea ownership and shares

    - by JOG
    I am developing apps in my spare time. I am the sole developer, and two non-programmer friends of mine provide vision, content, algorithms and ideas. We always agree happily on all the features, todos and prioritizations. But naturally, coding it is the biggest part. When selling, we agree on splitting the profit equally, that is 33% each. But version 1.0 naturally does not sell much, and I go on to try to make the app more viral. This includes tons of stuff where the others are of little help. Examples: adding support for sharing, Facebook Connect, gamifying, letting users add content, the home page, support, maintenance, server services to make it easier for them to update content. The list is long. Suddenly I will be doing 100% of a lot of work but only "own" a third of the income. My friends may either "fade out" of the project after 1.0, or continue to contribute, but with less value, and I would rather exchange them for more programmers or graphic designers. The effort they made up to version 1.0 is worth a lot to the app, and I realize I would never have done it without them. But I am doing all the work in the end. It is hard to negotiate about splitting 90/5/5 instead of 33% each, because the idea is still theirs. How do I solve this? What are the best practices regarding ownership of the app? What kind of agreements could I make that would keep it beneficial and motivational for me to continue developing the app?

    Read the article

  • AdvanceTimePolicy and Point Event Streams In StreamInsight.

    There are a number of ways to issue CTIs (Current Time Increments) into your StreamInsight streams, but a quite useful way is to do it declaratively on your source factory, like this:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(InputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    This will issue a CTI after every event and allows no delay (for delayed events) by stamping the CTI with the timestamp of the last event minus 1 tick. The very last statement, "AdvanceTimePolicy.Adjust", tells the adapter what to do with events that violate the policy (arrive late). From BOL: "Events that violate the inserted CTI are moved in time if their lifetime overlaps with the CTI timestamp. That is, the start timestamp of the events is set to the most recent CTI timestamp, which renders those events valid. If both start and end time of an event fall before the CTI timestamp, then the event is dropped." This means that if you are using this method of inserting CTIs for a point event stream and have specified "AdvanceTimePolicy.Adjust" for the violation policy, this setting will be ignored and "AdvanceTimePolicy.Drop" will be used instead, because a point event can never straddle a CTI.

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say "If a man tells you that his car squirts milk in his eye when you lift the hood, don't bet against that. You'll end up with milk in your eye." My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant "bet against the impossible" the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn't exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said "well, must be something else." She just couldn't admit she was wrong. So don't go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend who recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn't change the connection string information, and that's provided by the target system, so this is impossible. But I won't tell him that. Not until I look a little more. :)

    Read the article

  • Game Review: Monument Valley

    Once again, it was a tweet that caught my attention... and the official description on the Play Store sounds good, too: "In Monument Valley you will manipulate impossible architecture and guide a silent princess through a stunningly beautiful world. Monument Valley is a surreal exploration through fantastical architecture and impossible geometry. Guide the silent princess Ida through mysterious monuments, uncovering hidden paths, unfolding optical illusions and outsmarting the enigmatic Crow People." So, let's check it out.

    What an interesting puzzle game

    Once again, I left a review on the Play Store: "Beautiful but short distraction. Woohoo, what a great story behind the game. Using optical illusions and impossible geometries in this fantastic adventure of the silent princess just puts all the pieces perfectly together. Walking the amazing paths in the various levels and solving the riddles gives some decent hours of distraction, but in the end you might have the urge to do more..." I can't remember exactly when, or who, tweeted about the game, but honestly it caught my attention thanks to the simplicity of the design and the fact that it appears to use an isometric style. The game relies heavily on optical illusions in order to guide the silent princess Ida through her illusory adventure of impossible architecture and forgiveness. The game is set up like clockwork, and you are turning, flipping and switching elements on the paths between the doors. Unfortunately, there aren't many levels and the gameplay lasted only a few hours. Maybe there will be more astonishing-looking realms and interesting gimmicks in future versions.

    Play Store: Monument Valley

    Also, check out the latest game updates on the official web site of ustwo. BTW, the game is also available on the Apple App Store and on the Amazon store for the Kindle Fire.

    Read the article

  • Languages on a resume: Is it better to put "C/C++" or "C, C++"?

    - by Kevin
    I'm graduating in a couple of weeks, and my resume (as expected) lists the languages that I've had experience with. Previously I've put "C/C++"; however, back then I didn't have as much experience with these two languages as I do now. Now that I've formally learned both, it has become evident to me (and anyone who really knows these languages) that they are similar, and completely dissimilar, at the same time. Sure, most C code is compilable C++ code, but syntax and the incorporation of library functions are pretty much where the similarities end. In most non-trivial problems, chances are that the desirable C++ solution will be different from the desirable C solution. My question: Will recruiters take note or care about whether you put "C/C++" as opposed to "C, C++"? Will they assume a lack of knowledge of the workings of either because of the inclusion of the first form, or perhaps see the inclusion of the second form as a potential "resume beefer" (listing them as two languages instead of one)? Furthermore, for jobs that you've applied to that were particularly interested in these two languages, did the interview process include questions about the differences between C programming and C++ programming (so, about actual programming techniques, not only the extra paradigms in the latter)?

    Read the article

  • Architecture a for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main ways: having each object render itself, and having a single renderer that renders everything. I'm currently aiming for the second idea, for the following reasons:

    - The list can be sorted so each shader is only bound once. Otherwise each object would have to bind its shader, because it can't be sure the right one is active.
    - The objects can be sorted and grouped.
    - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: The renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up as a lot of extra pointer weight. But I can sort the list of pointers every so often.

    Second idea: The entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and gets what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT

    Anyone got ideas? Thank you.
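    For comparison, here is a bare-bones sketch of the first idea, with RenderComponent and Renderer as hypothetical types: components register a raw pointer with the renderer, and the renderer sorts by shader ID before drawing so each shader only needs to be bound once per batch. A sketch only, not a recommendation from the question:

        #include <algorithm>
        #include <vector>

        // Hypothetical component interface; a real engine would carry mesh/material data.
        class RenderComponent
        {
        public:
            virtual ~RenderComponent() {}
            virtual int  shaderId() const = 0;
            virtual void draw() const = 0;
        };

        class Renderer
        {
        public:
            // Components call these from their constructor/destructor.
            void add(RenderComponent* c)    { m_components.push_back(c); }
            void remove(RenderComponent* c)
            {
                m_components.erase(
                    std::remove(m_components.begin(), m_components.end(), c),
                    m_components.end());
            }

            void render()
            {
                // Sort by shader so state changes are grouped together.
                std::sort(m_components.begin(), m_components.end(),
                          [](const RenderComponent* a, const RenderComponent* b)
                          { return a->shaderId() < b->shaderId(); });

                for (const RenderComponent* c : m_components)
                    c->draw();
            }

        private:
            std::vector<RenderComponent*> m_components;
        };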

    Read the article

  • Can I install a new version of Ubuntu in a spare RAIDed partition with unetbootin?

    - by artfulrobot
    I have Ubuntu 11.04 running on my home desktop, which has two hard drives mirrored by RAID. The drives are partitioned with a big data partition, a swap partition and a couple of 20 GB partitions for OSes; one holds 11.04, which is in use, and the other is kept spare for installing a later version. Which is what I'd like to do now. The idea of a second partition for the new OS is that I can try it, and if it's problematic, I can boot back into the original one - the machine is shared with others, so I need it to stay available! I have had horrible problems with software RAID after using a live USB stick - basically it messes up the internal numbering of the RAID drives or something; anyway, the result is you can't boot after using it :-( and have to spend ages re-assembling the arrays, trying to remember grub commands, etc. Quite a shocker when you consider that booting from a live USB is supposed not to affect the existing system. As I'm installing onto a RAIDed disc, I would typically use the alternate install (sad to hear that this is going to be dropped in future). However, I think I might be able to use unetbootin to trick the system into installing on top of the existing system, which already understands RAID, with the normal ISO? If unetbootin loads from drives that are already understood to be RAIDed, then presumably it will only see md0... instead of sda, sdb... and as long as I don't need to repartition (I don't) it should be fine, right? Or is that just plain foolishness? Please tell me before I end up with a dead system (again!)

    Read the article

  • Data Binding to Attached Properties

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/14/data-binding-to-attached-properties.aspx

    When I was working on my C#/XAML game framework, I discovered I wanted to try to data bind my sprites to background objects. That way, I could update my objects and the draw functionality would take care of the work for me. After a little experimenting and web searching, it appeared this concept was an impossible dream. Of course, when has that ever stopped me? In my typical way, I started to massively dive down the rabbit hole. I created a sprite on a canvas, and I bound it to a background object.

        <Canvas Name="GameField" Background="Black">
            <Image Name="PlayerSprite" Source="Assets/Ship.png" Width="50" Height="50"
                   Canvas.Left="{Binding X}" Canvas.Top="{Binding Y}"/>
        </Canvas>

    Now, we wire the UI item to the background item.

        public MainPage() {
            this.InitializeComponent();
            this.Loaded += StartGame;
        }

        void StartGame( object sender, RoutedEventArgs e ) {
            BindingPlayer _Player = new BindingPlayer();
            _Player.Y = Window.Current.Bounds.Height - PlayerSprite.Height;
            _Player.X = ( Window.Current.Bounds.Width - PlayerSprite.Width ) / 2.0;
        }

    Of course, now we need to actually have our background object.

        public class BindingPlayer : INotifyPropertyChanged {
            private double m_X;
            public double X {
                get { return m_X; }
                set { m_X = value; NotifyPropertyChanged(); }
            }

            private double m_Y;
            public double Y {
                get { return m_Y; }
                set { m_Y = value; NotifyPropertyChanged(); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
                if( PropertyChanged != null )
                    PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
            }
        }

    I fired this baby up, and my sprite was correctly positioned on the screen. Maybe the sky wasn't falling after all. Wouldn't it be great if that was the case? I created some code to allow me to move the sprite, but nothing happened. This seemed odd. So I started debugging the application and stepping through code. Everything appeared to be working. Time to dig a little deeper. After much profanity was spewed, I stumbled upon a breakthrough: the code only looked like it was working. What was really happening is that there was an exception being thrown in the background thread that I never saw. Apparently, the key call was the one to PropertyChanged. If PropertyChanged is not called on the UI thread, the UI thread ignores the call. Actually, it throws an exception and the background thread silently crashes. Of course, you'll never see this unless you're looking REALLY carefully. This seemed to be a simple problem: I just need to marshal this to the UI thread. Unfortunately, this object has no knowledge of this mythical UI thread of which we speak. So, I had to pull the UI thread out of thin air. Let's change our PropertyChanged call to look like this.

        public event PropertyChangedEventHandler PropertyChanged;
        protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
            if( PropertyChanged != null )
                Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
                    Windows.UI.Core.CoreDispatcherPriority.Normal,
                    new Windows.UI.Core.DispatchedHandler( () => {
                        PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
                    } ) );
        }

    Now we raise our notification on the UI thread. Everything is fine, people are happy, and the world moves on. You may have noticed that I didn't await my call to the dispatcher.
    This was intentional. If I am trying to update a slew of sprites, I don't want the thread hung while I wait my turn. Thus, I send the message and move on. It is worth noting that this is NOT the most efficient way to do this for game programming. We'll get to that in another blog post. However, it is perfectly acceptable for a business app that is running a background task and would like to notify the UI thread of progress on a periodic basis. It is worth noting that this code was written for a Windows Store app. You can do the same thing with WP8 and WPF. The call to the marshaler changes, but it is the same idea.
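    For reference, here is a sketch of what that same notification might look like in plain WPF, where the marshalling goes through Application.Current.Dispatcher rather than the CoreWindow dispatcher. Same pattern, different dispatcher; this variant is not from the original post:

        protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
            if( PropertyChanged != null )
                // WPF: queue the notification on the UI thread without blocking the caller.
                System.Windows.Application.Current.Dispatcher.BeginInvoke(
                    new Action( () => {
                        PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
                    } ) );
        }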

    Read the article
