Search Results

Search found 6950 results on 278 pages for 'moving average'.

Page 123/278 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • How to explain my 5 burnt-out years off to a new employer?

    - by user17332
    Five years ago, I lost my ability to concentrate long-term, and with it the ability to code with professional efficiency. I know why it happened, I understand how it happened, and on top of being able to re-create my calm and thus relaxed focus, I overcame the original (rooted in childhood) reason why my mind tilted on the overall situation back then; my understanding isn't rooted in words that a psychologist told me, I actually grokked it first-hand. I'm pretty confident I can churn out productivity again, possibly even more so than pre-burnout. I also never lost my interest in code, nor did I stray from trying to get my abilities back; I kept my knowledge up to date (I could always relatively painlessly learn things coding-related, just not apply them) and thus can say that I'm a better developer than before, even if my average LOC-count over those years is abysmally low. On the other hand, I now have a work history that includes more time on the dole than in a job. What would convince you, as an employer, to give my application a chance? I don't believe I should just keep the whole topic out of it.

    Read the article

  • "Give with Bing" - Help raise money for Sports relief while searching for whatever you want

    - by Testas
    While Sport Relief drives fundraising by challenging people to do physical activities such as running a mile, we're introducing the 'Bing Search Mile', which gives people the ability to search using Bing and raise money for charity. For every 10 searches made, Bing.com will donate 5p to Sport Relief 2010, enabling you, and your friends and family, to raise money just by searching with Bing until the end of March. With the average mile taking about 10 minutes to run, in the same time you can make up to 150 searches online - that's 75p raised for a good cause per 'search mile'. And while you're at it, why not step it up a gear and aim to complete a 'Search Mile' each day, or even a 'Search Marathon' over the 5-week campaign with your colleagues, friends and family? How to get involved:

    1. Visit GiveWithBing.com and download the Official Sport Relief Bing Counter. Once downloaded, the Sport Relief counter will count all the searches you do on Bing from that point on.
    2. Now that you're registered (and signed in), invite your friends, family, colleagues or classmates to join in the fundraising with you - GiveWithBing.com automatically generates an email explaining how it works for you to send them - the more people who search with you, the more money you raise. People can also register a school.
    3. Run your 'search mile' every day and watch how your searches turn into life-changing cash for charity, with every 10 searches equalling 5p for Sport Relief. You can check your progress by visiting your individual page (more info here).

    This is such a positive initiative and I challenge everyone in the UK to invite their key contacts to be part of Give with Bing. Chris

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard By Nitesh Jain

    - by JuergenKress
    Oracle SOA Suite for healthcare integration has come up with a new way of monitoring: users can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals. A dashboard shows the following information:

    - Status: the current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: the number of messages sent by the endpoint in the specified time period.
    - Messages Received: the number of messages received by the endpoint in the specified time period.
    - Errors: the number of messages with errors for the endpoint in the given time period.
    - Last Sent: the date and time the last message was sent from the endpoint.
    - Last Received: the date and time the last message was received by the endpoint.
    - Last Error: the date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint: the document type, the number of messages received per second, the total number of messages processed in the specified time period, and the average size of each message. For more information please visit Nitesh Jain's blog. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Can I save & store a user's submission in a way that proves that the data has not been altered, and that the timestamp is accurate?

    - by jt0dd
    There are many situations where the validity of the timestamp attached to a certain post (submission of information) might be invaluable for the post owner's legal usage. I'm not looking for a service to achieve this, as requested in this great question, but rather the method behind such a service. For the legal (in most any law system) authentication of text content and its submission time, the owner of the content would need to prove that the timestamp itself has not been altered and was accurate to begin with, and that the text content linked to the timestamp has not been altered. I'd like to know how to achieve this via programming (not a language-specific solution, but rather the methodology behind the solution). Can a timestamp be validated as accurate to the time that the content was really submitted? Can data be stored in a form that can be read, but not written to, in a proven way? In other words, can I save & store a user's submission in a way that proves that the data has not been altered, and that the timestamp is accurate? I can't think of any programming method that would make this possible, but I am not the most experienced programmer out there. Based on MidnightLightning's answer to the question I cited, this sort of thing is being done. Clarification: I'm looking for a method (hashing, encryption, etc.) that would allow an average guy like me to achieve the desired effect through programming. I'm interested in this subject for the purpose of Defensive Publication. I'd like to learn a method that allows an everyday programmer to pick up his computer, write a program, pass information through it, and say: I created this text at this moment in time, and I can prove it. This means the information should be protected from the programmer who writes the code as well. Perhaps a 3rd party API would be required. I'm ok with that.
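    A minimal sketch of the content half in Java, assuming the goal above: a cryptographic digest commits to the exact bytes, so any later edit is detectable. Fixing the time still requires a party the verifier trusts (for example an RFC 3161 timestamp authority), which is why a 3rd-party step is hard to avoid; the submission string here is a stand-in.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class Fingerprint {
            public static void main(String[] args) throws Exception {
                String submission = "I created this text at this moment in time.";

                // SHA-256 commits to the exact bytes: any later edit changes the digest.
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                byte[] digest = sha256.digest(submission.getBytes(StandardCharsets.UTF_8));

                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b & 0xff));

                // Hand only this digest to a trusted timestamping authority; it signs
                // (digest + current time), and the text itself never leaves your machine.
                System.out.println("Digest to timestamp: " + hex);
            }
        }

    Verification later means re-hashing the stored text and checking that the result matches the digest inside the authority's signed token.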

    Read the article

  • The Cloud is STILL too slow!

    - by harry.foxwell(at)oracle.com
    If you've been in the computing industry long enough to remember dialup modems and other "ancient" technologies, you might be tempted to marvel at today's wonderfully powerful multicore PCs, ginormous disks, and blazingly fast networks. Wow, you're in Internet Nirvana, right? Well, no, not by a long shot. Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow. Already we are seeing cloud-enabled consumer devices that are stressing even the most advanced public network services: the iPad and its competitors, ever more powerful smartphones, and an imminent horde of special-purpose gadgets such as the proposed "cloud camera" (see http://gdgt.com/discuss/it-time-cloud-camera-found-out-cnr/). And at the same time that the number and type of cloud services are growing, user tolerance for even the slightest of download delays is rapidly decreasing. Ten years ago Web developers followed the "8-Second Rule" (the average time a typical Web user would tolerate for a page to download and render). Not anymore; now it's less than 3 seconds, and only a bit longer for mobile devices (see http://www.technologyreview.com/files/54902/GoogleSpeed_charts.pdf). How spoiled we've become! Google, among others, recognizes this problem and is working to encourage the development of a faster Web (see http://www.technologyreview.com/web/32338/). They, along with their competitors and ISPs, will have to encourage and support significantly better Web performance in order to provide the types of services envisioned for the Cloud. How will they do this? Through the development of faster components, better use of caching technologies, and the really tough one - exploiting parallelism. Not that parallel technologies like multicore processors are hard to build... we already have them. It's just that we're not that good yet at using them effectively. And if we don't get better, users will abandon cloud-based services... in less than 3 seconds.

    Read the article

  • Announcing Monthly Silverlight Giveaways on MichaelCrump.Net

    - by mbcrump
    I've been working with different companies to give away a set of Silverlight controls to my blog readers for the past few months. I've decided to set up this page and a friendly URL for everyone to keep track of my monthly giveaway. The URL to bookmark is: http://giveaways.michaelcrump.net. My main goals for giving away the controls are:

    - I love the Silverlight community, and giving away nice controls always sparks interest in Silverlight.
    - To spread the word about a company and let users decide if it meets their particular situation.
    - I am a "control" junkie; it helps me solve my own business problems since I know what is out there.
    - To provide some additional hits for my blog.

    Below is a grid that I will update monthly with the current giveaway. Feel free to bookmark this post and visit it monthly.

    - October 2010 | Telerik | Silverlight Controls - RadControls | http://michaelcrump.net/archive/2010/10/15/win-telerik-radcontrols-for-silverlight-799-value.aspx
    - November 2010 | Mindscape | Mindscape Mega-Pack | http://michaelcrump.net/archive/2010/11/11/mindscape-silverlight-controls--free-mega-pack-contest.aspx
    - December 2010 | Infragistics | Silverlight Controls + Silverlight Data Visualization | http://michaelcrump.net/mbcrump/archive/2010/12/15/win-a-set-of-infragistics-silverlight-controls-with-data-visualization.aspx
    - January 2011 | Cellbi | Cellbi Silverlight Controls | http://michaelcrump.net/mbcrump/archive/2011/01/05/cellbi-silverlight-controls-giveaway-5-license-to-give-away.aspx
    - February 2011 | *BOOKED* | |

    To third-party Silverlight companies: if you create any type of Silverlight control/application and would like to feature it on my blog then you may contact me at michael[at]michaelcrump[dot]net. Giving away controls has proven to be beneficial for both parties, as I have around 4k Twitter followers and average around 1000 page views a day. Contact me today and give back to the Silverlight community.

    Read the article

  • Oracle SOA Suite customer panel: Successful Application Integration & SOA Projects

    - by Simone Geib
    At the recent SOA Suite customer panel, Roger Brown from UNS Energy, Fabio Ravagni from Cencosud and Paras Jain from Cisco discussed their recent SOA Suite implementations, business drivers and challenges, architecture and lessons learned. Roger started by describing how UNS redesigned their internet portal to improve their customer experience and reduce manual steps in their business processes. Through the use of Oracle Service Bus, Oracle BPEL Process Manager and Oracle Business Activity Monitoring, they provided more self-service functionality, automated their business processes and increased the use of their web site by 12.98% in number of visits and 33.58% in average visit duration. Next, Fabio described the challenges Cencosud faced through continuous expansion of their business, different standards and levels of expertise, and large volumes of information. By introducing Oracle SOA Suite, Oracle Data Integrator and Oracle Enterprise Repository, and with the help of Oracle Consulting, they significantly simplified their integration model, reduced their maintenance effort and increased their integration governance. The implemented solution so far has more than 400 services in production and more than 20 ongoing projects which will make use of the new integration platform. Last, but not least, Paras discussed the challenges the WebEx division of Cisco faced with a highly manual service fulfillment process, multiple data sources, and the resulting large room for error and delay in customer time-to-service. Through a redesign of their order fulfillment process and the introduction of Oracle SOA Suite, they significantly improved their SLAs, eliminated duplicate orders, provided higher visibility into the order process and aligned business and IT. For more information about Oracle OpenWorld SOA & BPM sessions, please see the Focus on SOA and BPM document.

    Read the article

  • The Strange History of the Honeywell Kitchen Computer

    - by Jason Fitzpatrick
    In 1969 the Honeywell corporation released a $10,000 kitchen computer that weighed 100 pounds, was as big as a table, and required advanced programming skills to use. Shockingly, they failed to sell a single one. Read on to be dumbfounded by how ahead of (and out of touch with) its time the Honeywell Kitchen Computer was. Wired delves into the history of the device, including how difficult it was to use: Now try to imagine all that in a late 1960s kitchen. A full H316 system wouldn't have fit in most kitchens, says design historian Paul Atkinson of Britain's Sheffield Hallam University. Plus, it would have looked entirely out of place. The thought that an average person, like a housewife, could have used it to streamline chores like cooking or bookkeeping was ridiculous, even if she aced the two-week programming course included in the $10,600 price tag. If the lady of the house wanted to build her family's dinner around broccoli, she'd have to code in the green veggie as 0001101000. The kitchen computer would then suggest foods to pair with broccoli from its database by "speaking" its recommendations as a series of flashing lights. Think of a primitive version of KITT, without the sexy voice. Hit up the link below for the full article.

    Read the article

  • Is there a measure of code rot?

    - by DarenW
    I'm dealing, again, with a messy C++ application: tons of classes with confusing names, objects with pointers into each other and all over, longwinded Boost and STL data types, etc. (Pause and consider your favorite terror of messy legacy code. We probably have it.) The phrase "code rot" oft comes to mind when I work on this project. Is there a quantitative way to measure code rot? I wouldn't expect anything highly meaningful or scientific, since no other measure of code productivity or quality is that precise either. I'm not looking for a mere opposite of measures of code quality, but specifically a measure of how many bad things happened after a series of maintenance software "engineers" have had turns hacking at the code. A general measure applying to any language, or many languages, would be great; failing that, one for C++, which is a better-than-average language for creating messes. Maybe something involving a measure of the topology of how objects connect during runtime, a count of chunks of commented-out code, how many files a typical variable's usage is scattered over, I don't know... but surely by now, a decade into the 21st century, someone has attempted to define some sort of rot measure. It would be especially interesting to automate a series of svn checkouts, measure the "rottenosity" of each, and plot the decay over time.
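    One of the signals named above, commented-out code, is easy to count mechanically, so a crude rot curve can be had cheaply. A throwaway sketch, assuming UTF-8 sources (the RotMeter name and the ends-like-a-statement heuristic are mine):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class RotMeter {
            // usage: java RotMeter <source-root>
            public static void main(String[] args) throws IOException {
                List<Path> sources;
                try (Stream<Path> walk = Files.walk(Paths.get(args[0]))) {
                    sources = walk.filter(p -> p.toString().endsWith(".cpp")
                                            || p.toString().endsWith(".h"))
                                  .collect(Collectors.toList());
                }
                long total = 0, suspect = 0;
                for (Path file : sources) {
                    for (String line : Files.readAllLines(file)) {
                        String t = line.trim();
                        total++;
                        // Heuristic: a // comment that ends like a statement is
                        // probably dead code rather than an explanation.
                        if (t.startsWith("//") && (t.endsWith(";") || t.endsWith("{") || t.endsWith("}"))) {
                            suspect++;
                        }
                    }
                }
                System.out.printf("%d of %d lines look like commented-out code (%.1f%%)%n",
                        suspect, total, total == 0 ? 0.0 : 100.0 * suspect / total);
            }
        }

    Running it over each checkout in the svn series gives one number per revision to plot; it ignores naming and pointer spaghetti entirely, but it is automatable.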

    Read the article

  • Compare Your Internet Cost and Speed to Global Averages [Infographic]

    - by ETC
    Internet pricing and speed vary wildly across the world. The US, for instance, currently ranks 15th in speed but enjoys reasonably priced internet access. How reasonably priced? If you're a US citizen you likely have an average internet access speed of 4.8 mbps and you pay a little over $3 per mbps. If you're in Sweden, however, you likely have an 18 mbps connection and you pay a scant 63 cents per mbps. The real envy of the internet speed Olympics by far is Japan, with a mighty 61 mbps at a mere 27 cents per mbps. Hit up the link below for the full infographic (or use this local mirror if you need to dodge a firewall), then sound off in the comments with how you compare on the international scale. Internet Speeds and Costs Around the World [via Daily Infographic]
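    Treating the quoted per-mbps prices as exact (the article rounds; "$3" is read as $3.00 here), the implied monthly bill is just speed times unit price. A back-of-the-envelope check:

        public class BandwidthCost {
            public static void main(String[] args) {
                String[] country = { "US", "Sweden", "Japan" };
                double[] mbps    = { 4.8, 18, 61 };      // average speeds quoted above
                double[] perMbps = { 3.00, 0.63, 0.27 }; // price per mbps in USD
                for (int i = 0; i < country.length; i++) {
                    // monthly cost = speed x price-per-mbps
                    System.out.printf("%s: %.1f mbps x $%.2f = about $%.2f/month%n",
                            country[i], mbps[i], perMbps[i], mbps[i] * perMbps[i]);
                }
            }
        }

    By that arithmetic Japan's bill (about $16.47) is actually the highest of the three in absolute terms; what differs is how much speed it buys.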

    Read the article

  • How much system and business analysis should a programmer be reasonably expected to do?

    - by Rahul
    In most places I have worked, there were no formal systems or business analysts and the programmers were expected to perform both roles. One had to understand all the subsystems and their interdependencies inside out. Further, one was also supposed to have a thorough knowledge of the business logic of the applications and interact directly with the users to gather requirements, answer their queries, etc. In my current job, for example, I spend about 70% of my time doing system analysis and only 30% programming. I consider myself a good programmer but struggle with developing a good understanding of the business rules of a complex application. Often this creates a handicap, because while I can write efficient algorithms and thread-safe code, I lose out to guys who may be average programmers but have a much better understanding of the business processes. So I want to know: how much business and systems knowledge should a programmer have? And how does one go about getting this knowledge in an immensely complex software system (e.g. trading applications) with several interdependent business processes but poorly documented business rules?

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • Have programmers at your work not taken up or been averse to an offer of a second monitor?

    - by Chris Knight
    I'm putting together a business case for the developers in my company to get a second monitor. After my own experiences and research, this seems a no-brainer to me in terms of increasing productivity and morale/happiness. One question which has niggled me is whether I should be pushing to get all developers onto a second monitor or let folk opt in (i.e. they get one if they want one). Thoughts on this are welcome, but my specific question relates to a snippet on this site: "But when the IT manager at Thibeault's company asked other employees if they wanted dual monitors last year, few jumped at the offer." Blinded by my own pre-judgement, this surprised me. Has anyone else experienced this? I fully appreciate that some people prefer a single larger monitor, but my general experience of researching the web suggests that most programmers prefer a dual (or more) setup. I'm guessing this should be tempered with the thought that the developers who contribute to such discussions might not be representative of the average developer, who might not care one way or the other. Anyway, if you have experienced the above, have you tried to sell the concept of dual monitors to the masses? If everyone just got 2 monitors regardless of whether they wanted them or not, were there adverse reactions or negative effects? UPDATE: The developers are on a mixture of 17", 22", or 24" single monitors. The desks should be able to accommodate the dual 22" monitors I am proposing, though this will take some getting used to, I imagine.

    Read the article

  • Managing Social Relationships for the Enterprise – Part 1

    - by kellsey.ruppel
    By Reggie Bradford, Senior Vice President, Oracle

    Today, Mark Hurd, President of Oracle, Thomas Kurian, Executive Vice President of Oracle, and I discussed the strategic importance of how social media is impacting the enterprise and how it is changing the way customers, prospects, employees and investors interact with brands worldwide. Oracle understands that the consumer is in control, and as such, brands must evolve and change to meet growing needs. In addition, according to social media thought leader and Altimeter Group analyst Jeremiah Owyang, companies now average 178 corporate-owned social media accounts. When Oracle added leading social marketing, listening analytics and development tools from Vitrue, Collective Intellect and Involver to its Cloud Services Suite, we went beyond providing a single set of tools. We developed an entire framework that includes a comprehensive social relationship management suite to help companies move beyond the social enterprise and achieve the social-enabled enterprise. The fundamental shift from transaction to engagement means that enterprises need not only a social strategy, but should also ensure that the information and data received from social initiatives flow back to marketing, sales, support and service. Doing so enables companies to deliver a proactive and compelling experience and provides analytics to turn engagement into opportunity - and ultimately that opportunity into revenue. On September 13, 2012, I am delighted to sit down with Jeremiah to further the discussion about how enterprises are addressing social media strategies and managing content. In addition, we will be taking your questions after the webinar via Twitter (@Oracle, @ReggieBradford, @cfinn, @jowyang). Use #oracle and #socbiz to submit questions and follow the conversation. I look forward to speaking with you and answering your questions online. For more information about becoming a social-enabled enterprise, visit www.oracle.com/social. And don't miss the insights of other social business thought leaders at www.oracle.com/goto/socialbusiness.

    Read the article

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from this range of IPs: 157.55.98.0/157.55.100.0. I googled around and came to find out that it is the MSN web crawler. So essentially the MS web crawler overloaded our site, causing it not to respond, even though our robots.txt file contains the following: Crawl-delay: 10. So what I did was just ban the IP range in iptables. But what I am not sure about is how to follow up. I can't find anywhere to contact Bing about this issue, I don't want to keep those IPs blocked because I am sure eventually we will get de-indexed from Bing, and it doesn't really seem like this has happened to anyone else before. Any suggestions? Update - my server / web stats: Our web server is using Nginx, Rails 3, and 5 Unicorn workers. We have 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month and this never comes close to bringing down or slowing down our web server. Taking a look at the logs, we were receiving anywhere from 5 up to 40 requests per second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia and T-Mobile, to monitor their network performance and improve service levels.

    Challenges:
    - Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage and optimize.
    - Enable the implementation of distributed architectures including one database per server while maintaining a low Total Cost of Ownership (TCO).
    - Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.

    Solution:
    - Evaluation of several databases and selection of MySQL based on its high performance, manageability, and low TCO.
    - The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second. Up to 50 million records are inserted every day in each database.
    - Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones. Data is then periodically aggregated, with the original records being overwritten, as the need for detailed information becomes unnecessary to operators after a few weeks.

    The exponential growth in mobile data traffic driven by the proliferation of smartphones and usage of social media requires ever more powerful solutions to monitor, analyze and turn network data into actionable business intelligence. With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers. MySQL-based Polystar solutions enable operators to monitor, manage and improve the service levels of their telecom networks in over a dozen countries from a single location. The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers' needs as they evolve. "MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best." - Mats Söderlindh, COO at Polystar.

    Read the article

  • What are the boundaries between the responsibilities of a web designer and a web developer?

    - by Beofett
    I have been hired to do functional development for several web site redesigns. The company I work for has a relatively low technical level, and the previous development of the web sites was completed by a graphic designer who is self-taught as far as web development is concerned. My responsibilities have extended beyond basic development, as I have also been tasked with creating the development environment and migrating hosting from external CMS hosting to internal servers incorporating scripting languages (I opted for PHP/MySQL). I am working with the graphic designer, and he is responsible for the creative design of the web sites. We are running into a bit of friction over confusion about the boundaries of our respective tasks. For example, we had some differences of opinion on navigation. I was primarily concerned with ease of use (the majority of our userbase are not particularly web-savvy), as well as meeting W3C WAI standards (many of our users are older, and we have a higher than average proportion of users with visual impairment). His sole concern was what looked best for the website, and I felt that the direction he was pushing for caused some functional problems. I feel color choices, images, fonts, etc. are clearly his responsibility, and my expectation was that he would simply provide me with the CSS pages and style classes and IDs to use, but some elements of page layout also seem to fall more under the realm of "usability", which to me is near-synonymous with "functionality". I've been tasked with selecting the tools we'll use, which include frameworks, scripting languages, database design, and some open source applications (Moodle, for example, and quite probably Drupal in the future). While these tools are quite customizable, working directly with some of the interfaces is beyond his familiarity with CSS, HTML, and PHP. This limits how much direct control he has over the appearance, which has led to some discussion about the tool choices. Is there a generally accepted dividing line between the roles of a web designer and a web developer? Does his relatively inexperienced background in web technologies influence that dividing line?

    Read the article

  • How can I create a square 10x10 grid using nested for loops in Java? [migrated]

    - by help
    I'm trying to create a 10x10 grid using for loops in Java. I'm able to create rows going up and down but not repeating.

        for (int i = 1; i < temperatures.length; i++) {
            temperatures[i] = (temperatures[i - 1] + 12) / 2; // takes average of 12 and previous temp
        }

        public void paint(Graphics g) {
            for (int y = 1; y < 9; y++) {
                g.setColor(Color.black);
                g.drawRect(10, 10, 10, 10);
                g.drawRect(10, 10, 10, 20);
                g.drawRect(10, 10, 10, 30);
                g.drawRect(10, 10, 10, 40);
                g.drawRect(10, 10, 10, 50);
                g.drawRect(10, 10, 10, 60);
                g.drawRect(10, 10, 10, 70);
                g.drawRect(10, 10, 10, 80);
                g.drawRect(10, 10, 10, 90);
                g.drawRect(10, 10, 10, 100);
                for (int x = 1; x < 9; x++) {
                    g.setColor(Color.black);
                    g.drawRect(10, 10, 10, 10);
                    g.drawRect(10, 10, 20, 10);
                    g.drawRect(10, 10, 30, 10);
                    g.drawRect(10, 10, 40, 10);
                    g.drawRect(10, 10, 50, 10);
                    g.drawRect(10, 10, 60, 10);
                    g.drawRect(10, 10, 70, 10);
                    g.drawRect(10, 10, 80, 10);
                    g.drawRect(10, 10, 90, 10);
                    g.drawRect(10, 10, 100, 10);
                }
            }
        }
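    For what the title asks, the usual pattern is to derive each cell's position from the loop indices instead of hard-coding coordinates; a minimal sketch (the cell size and 10-pixel margin are my choices):

        public void paint(Graphics g) {
            g.setColor(Color.black);
            int size = 10;                        // width/height of one cell
            for (int row = 0; row < 10; row++) {
                for (int col = 0; col < 10; col++) {
                    // offset each cell by its column/row index
                    g.drawRect(10 + col * size, 10 + row * size, size, size);
                }
            }
        }

    Each index runs 0..9 for ten cells, and the x/y arguments move with col and row, which is what the hand-written drawRect calls above were missing.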

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario. I'm creating a 2-player fighting game. An average battle will include a map (moving/still) and 2 characters (which are rendered by redrawing a varying number of sprites one after the other). Now at the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime; // the time when the cycle began
                long timeDiff;  // the time it took for the cycle to execute
                int sleepTime;  // ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40;

                beginTime = System.nanoTime() / 1000000;
                // execute loop to update, check collisions and draw
                gameLoop();
                // calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;
                // calculate sleep time
                sleepTime = fps - (int) timeDiff;
                if (sleepTime > 0) { // if sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() the characters are drawn to the screen (a character holds an array of images which make up its current sprites). Every gameLoop() call advances the character's current sprite to the next, looping when the end is reached. But as you can imagine, if a sprite animation is only 3 images long, then calling gameLoop() 40 times a second will cause the character's movement cycle to be drawn 40/3 = 13 times a second. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set number of frames per second when I have 2 characters on screen with varying numbers of sprites?
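    One common answer is to stop tying sprite frames to loop iterations: keep the fixed-rate loop, but advance each character's animation on its own clock using the elapsed time per cycle. A sketch under those assumptions (SpriteAnimation and frameDurationMs are my names, not from the poster's code):

        // Each animation advances on its own clock, independent of the loop rate.
        class SpriteAnimation {
            private final java.awt.Image[] frames;
            private final long frameDurationMs; // e.g. 100 ms per sprite image
            private long elapsedMs = 0;
            private int current = 0;

            SpriteAnimation(java.awt.Image[] frames, long frameDurationMs) {
                this.frames = frames;
                this.frameDurationMs = frameDurationMs;
            }

            // Call once per game loop with the real time since the last update.
            void update(long deltaMs) {
                elapsedMs += deltaMs;
                while (elapsedMs >= frameDurationMs) { // catch up if a cycle ran long
                    elapsedMs -= frameDurationMs;
                    current = (current + 1) % frames.length;
                }
            }

            java.awt.Image currentFrame() {
                return frames[current];
            }
        }

    With this, gameLoop() passes the measured cycle time into update() and just draws currentFrame(); a 3-image cycle at 100 ms per image then completes every 300 ms whether the loop runs at 40 fps or anything else, which removes the 40/3 mismatch described above.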

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the start-up RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's memory overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware's memory overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it, or remove these rows under these conditions. So the challenge is different from spec'ing a web app.

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on collision event gives you a Collision object that gives you some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is surface normals for the collider that you hit. Here's a screenshot to illustrate. The red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard 3D realtime collision detection techniques just not collect this information? And for the second part of the question, what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to accomplish this for all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to do some average of all the contact points' normals to get the best result). My current method:

    1. Back up Collision.contacts[0].point along its hit normal.
    2. Raycast down the negated hit normal for float.MaxValue, on Collision.collider.
    3. If that fails, repeat steps 1 and 2 with the non-negated normal.
    4. If that fails, try steps 1 to 3 with Collision.contacts[1].
    5. Repeat step 4 until successful or until all contact points are exhausted.
    6. Give up, return Vector3.zero.

    This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works for enough cases. Is there a better way?

    Read the article

  • How Can You Get More Productive In Life Sciences Sales?

    - by charles.knapp
    Only half of all doctors will meet with pharmaceutical sales reps, and that percentage continues to decrease. Furthermore, when reps are granted an opportunity to share information, the average interaction is only about a minute and a half. Concurrently, call quotas continue to increase. Why does this matter? Sales reps need to spend less time on traditional planning and after-call reporting, more time making calls, and make more productive use of short presentation times. Fortunately for sales reps, Oracle offers the first life sciences CRM that is designed to double sales time and halve reporting time. In particular, our new Life Sciences Edition Offline Client is designed so that you can actually turn the screen around, so that your CRM is useful for presentations and not just reporting, whether you are connected to the cloud or working offline, such as in restricted clinical environments. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales representative.

    Read the article

  • A bounce-rate attack to manipulate SEO ?

    - by Denis Volovik
    This is a question to experienced people that might help us shed some light on the issue. We noticed a very strange behavior on our site in Google Analytics. Some dude from Finland, namely from the city of Kouvola, is hitting one of our pages - only one page on our site - about a hundred times per day, all with an average bounce rate of 90%+... This is causing our overall bounce rate to go up by 1 to 3% per day, which is very disturbing, since we're trying to do our best to keep it as low as possible. And obviously having it jump from ~24% to 27% just because of that crazy dude is not making us happy at all... We tried implementing a geo-targeted script in order to catch this particular visitor and deliver him a juicy message, and it seemed to help in the beginning - the hits stopped for a day or two - but now he's back... The geo-targeted script was also logging all IP addresses for page requests originating from Finland in order to find out more details (and in order to block them on the server level later), but the requests came mainly from cable or DSL connections with various, not constantly repeating, IPs... We are all wondering what he is up to, really. I think this page should be kept updated with ideas on how to combat this, and perhaps someone could also shed light on what it might be. What is the reason for doing this "bounce-rate attack", as I call it? There was a similar question asked on Stack Overflow earlier, with no meaningful answer - here - How to stop bounce rate manipulation.

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures; even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?

    - normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex.
    - tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate it as the difference between vertices in the x direction (normalized).
    - bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. By the same reasoning, this should be the vector along the surface in the y direction, calculated as the difference between vertices in the y direction (normalized).

    However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain to me the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, when u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
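    For reference, the general per-triangle computation that borrowed code usually implements solves the pair of equations e1 = du1*T + dv1*B and e2 = du2*T + dv2*B for the tangent T and bitangent B, where e1, e2 are two triangle edges in 3D and (du, dv) are the same edges in UV space. A self-contained sketch (Vec3 is a minimal stand-in class of mine, not from any library):

        class Vec3 {
            float x, y, z;
            Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o)    { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 scale(float s) { return new Vec3(x * s, y * s, z * s); }
            Vec3 normalize()    { float l = (float) Math.sqrt(x * x + y * y + z * z); return scale(1f / l); }
        }

        class TangentSpace {
            /** Tangent (u direction) and bitangent (v direction) of one triangle. */
            static Vec3[] tangentAndBitangent(Vec3 p0, Vec3 p1, Vec3 p2,
                                              float[] uv0, float[] uv1, float[] uv2) {
                Vec3 e1 = p1.sub(p0), e2 = p2.sub(p0);              // edges in 3D
                float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1]; // edges in UV space
                float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];

                // Invert the 2x2 UV-edge matrix to solve for T and B.
                float f = 1.0f / (du1 * dv2 - du2 * dv1);
                Vec3 tangent   = e1.scale(dv2).sub(e2.scale(dv1)).scale(f).normalize();
                Vec3 bitangent = e2.scale(du1).sub(e1.scale(du2)).scale(f).normalize();
                return new Vec3[] { tangent, bitangent };
            }
        }

    When u and v really do increase exactly with x and y, the du/dv terms are proportional to the x/y differences and the formula collapses to the simple difference vectors described above; the UV terms are there so the same code works for stretched, rotated, or otherwise non-axis-aligned mappings, and they also preserve the correct handedness where a mapping mirrors.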

    Read the article
