Search Results

Search found 65160 results on 2607 pages for 'compile time constant'.


  • Are programmers a bunch of heartless robots who lack empathy? [closed]

    - by Graviton
    OK, the provocative title got your attention. My experience as a programmer, and in dealing with my fellow programmers, is that a programmer is usually someone so consumed by his programming work, so absorbed in his algorithmic constructions, that he has little passion or time left for anything else - including empathy for other people, and love and care for the people he loves or should love (such as his spouse, parents, kids, colleagues, etc.). The better a person is in terms of his programming powers, the more deficient he is in terms of love and care, because both honing programming skills and caring for the people around you take time, and one has only so much time to allocate among so many different things. Also, programming (especially an INTERESTING programming job - like writing an AI to predict future search trends) is a highly consuming job; it doesn't just consume you from 9 to 5, it consumes you after 5 and practically every second of your waking hours, because a good programmer can't just magically switch off his thinking hat after the office lights go off (and if you can, then I don't really think you are a passionate programmer - and the prerequisite of a good programmer is passion). So a good programmer is necessarily someone who can't love as much as others do, because the very nature of the programming job prevents him from loving others as much as he wants to. Do you concur with my observation and reasoning?

    Read the article

  • On the frontier between work and home

    - by MPelletier
    I think we've all been there: You hear someone say "hey, wouldn't it be nice if platform X had feature Y?" You look around (on SO!), and the feature really doesn't exist, even though it probably would be useful in many contexts. So it's pretty generic. Your mind wanders for a bit. "How tough would it be? Well, it'd probably be just a snippet. And an ad-hoc function. And maybe a wrapper." And boom, before you know it, you've spent a dozen hours of your free time implementing a FooFeature that's really neat and generic. The kind of code you might not even have the time to spit and shine at work, that would be a bit rushed and not so documented. So now you wonder "wouldn't this be useful to others?" And you've got your blog, maybe a CodeProject account, and your colleague who asked if FooFeature exists might, haphazardly, have come across that blog entry, had it existed before they asked you. On the other hand, there's the NDA agreement. It's sort of vague and general. It doesn't forbid you from coding at home, but it's clear on sharing company code: that's a big NO. But this isn't company code. Or is it? Or will it be? So, what do you do with code (that's more than just a snippet) that you wrote in your off time with universality in mind, but where the idea came from work, and that will most likely be used at work? Can it be published?

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem, and I am introducing that approach in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, add up to the whole whitepaper.

    Abstract: With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for the customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction: A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1, that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (or directed) and unsupervised (or undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for predicting the fraud flag on new incoming transactions.

    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for checking whether the supervised models are still efficient. The accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes both predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds among the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) simplify the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013, brings OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
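
    To make the "lift" idea above concrete, here is a minimal sketch (my own illustration, not code from the whitepaper) that scores transactions with a stand-in model and compares the fraud rate in the top-scored slice against the overall rate, i.e. against the random-sampling baseline:

        # Minimal lift computation over scored transactions, as (score, is_fraud) pairs.
        import random

        def lift_at(scored, k):
            """Lift of checking the k highest-scored transactions vs. random sampling."""
            overall_rate = sum(fraud for _, fraud in scored) / len(scored)
            top = sorted(scored, key=lambda pair: pair[0], reverse=True)[:k]
            top_rate = sum(fraud for _, fraud in top) / k
            return top_rate / overall_rate  # > 1 means better than random checking

        # Toy data: ~1% fraud, and a hypothetical model whose scores weakly track fraud.
        random.seed(0)
        scored = []
        for _ in range(10_000):
            is_fraud = 1 if random.random() < 0.01 else 0
            score = random.random() + (0.5 if is_fraud else 0.0)  # stand-in model output
            scored.append((score, is_fraud))

        print(f"lift when checking the top 500: {lift_at(scored, 500):.1f}x")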

    Read the article

  • Bad style programming, am I expecting too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could define myself a "perfectionist" (but often I'm not), and I thought to refactor an application (really a portion of it), which needs a new (complex) feature. But today I really realized that it's not possible to refactor that application's modules in a reasonable time (say, 24/26 hours, against the available time for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping in mind some design (at least, the obvious one). (Am I in hell?) The problem arises because the application was developed by a colleague of mine. I felt very frustrated. So, I decided to expose the "situation" to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation" .... Am I asking too much of my colleague to write readable code that is managed and documented? Do I expect too much in not having to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • How to get OpenSSH to use ksshaskpass under KDE?

    - by Guss
    When using a GNOME desktop on Ubuntu, if I use the OpenSSH client to connect to another computer (running from gnome-terminal), I get a single graphical popup asking for my private key's pass-phrase. After that I no longer need to enter my pass-phrase, as it is cached by the SSH agent. Under KDE it doesn't work like that - when I start ssh from konsole, I get a text prompt for my pass-phrase every single time, even though ssh-agent is running. If I run ssh-add from the terminal, then I can enter my pass-phrase on the terminal and it will be stored by ssh-agent, and I won't get any more pass-phrase prompts; if I run ssh-add from the KRunner graphical command line (the "Run" dialog), I get a graphical prompt with the same behavior. The problem is that I have to remember to run ssh-add every time I log in to the desktop. How can I get ssh under KDE to behave the same as it does on GNOME: the first time the pass-phrase is needed, pop up a graphical dialog and store the pass-phrase in the agent? I've installed ksshaskpass, but that didn't change anything.

    Read the article

  • Newbie seeking advice on programming in general

    - by user974685
    I need some of you to remember back to a time when you might have been bad at programming... I've been at my new job (as a software developer) for a couple of months now, and I've passed the probation period. I have very little programming experience (C++ only) and am currently working with ASP.NET MVC and Silverlight. So there's a website the company has been working on, and I am joining the effort to make it better, iron out bugs, etc. The problem is learning about a system/website which has already been made, via Visual Studio. I ALWAYS feel HUGELY overwhelmed, never knowing which part of a line I should look up, and generally having lots of trouble getting the big picture. Visual Studio itself is something I'm finding difficult to get to grips with, let alone the ASP.NET framework. I get the impression that because my coworkers have more experience than me, they are getting all the good jobs, and I am left with crap to do - stuff which is not even vaguely programming. Meaning they are learning/creating more, and I am learning/creating next to nothing. I'm getting demoralised, and I'm too scared to say anything. I'm not stupid; I've read and practiced plenty of the fundamental programming concepts... I'm just bloody scared of this damn framework. I look at it and just feel paralyzed. The result is that I keep asking the older veteran guy questions, and he is getting irritated and would rather give me easy/mindless/non-programming jobs to avoid wasting time helping me out. Then, when I don't understand something, I hesitate over whether or not I should ask him yet, trying to decide if it would be a waste of his time. I'm the kind of person who picks things up slowly, but with a lot of attention to detail. The former, I think, is making me look incompetent. If anyone gets where I'm coming from, please say something helpful... I'm scared of losing my job in a few months or something...

    Read the article

  • Optimizing lifestyle and training

    - by Gabe
    I am a college freshman who has recently discovered a passion for computer science. Having had my first lick of formal python training last semester, I have cast aside my previously hedonist way of life and tunneled my sights on becoming the most rounded and proficient programmer I can be. I know that I'm taking strides in the right direction (I've stopped smoking, I've been exercising every day, I've taught myself C++ and OpenGL, and I've begun training in kung-fu and meditation), yet I am still finding myself struggling to achieve satisfactory results. I would like to be able to spend a good 3-4 hours every day burning through textbooks. I have the time cleared and the resources allocated. The problem lies in the logistics-- I have never taken anything seriously before. Recently I've realized that I am clueless when it comes to taking care of myself and gaining control of my mind, and it drastically hinders my productivity. My question is this: How can I learn to manage my time and take care of myself such that I can spend the maximum amount of time every day studying with steady concentration? Personal tricks would be key here: techniques you use to get yourself to sleep, a diet that yields focus, even computer break stretching routines or active reading techniques. Anything you could think of here would be great. I was a low-life in high school and I have the drive to turn my life around, I'm just quite a bit behind in the way of good habits :)

    Read the article

  • Dummy output after upgrade from 12.04 to 12.10, even though sound card is detected

    - by user115441
    So I just recently upgraded my system from Ubuntu 12.04 to 12.10. However, when I booted into 12.10 for the first time, no sound came out of my speakers. I checked the sound settings and the Dummy Output was the only thing showing up. I used "hwinfo --sound" to check whether my sound card was actually installed, and it was:

        hwinfo --sound
        hal.1: read hal data
        process 2687: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 282. This is normally a bug in some application using the D-Bus library.
        libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files
        11: PCI 1b.0: 0403 Audio device
          [Created at pci.318]
          Unique ID: u1Nb._aiKlM91Nt0
          SysFS ID: /devices/pci0000:00/0000:00:1b.0
          SysFS BusID: 0000:00:1b.0
          Hardware Class: sound
          Model: "Intel 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller"
          Vendor: pci 0x8086 "Intel Corporation"
          Device: pci 0x2668 "82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller"
          SubVendor: pci 0x107b "Gateway 2000"
          SubDevice: pci 0x4040
          Revision: 0x04
          Driver: "snd_hda_intel"
          Driver Modules: "snd_hda_intel"
          Memory Range: 0x50240000-0x50243fff (rw,non-prefetchable)
          IRQ: 44 (91 events)
          Module Alias: "pci:v00008086d00002668sv0000107Bsd00004040bc04sc03i00"
          Driver Info #0:
            Driver Status: snd_hda_intel is active
            Driver Activation Cmd: "modprobe snd_hda_intel"
          Config Status: cfg=new, avail=yes, need=no, active=unknown

    I'm not sure what to do here. The only time the sound will actually work is when I boot into my Windows partition and then reboot into Ubuntu, and I don't want to have to do that every time I want to use Ubuntu. I would really appreciate any help I can get on here.

    Read the article

  • Friday Fun: Building Blasters 2

    - by Mysticgeek
    After dealing with unnecessary spreadsheets and TPS reports all week, it’s time to waste time playing a flash game. Today we take a look at Building Blasters 2, where you strategically place explosives to bring down structures. Building Blasters 2 You need to place explosives carefully to clear areas in the red level, keep bystanders safe, and manage your budget. After placing the explosives on the structure, you can set the amount of time that passes before they blow. This comes in handy when you reach advanced levels. When you’re ready to start the demolition click on the Detonate button and watch the buildings fall. If you don’t achieve the objectives, you will get the Demolition Error screen and can replay the level. After you’ve received enough money, you’ll get a message between missions telling you there is enough money to buy items in the shop. You can get enhanced destructive devices such as nitroglycerin, a wrecking ball, call in an air strike and more… If you’re sick of the pointy haired boss dragging you down all week, pretend the structures are the office building and destroy away. Building Blasters 2 is a great way to have fun and let off steam so you can enjoy your weekend. Play Building Blasters 2 For additional fun games to play, make sure and check out the How-To Geek Arcade.

    Read the article

  • How to find the Fastest DNS servers to host our domain?

    - by Denis Volovik
    The question was born because lately we've seen a pretty odd error message (well, at least for us, for the first time) in Google Webmaster Tools - "DNS lookup timeout"... I was pretty sure that with eNom's 5 DNS servers (dns1... to dns5.name-services.com) we were pretty well set... But it appears that from Europe/Hungary, for example, dns1.name-services.com takes 170 ms. to respond to a ping, while GoDaddy's ns75.domaincontrol.com takes only 40 ms. to respond... and at the same time, dns2 to dns5.name-services.com each result in a timeout error (on ping)... This issue came to our attention right in the final stages of optimizing our web-site (almost to death) - basically, just in time... I would love to move our domains to a fast (fastest?) and reliable DNS server - but how do I find one? Also - I did the ping tests from various geographic locations (we have servers in many countries) and GoDaddy seemed to be faster than eNom in almost every case. I'd be very thankful for any hints on this! Edited: Well... maybe this one does not have an answer, after all...
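
    One way to get harder numbers than ping (which some DNS hosts rate-limit or block) is to time actual DNS queries against each candidate server, from each of your locations. A minimal sketch using Python's dnspython package - an assumption on my part; any DNS client would do, and the server/domain names are just the ones from the question:

        # Time an A-record query against each name server directly (pip install dnspython).
        import time
        import dns.exception
        import dns.resolver

        CANDIDATES = ["dns1.name-services.com", "ns75.domaincontrol.com"]
        DOMAIN = "example.com"  # replace with a domain the server is authoritative for

        for host in CANDIDATES:
            # Resolve the name server's own address with the system resolver first.
            ip = next(iter(dns.resolver.resolve(host, "A"))).to_text()
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            r.lifetime = 2.0  # seconds before we call it a timeout
            start = time.perf_counter()
            try:
                r.resolve(DOMAIN, "A")
                print(f"{host}: {(time.perf_counter() - start) * 1000:.0f} ms")
            except dns.exception.Timeout:
                print(f"{host}: timeout")

    Running this from each region measures what resolvers actually experience, which a ping round-trip alone doesn't guarantee.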

    Read the article

  • Any good tutorials on all this web programming stuff for a GUI person? [closed]

    - by supercheetah
    For some reason, I am having a hard time understanding all this web programming stuff - from AJAX to JSON, etc. I've got plenty of experience programming GUIs. I'm currently working on a project in Python, and I thought that maybe I could just use PyJS (since it's GWT for Python, it uses an API that's very familiar to experienced GUI programmers like myself) to compile it with a JavaScript interface on top, but alas, the compiler gave me a spectacular failure. It's obviously not meant to handle much of any Python beyond itself and some of the core Python library. It would have been nice if it could, but I will admit, it would have been the lazy way to do it. I tried to learn Django, but for some reason I'm just having a hard time understanding the tutorial on their website and what it's all doing. Maybe it's not the best framework to learn? Anyway, does anyone have a good primer/tutorial explaining all this stuff, especially for Python, and especially for someone coming from a GUI background?

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code-first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do those go in a separate model that is used by every project?

    Don't split at all - one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?

    Read the article

  • What I’m Reading – Microsoft Silverlight 4 Business Application Development: Beginner’s Guide

    - by Dave Campbell
    I don’t have a lot of time for reading lately, so James Patterson and all those guys are *way* ahead of me … but I do try to make time to read technical material. A couple of books have come across recently and I thought I’d mention them one at a time. The book I want to mention tonight is Microsoft Silverlight 4 Business Application Development: Beginner’s Guide, by Cameron Albert and Frank LaVigne. Cameron and Frank are both great guys and you’ve seen their blog posts come across my SilverlightCream posts many times. I like the writing and format of the book. It leads you quite well from one concept to the next and, for a technical book, it holds your interest. You can check out a free chapter here. I have the eBook because for technical material, at least lately, I’ve gravitated toward that. I can have it with me on a USB stick at work, or at home. Read the free chapter, then check out their blogs. Even if you think you know a lot of this material, I think you’ll find yourself learning something, and besides, it’s a great one-place reference. Good work guys!

    Read the article

  • Top 10 Posts in 2010

    - by dwahlin
    Blogging’s a lot of fun and a great way to share what you’ve learned. It’s also a great way to learn, based upon comments people leave that help you see things in an entirely new way in some cases. Since we’ve now moved on to 2011 (Happy New Year’s!) I wanted to list the Top 10 posts from my blog during 2010, based on individual views. Thanks to everyone who follows my blog and adds comments from time to time. Here’s wishing everyone a great 2011!

    1. Reducing Code by Using jQuery Templates
    2. Integrating HTML into Silverlight Applications
    3. Silverlight is Dead, the Moon is Made of Cheese, and HTML 5 is Ready for Prime Time
    4. Understanding the Role of Commanding in Silverlight 4 Applications
    5. New Article – Getting Started with WCF RIA Services
    6. Simplify Your Code with LINQ
    7. My Favorite iPad Apps….So Far
    8. Final Release of Silverlight Tools for Visual Studio 2010 Released
    9. Handling WCF Service Paths in Silverlight 4 – Relative Path Support
    10. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    Honorable mention: Getting Started with the MVVM Pattern in Silverlight Applications – posted late 2009, so I’m giving it honorable mention status since it’s still one of the most popular posts.

    Read the article

  • Doing "it all from scratch" is different than OS-specifics

    - by Bigyellow Bastion
    Many people tell me that in order to write my own operating system, I need to understand the inner workings of the one I'll be writing it on. That's nonsense. I mean, I understand doing that for education purposes, such as studying the workings of a current OS to gain better knowledge of writing one myself. But the OS I'm writing it on is nothing but my scratchpad, offering me software to write the code in and software to assemble/compile my code into executable instructions. I've been told that I need to decide which OS I'm writing it on before I write it, but all I need is an assembler to produce a flat binary, or a compiler to produce object code and a linker to link it into a flat binary .bin file. Why do people say it matters which OS you make an OS on, when it doesn't?

    Read the article

  • Limiting game loop to exactly 60 ticks per second (Android / Java)

    - by user22241
    So I'm having terrible problems with stuttering sprites. My rendering and logic take less than a game tick (16.6667 ms). However, although my game loop runs most of the time at 60 ticks per second, it sometimes goes up to 61 - and when this happens, the sprites stutter. Currently, the variables used are:

        //Game updates per second
        final int ticksPerSecond = 60;
        //Amount of time each update should take
        final int skipTicks = (1000 / ticksPerSecond);

    This is my current game loop:

        @Override
        public void onDrawFrame(GL10 gl) {
            //This method will run continuously
            //You should call both 'render' and 'update' methods from here
            //Set curTime initial value if '0'
            //Set/Re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                //Time correction to compensate for the missing .6667ms when using int values
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                //Increase loops
                loops++;
            }
            render();
        }

    I realise that my skipTicks is an int and will therefore come out as 16 rather than 16.6667. However, I tried changing it (and ticksPerSecond) to longs, but got the same problem. I also tried changing the timer to nanoTime and skipTicks to 1000000000/ticksPerSecond, but then everything just ran at about 300 ticks per second. All I'm attempting to do is limit my game loop to 60 - what is the best way to guarantee that my game updates never happen more than 60 times a second? Please note, I do realise that very, very old devices might not be able to handle 60, although I really don't expect this to happen - I've tested it on the lowest device I have and it easily achieves 60 ticks. So I'm not worried about a device not being able to handle 60 ticks per second, but rather need to limit it - any help would be appreciated.
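
    For what it's worth, the pattern the time-correction code above is reaching for is usually written as an accumulator-based fixed timestep: elapsed time is carried as a float and consumed in exact 1/60-second slices, so nothing is lost to integer rounding. A language-agnostic sketch (in Python, not the asker's Java; update_logic and render are stand-ins):

        # Fixed-timestep loop: the accumulator carries fractional leftovers between frames.
        import time

        TICK = 1.0 / 60.0  # seconds per logic update

        accumulator = 0.0
        previous = time.perf_counter()
        for frame in range(600):  # stand-in for the driver's render callback
            now = time.perf_counter()
            accumulator += now - previous
            previous = now
            while accumulator >= TICK:
                # update_logic() would go here - exactly 60 calls per second on average
                accumulator -= TICK
            # render() would go here
            time.sleep(0.001)  # stand-in for vsync / frame pacing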

    Read the article

  • SQLAuthority News – Mark the Date: October 16, 2013 – Introducing NuoDB Blackbirds: THE Distributed Database

    - by Pinal Dave
    I am very excited to announce, first on this blog, the release of NuoDB Blackbirds (NuoDB Release 2.0). NuoDB is my favorite application to work with data nowadays. They are increasingly gaining market share, as well as bringing out new features with every new release. I was very excited when I learned that NuoDB is releasing their flagship release 2.0 on October 16, 2013. Interestingly enough, I will be in the USA while this release happens and I will be watching it live during my daytime. Even if I had to stay up the entire night just to watch this release, I would do it. Here are the details of the announcement: Introducing NuoDB Blackbirds: THE Distributed Database. Date: October 16, 2013. Time: 1:00 PM EDT. Location: Online. Registration Link. What is the best DBMS architecture to handle today’s and tomorrow’s evolving needs? The days of shared disk are over. The times are “a-changin” and IT infrastructure has to change with them. Join NuoDB live for the introduction of our latest major product release, NuoDB Blackbirds, and take a look at why the NuoDB distributed database architecture is the only answer for customers like Fathom Voice, a leading provider of Voice over IP (VoIP). NuoDB CEO Barry Morris welcomes Cameron Weeks, CEO of Fathom Voice, to discuss how his company is using the DBMS to break away from the pack and become the hottest player in VoIP. The webcast will include demonstrations of a single, logical database running in multiple geographies and a live Q&A. If for any reason you cannot watch it live, do not worry at all - just register at the Registration Link, and after the event you will get the link to watch it on-demand. You can watch the launch event at any time if you have registered for the launch. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started an informatics degree at university and I'm attending a programming course on C(++). The professor prefers to teach very few things (in 3 months we have just reached the topic of functions) and connects every topic to a style of program design that is somehow similar to Design by Contract. Basically, what he asks us to do is write every exercise with comments stating pre-conditions, post-conditions and invariants that should prove the correctness of each program we write. But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all an abstract thing, then if your intuition about the program is wrong, you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, auto-convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed OK and that had correct pre- and post-conditions too. But at the moment of testing, they were just completely wrong. I had some programming experience before this course, and I had written a lot of lines of code before; I found myself comfortable with just writing a program and unit testing it. That takes less time to accomplish and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take me like 80% of the total time of the exercise. It's harder to think about getting these pre- and post-conditions down correctly than to write the program itself. Since we are like the only course at the only university, probably in the entire world, that does this, could someone please tell me how I should manage this? Am I right in thinking that this isn't worth anything? Should I change university? (There are like double the people attending this course, and it seems that usually very few people pass the exam the first year.) Should I convince myself his method is right?
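
    One middle ground between comments-only contracts and plain unit testing is making the contracts executable, so a wrong intuition fails loudly instead of quietly convincing you it's right. A minimal sketch of that idea (in Python rather than the course's C++; the function is an invented example):

        # Pre- and post-conditions written as runnable assertions instead of comments.
        def int_sqrt(n: int) -> int:
            """Largest r such that r*r <= n."""
            assert n >= 0, "pre-condition: n must be non-negative"
            r = 0
            while (r + 1) * (r + 1) <= n:
                r += 1
            assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
            return r

        # Unit-test style sweep: the contract is now checked on every call.
        for n in range(100):
            int_sqrt(n)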

    Read the article

  • Breaking through the class sealing

    - by Jason Crease
    Do you understand 'sealing' in C#? Somewhat? Anyway, here's the lowdown. I've done this article from a C# perspective, but I've occasionally referenced .NET when appropriate.

    What is sealing a class? By sealing a class in C#, you ensure that no class can be derived from it. You do this by simply adding the word 'sealed' to a class definition:

        public sealed class Dog {}

    Now, writing something like "public sealed class Hamster : Dog {}" will get you a compile error like this:

        'Hamster': cannot derive from sealed type 'Dog'

    If you look in an IL disassembler, you'll see a definition like this:

        .class public auto ansi sealed beforefieldinit Dog extends [mscorlib]System.Object

    Note the addition of the word 'sealed'.

    What about sealing methods? You can also seal overriding methods. By adding the word 'sealed', you ensure that the method cannot be overridden in a derived class. Consider the following code:

        public class Dog : Mammal { public sealed override void Go() { } }
        public class Mammal { public virtual void Go() { } }

    In this code, the method 'Go' in Dog is sealed. It cannot be overridden in a subclass, so writing this would cause a compile error:

        public class Dachshund : Dog { public override void Go() { } }

    However, we can 'new' a method with the same name. This is essentially a new method, distinct from the 'Go' in the base class:

        public class Terrier : Dog { public new void Go() { } }

    Sealing properties? You can also seal properties. You add 'sealed' to the property definition, like so:

        public sealed override string Name
        {
            get { return m_Name; }
            set { m_Name = value; }
        }

    In C#, you can only seal a property, not the underlying setters/getters. This is because C# offers no override syntax for setters or getters. However, in the underlying IL you seal the setter and getter methods individually - a property is just metadata.

    Why bother sealing? There are a few traditional reasons to seal:

    - Invariance. Other people may want to derive from your class, even though your implementation may make successful derivation near-impossible. There may be twisted, hacky logic that could never be second-guessed by another developer. By sealing your class, you're protecting them from wasting their time. The CLR team has sealed most of the framework classes, and I assume they did this for this reason.
    - Security. By deriving from your type, an attacker may gain access to functionality that enables him to hack your system. I consider this a very weak security precaution.
    - Speed. If a class is sealed, then .NET doesn't need to consult the virtual-function-call table to find the actual type, since it knows that no derived type can exist. Therefore, it could emit a 'call' instead of a 'callvirt', or at least optimise the machine code, thus producing a performance benefit. But I've done trials and have been unable to demonstrate this. If you have an example, please share!

    All in all, I'm not convinced that sealing is interesting or important. Anyway, moving on...

    What is automatically sealed?

    - Value types and structs. If they were not always sealed, all sorts of things would go wrong. For instance, structs are laid out inline within a class. But what if you assigned a substruct to a struct field of that class? There might be too many fields to fit.
    - Static classes. Static classes exist in C# but not in .NET. The C# compiler compiles a static class into an 'abstract sealed' class. So static classes are already sealed in C#.
    - Enumerations. The CLR does not track the types of enumerations - it treats them as simple value types. Hence, polymorphism would not work.

    What cannot be sealed?

    - Interfaces. Interfaces exist to be implemented, so sealing to prevent implementation is dumb. But what if you could prevent interfaces from being extended (i.e. ban declarations like "public interface IMyInterface : ISealedInterface")? There is no good reason to seal an interface like this. Sealing finalizes behaviour, but interfaces have no intrinsic behaviour to finalize.
    - Abstract classes. In IL you can create an abstract sealed class. But C# syntax for this already exists - declaring a class as 'static' - so it forces you to declare it as such.
    - Non-override methods. If a method isn't declared as override, it cannot be overridden, so sealing would make no difference. Note this is stated from a C# perspective - the words are opposite in IL. In IL you have four choices in total: no declaration (which actually seals the method), 'virtual' (called 'override' in C#), 'sealed virtual' ('sealed override' in C#) and 'newslot virtual' ('new virtual' or 'virtual' in C#, depending on whether the method already exists in a base class).
    - Methods that implement interface methods. Methods that implement an interface method must be virtual, so they cannot be sealed.
    - Fields. A field cannot be overridden, only hidden (using the 'new' keyword in C#), so sealing would make no sense.

    Read the article

  • A story from SQLvdb and Idera

    - by Peter Larsson
    A year or so back, I struggled with some consistency problems, so I figured I needed a way to "mount" backup files as a virtual database. At the time (SQL Server 2005 and SQL Server 2008) my choice fell on Idera's SQLvdb, because it felt easy enough to use. I used it a few times and it worked great. Some time later we upgraded to SQL Server 2008 R2 and I didn't use SQLvdb for a long time. Until yesterday... I was upset that suddenly SQLvdb took more than 2 hours to mount the backup file (if it succeeded at all). I uninstalled the application and went for lunch. After lunch, I decided to give SQLvdb another chance, so I emailed their tech support and got a response within 30 minutes or so. Now, since I am a SQL Server MVP, they gave me a different serial number from my first, and I downloaded and installed a newer version. But this version was also really slow. I emailed back with the additional information they requested, and to my surprise I had an email this morning when I came back to work, in which Idera explained some of the issues (bugs) and asked me to test a newer version. I did, and now a fresh mount of a 100 GB database (compressed to 20 GB with native compression) located on our SAN takes less than 6 minutes! Thank you. //Peter

    Read the article

  • Knowing 11i HRMS Family Pack K Rollup 5

    - by Robert Story
    Upcoming Webcast
    Title: Knowing 11i HRMS Family Pack K Rollup 5
    Date: 20-Apr-2010 and 27-Apr-2010
    Time: 11:00 AM EST / 8:00 AM PST / 8:30 PM IST
    Product Family: EBS HRMS

    Summary: The webcast will focus on providing customers with essential information to ensure the smooth and successful installation of 11i HRMS Family Pack K Rollup 5. All the critical 11i HRMS Family Pack K Rollup 5 information, such as prerequisites and known issues, will be discussed in the webcast. A close review of common patching and installation problems, including frequently asked questions and regularly encountered errors, is also included.

    Session 1: 20-Apr-2010, 11:00 AM, US Eastern Time (UTC-05:00), duration 1 hour. Register for this session.
    Session 2: 27-Apr-2010, 6:00 AM, US Eastern Time (UTC-05:00), duration 1 hour. Register for this session.
    Session 3: 27-Apr-2010, 7:00 PM, US Eastern Time (UTC-05:00), duration 1 hour. Register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How do I improve my code reading skills

    - by Andrew
    Well, the question is in the title - how do I improve my code reading skills? The software/hardware environment I currently do development in is quite slow with respect to compilation times and the time it takes to test the whole system. The system is quite old/complex, and thus splitting it into several smaller, more manageable sub-projects is not feasible in the near future. What I have realized is that what really hinders development progress is my code reading skills. How do I improve them, so I can spot most of the errors and issues in the code before I even hit the "compile" key, before I even start the debugger?

    Read the article

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Recently Google has made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). Here is what we're getting from the Crawl errors: What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the 1st and 2 for the 2nd). Based on the Crawl stats, we can see that the number of requests made by Google bots doesn't increase. Therefore I believe the number of errors reported is cumulative, and that an error detected on 1 day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or until you manually mark the error as fixed), because if you don't make more requests to a website, there is no way you can check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

    Read the article

  • What are the advantages and challenges of using HaXe over ActionScript 3?

    - by Arthur Wulf White
    I read this: http://webr3.org/blog/haxe/bitmapdata-vectors-bytearrays-and-optimization/ And this: http://stackoverflow.com/questions/5220045/what-are-the-pro-and-cons-of-using-haxe-over-actionscript-3 http://www.haxenme.org/developers/documentation/actionscript-developers/ I read that there are only two books; is that right? http://haxe.org/doc/book I also see that using FlashDevelop you can make an Adobe AIR project with HaXe and compile it to an exe. And I began to wonder, from a game-development perspective: I saw the example in the first link - are there any additional performance advantages to using HaXe instead of AS3 for games development (for the web)?

    Read the article

  • How to integrate the .gdf with a specific exe for Games Explorer

    - by Kraemer
    Hello, I want to create an installer for a game, and after that an icon to be put in Games Explorer on Windows Vista and Windows 7. I have created the GDF (game definitions file), then built the script for the project and obtained the .h, .gdf and .rc files. But I can't compile the .rc file into an executable using Visual Studio 2010, to be used afterwards to create the installer. An error pops up after I set the executable path: "Could not load file or assembly 'Microsoft.VisualStudio.HpcDebugger.Impl, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong? I need to mention that I've never worked with GDF Editor and Visual Studio before. Any answer would be highly appreciated. Thanks!

    Read the article
