Search Results

Search found 318 results on 13 pages for 'scores'.

Page 5 of 13

  • Circle drawing with SVG's arc path

    - by ????
    The following SVG path can draw 99.99% of a circle (try it on http://jsfiddle.net/DFhUF/46/ and see whether you get 4 arcs or only 2; note that in IE it is rendered in VML, not SVG, but it shows a similar issue):

        M 100 100 a 50 50 0 1 0 0.00001 0

    But when it is 99.99999999% of a circle, nothing shows at all:

        M 100 800 a 50 50 0 1 0 0.00000001 0

    And the same happens with 100% of a circle (it is still an arc, isn't it, just a very complete one):

        M 100 800 a 50 50 0 1 0 0 0

    How can that be fixed? I use a function to draw a percentage of an arc, and having to special-case a 99.9999% or 100% arc so that it uses a circle function instead would be kind of silly. Again, a test case on jsfiddle using RaphaelJS is at http://jsfiddle.net/DFhUF/46/ (and if it is VML on IE 8, even the second circle won't show; you have to change it to 0.01).

    Update: I am rendering an arc for a score in our system, so 3.3 points gets 1/3 of a circle, 0.5 gets half a circle, and 9.9 points gets 99% of a circle. But what if a score is 9.99? Do I have to check whether it is close enough to 100% of a circle and then use either the arc function or the circle function accordingly? And what about a score of 9.9987, which one should I use? It is ridiculous to have to know which kinds of scores map to a "too complete" circle and switch to a circle function for those, while using the arc function for "a certain 99.9%" of a circle or a 9.9987 score.
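
    One common workaround, shown here as a rough Python sketch that just builds the path string (the function name and the near-1.0 cutoff are my own choices, not anything from the question): render anything close to a full circle as two half-circle arcs, and everything else as a single arc whose endpoint is computed from the fraction.

        import math

        def arc_path(cx, cy, r, fraction):
            # Build an SVG path for `fraction` (0..1) of a circle centred at (cx, cy),
            # starting at the top and sweeping clockwise.
            if fraction >= 0.9999:
                # A single arc whose start and end points coincide renders as nothing,
                # so a (nearly) complete circle is drawn as two semicircular arcs.
                return (f"M {cx} {cy - r} "
                        f"a {r} {r} 0 1 1 0 {2 * r} "
                        f"a {r} {r} 0 1 1 0 {-2 * r}")
            angle = 2 * math.pi * fraction
            x = cx + r * math.sin(angle)
            y = cy - r * math.cos(angle)
            large_arc = 1 if fraction > 0.5 else 0
            return f"M {cx} {cy - r} A {r} {r} 0 {large_arc} 1 {x:.3f} {y:.3f}"

        # e.g. a score of 9.9 out of 10 maps to 99% of a circle
        print(arc_path(100, 100, 50, 0.99))

    The same split-into-two-arcs trick works directly in raw SVG or RaphaelJS: the score-to-fraction mapping stays a single code path, and only the degenerate "complete circle" case is handled inside the path builder.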

    Read the article

  • GAE python database object design for simple list of values

    - by Joey
    I'm really new to database object design, so please forgive any weirdness in my question. Basically, I am using Google App Engine (Python) and constructing an object to track user info. One of these pieces of data is 40 achievement scores. Do I make a list of ints in the User object for this? Or do I make a separate entity with my user id, the achievement index (0-39) and the score, and then do a query to grab these 40 items every time I want the user data in total? The latter approach seems more object oriented to me, and certainly better if I extend it to hold more than just scores for these 40 achievements. However, considering that I might not extend it, should I even consider just doing a simple list of 40 ints in my user data? I would then skip doing a query, getting the sorted list of achievements, and reading the score from each one just to build a response, etc. Is doing the simple list such a common practice that it is hand-waved as not even worth batting an eyelash at, in terms of whether it might be more costly or complex to process?
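
    For illustration only, here is a minimal sketch of the two layouts using the (older) google.appengine.ext.db API; the model and property names are made up for this example:

        from google.appengine.ext import db

        # Option 1: keep the 40 achievement scores inline on the user entity.
        class UserProfile(db.Model):
            nickname = db.StringProperty()
            achievement_scores = db.ListProperty(int)  # index 0-39 = achievement id

        # Option 2: one small entity per achievement, grouped under the user.
        class AchievementScore(db.Model):
            achievement_index = db.IntegerProperty(required=True)  # 0-39
            score = db.IntegerProperty(default=0)

        def load_scores(user_profile):
            # Option 1 is a single entity get; option 2 needs a 40-row ancestor query.
            query = AchievementScore.all().ancestor(user_profile)
            return dict((a.achievement_index, a.score) for a in query.fetch(40))

    The inline list keeps reads to one datastore get, which matters if the scores are loaded on every request; the separate-entity layout only starts to pay off once each achievement needs more than a single int.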

    Read the article

  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want multi-language translation, from Arabic to English and from English to Arabic. All the data and the configuration file I have work with version 0.91 of mosesserver. Here is my config file:

        #########################
        ### MOSES CONFIG FILE ###
        #########################

        # D - decoding path, R - reordering model, L - language model
        [translation-systems]
        ar-en D 0 R 0 L 0
        en-ar D 1 R 1 L 1

        # input factors
        [input-factors]
        0

        # mapping steps
        [mapping]
        0 T 0
        1 T 1

        # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file
        # OLD FORMAT is still handled for back-compatibility
        # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file
        # OLD FORMAT a binary table type (1) is assumed
        [ttable-file]
        1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table
        1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table

        # no generation models, no generation-file section

        # language models: type(srilm/irstlm), factors, order, file
        [lmodel-file]
        1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm
        1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm

        # limit on how many phrase translations e for each phrase f are loaded
        # 0 = all elements loaded
        [ttable-limit]
        20

        # distortion (reordering) files
        [distortion-file]
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz

        # distortion (reordering) weight
        [weight-d]
        0.3
        0.3

        # lexicalised distortion weights
        [weight-lr]
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3
        0.3

        # language model weights
        [weight-l]
        0.5000
        0.5000

        # translation model weights
        [weight-t]
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2
        0.2

        # no generation models, no weight-generation section

        # word penalty
        [weight-w]
        -1
        -1

        [distortion-limit]
        12

    So please, can someone help me and rewrite this config file so that it works in version 1.0? I also need some Python sample code for translation. I am using XML-RPC in Python, and earlier I sent the HTTP request with:

        import xmlrpclib
        client = xmlrpclib.ServerProxy('http://localhost:8080')
        client.translate({'text': 'some text', 'system': 'en-ar'})

    but now it seems there is no 'system' parameter any more and Moses always uses the default settings.

    Read the article

  • Solr/Lucene Scorer

    - by TFor
    We are currently working on a proof of concept for a client using Solr and have been able to configure all the features they want except the scoring. The problem is that they want scores that make results fall into buckets:

        Bucket 1: exact match on category (score = 4)
        Bucket 2: exact match on name (score = 3)
        Bucket 3: partial match on category (score = 2)
        Bucket 4: partial match on name (score = 1)

    The first thing we did was develop a custom similarity class that returns the correct score depending on the field and on whether the match is exact or partial. The only problem now is that when a document matches on both the category and the name, the scores are added together. Example: searching for "restaurant" returns documents in the category restaurant that also have the word restaurant in their name, and they thus get a score of 5 (4 + 1), but they should only get 4. I assume that for this to work we would need to develop a custom Scorer class, but we have no clue how to incorporate this in Solr. Another option is to create a custom SortField implementation similar to the RandomSortField already present in Solr. Maybe there is an even simpler solution that we don't know about. All suggestions welcome!

    Read the article

  • Resource placement (optimal strategy)

    - by blackened
    I know that this is not exactly the right place to ask this question, but maybe a wise guy comes across and has the solution. I'm trying to write a computer game and I need an algorithm to solve this question: The game is played between 2 players. Each side has 1,000 dollars. There are three "boxes" and each player writes down the amount of money he is going to place into each of those boxes. Then the amounts are compared. Whoever placed more money in a box scores 1 point for it (on a draw, half a point each). Whoever scores more points wins his opponent's 1,000 dollars.

    Example game:
    Player A: [500, 500, 0]
    Player B: [333, 333, 334]
    Player A wins because he won Box A and Box B (but lost Box C).

    Question: What is the optimal strategy for placing the money? I have more questions to ask (algorithm related, not math related), but I need to know the answer to this one first.

    Update (1): After some more research I've learned that this type of problem/game is called a Colonel Blotto game. I did my best and found a few (highly technical) documents on the subject. In short, the problem I have (as described above) is called a simple Blotto game (only three battlefields with symmetric resources). The difficult ones are those with, say, 10+ battlefields and non-symmetric resources. All the documents I've read say that the simple Blotto game is easy to solve. The thing is, none of them actually says what that "easy" solution is.
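
    This doesn't answer the closed-form question, but for a computer game a quick Monte Carlo comparison of candidate allocation strategies is easy to sketch (Python; the two strategies below are arbitrary examples for illustration, not the known equilibrium of the game):

        import random

        def points(a, b):
            # Points scored by allocation a against allocation b (3 boxes).
            pts = 0.0
            for x, y in zip(a, b):
                if x > y:
                    pts += 1
                elif x == y:
                    pts += 0.5
            return pts

        def random_split(budget=1000):
            # Example strategy: cut the budget at two random points.
            c1, c2 = sorted(random.randint(0, budget) for _ in range(2))
            return [c1, c2 - c1, budget - c2]

        def win_rate(strategy_a, strategy_b, trials=100000):
            wins = 0.0
            for _ in range(trials):
                p = points(strategy_a(), strategy_b())
                if p > 1.5:
                    wins += 1
                elif p == 1.5:
                    wins += 0.5
            return wins / trials

        # e.g. random splits against a fixed [500, 500, 0] allocation
        print(win_rate(random_split, lambda: [500, 500, 0]))

    A small tournament of such strategies at least shows which fixed allocations are exploitable, even without the analytic mixed-strategy solution.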

    Read the article

  • XP Leveling System - PHP

    - by Michael Rich
    Rank Table:
    ID, Primary Key
    RANK, The rank or level, 1 being the highest and 3 the lowest
    MIN_SCORE, The minimum amount of points or XP needed to reach the rank
    NAME, The associated name of the rank

        +----+------+-----------+-------------------------+
        | ID | RANK | MIN_SCORE | NAME                    |
        +----+------+-----------+-------------------------+
        |  1 |    1 |     18932 | Editor-in-Chief         |
        |  2 |    2 |     15146 | Senior Technical Writer |
        |  3 |    3 |     12116 | Senior Copywriter       |
        +----+------+-----------+-------------------------+

    Ranking Table:
    ID, Primary Key
    FK_MEMBER_ID, Foreign Key to the member's Primary Key
    FK_RANK, Foreign Key to the Rank Table's RANK column (above)
    SCORE, The member's current earned score or XP

        +-----+--------------+---------+-------+
        | ID  | FK_MEMBER_ID | FK_RANK | SCORE |
        +-----+--------------+---------+-------+
        |   1 |            1 |       1 | 17722 |
        |   2 |            2 |       2 | 16257 |
        |   3 |            3 |       3 | 12234 |
        +-----+--------------+---------+-------+

    In my class I have stored the ranks (matching those in the Rank Table) and the corresponding minimum scores, with RANK as key and MIN_SCORE as value. When a member's score (XP) is updated (up or down) I want to test the updated score against the array below to determine whether their rank needs updating too:

        private $scores = array('3' => '12116', '2' => '15146', '1' => '18932',);

    Using the updated score, how could I determine the corresponding rank from the above array? Everything is open to scrutiny; this is my first time creating a ranking system, so I hope to get it right :)
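
    One way to do the lookup, sketched in Python since the logic is the same in PHP: sort the thresholds from highest to lowest and return the first rank whose minimum score the updated score reaches (the numbers are the ones from the question):

        # rank -> minimum score needed, as in the question's $scores array
        RANK_MIN_SCORES = {1: 18932, 2: 15146, 3: 12116}

        def rank_for_score(score):
            # Check the highest rank first (rank 1 has the largest MIN_SCORE).
            for rank, min_score in sorted(RANK_MIN_SCORES.items(),
                                          key=lambda kv: kv[1], reverse=True):
                if score >= min_score:
                    return rank
            return None  # below the lowest threshold: no rank yet

        print(rank_for_score(17722))  # -> 2 (Senior Technical Writer)
        print(rank_for_score(12234))  # -> 3 (Senior Copywriter)

    In PHP the equivalent is roughly an arsort() over the $scores array followed by returning the first key whose value the updated score meets.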

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used for customizing web sites, consumer communications, etc. Well, if you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database at which you aim a query, and a physical disk then has to start rotating someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?
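
    As a sketch of the in-memory direction only (using the redis-py client; the key naming and fields are invented for this example, and hset(..., mapping=...) assumes a reasonably recent redis-py): each profile lives in an in-memory hash keyed by consumer ID, so a lookup is a single round trip rather than a disk seek.

        import redis

        r = redis.Redis(host="localhost", port=6379)

        def save_profile(consumer_id, profile):
            # profile: flat dict of demographic variables and analytical scores
            r.hset("profile:%d" % consumer_id, mapping=profile)

        def load_profile(consumer_id):
            raw = r.hgetall("profile:%d" % consumer_id)
            return {k.decode(): v.decode() for k, v in raw.items()}

        save_profile(42, {"age_band": "30-39", "likely_to_churn": "0.83"})
        print(load_profile(42))

    The same shape works with other key-value stores; the point is that a profile read never touches a relational join or a spinning disk on the request path.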

    Read the article

  • Best tree/heap data structure for fixed set of nodes with changing values + need top 20 values?

    - by user350139
    I'm writing something like a game in C++ where I have a database table containing the current score for each user. I want to read that table into memory at the start of the game, quickly change each user's score while the game is being played in response to what each user does, and then write the current scores back to the database when the game ends. I also want to be able to find the 20 or so users with the highest scores. No users will be added or deleted during the short period when the game is being played. I haven't tried it yet, but updating the database directly might take too much time while the game is being played.

    - Fixed set of users (might be 10,000 to 50,000 users).
    - Will map user IDs to their score and other user-specific information. User IDs will be auto_increment values.
    - If the structure has a high memory overhead, that's probably not an issue. If the program crashes during gameplay it can just be restarted.
    - Quickly get a user's current score.
    - Quickly add to a user's current score (and return their current score).
    - Quickly get the 20 users with the highest scores.
    - No deletes. No inserts except when the structure is first created, and how long that takes isn't critical.
    - Getting the top 20 users will only happen every five or ten seconds, but getting/adding will happen much more frequently.

    If not for the top-20 requirement, I could just create a memory block equal to sizeof(user) * max(user id) and put each user at user_id * sizeof(user) for fast access. Should I do that plus some other structure for the top-20 feature, or is there one structure that handles all of this together?
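
    The question is about C++, but the shape of the answer is language-independent; here is a sketch in Python of one common split, with names of my own choosing: a flat map for O(1) get/add plus an occasional partial sort for the top 20.

        import heapq

        class ScoreBoard:
            def __init__(self, initial_scores):
                # initial_scores: user_id -> score, loaded from the database once.
                self.scores = dict(initial_scores)

            def get(self, user_id):
                return self.scores[user_id]

            def add(self, user_id, delta):
                self.scores[user_id] += delta
                return self.scores[user_id]

            def top(self, n=20):
                # O(U log n) partial sort; cheap enough if it only runs every few seconds.
                return heapq.nlargest(n, self.scores.items(), key=lambda kv: kv[1])

        board = ScoreBoard({1: 120, 2: 340, 3: 95})
        board.add(3, 50)
        print(board.top(2))

    Since the user set is fixed and the IDs are dense auto_increment values, the map can indeed be the flat sizeof(user) * max(user id) block described above; only the infrequent top-20 query needs anything more, and a periodic partial sort over the scores covers that.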

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days I think we’re headed down that road again as a result of browsers implementing features differently. I’ve been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we’re already seeing demand for training on HTML5) and there’s a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren’t there yet and many people outside of the development world don’t really feel a need to upgrade their browser if it’s working reasonably well (Mom and Dad come to mind) so it’s going to be awhile. There’s a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they’re supported. They don’t test for everything and are very clear about that on the site: “The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost.” It looks like their tests haven’t been updated since June, but the numbers are pretty scary as a developer because it means I’m going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing you say? I’d have to disagree since HTML5 takes it to a whole new level. In today’s world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available it’s going to be a challenge. 
Certain features such as Canvas are supported fairly well across most of the major browsers, while other features such as audio and video are hit or miss depending upon what codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for the different browsers I have installed on my desktop PC and laptop. A specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta, and it didn't do nearly as well as I expected it would, but it's not officially out yet so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not; I'm actually really excited about what it offers. However, I'm trying to be realistic: having been through something like this many years ago, I feel it'll definitely add a new level of headache to the Web application development process. On the flip side, developers who are able to target a specific browser (typically intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9.

    [Score table omitted in this excerpt; it compared Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (still a beta at the time), and Internet Explorer 8.]

    Read the article

  • Is JSF really ready to deliver high performance web applications?

    - by aklin81
    I have heard a lot of good things about JSF, but as far as I know people have also had lots of serious complaints about this technology in the past, and I am not aware of how much the situation has improved. We are considering JSF as a probable technology for a social network project. But we are not aware of JSF's performance scores, nor could we really find any existing high-performance website that uses JSF. People complain about its performance and scalability issues. We are still not very sure if we are doing the right thing by choosing JSF, and thus would like to hear from you all about this and take your inputs into consideration. Is it possible to configure JSF to satisfy the high performance needs of a social networking service? Also, to what extent is it possible to live with the current problems in JSF?

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
    If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won’t be running and interfering with the clean-up process. The Problem With Cleaning Up Malware From Within Windows Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows. Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn’t running. The problem here is that the malware and antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf — the malware can put up a fight. Why You Should Use an Antivirus Boot Disc Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won’t have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won’t be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won’t be able to set up the tricks they use at Windows boot time to hide themselves from the rest o the operating system. The antivirus will be able to see the rootkits and remove them. These tools are often referred to as “rescue disks.” They’re meant to be used when you need to rescue a hopelessly infected system. Bootable Antivirus Options As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they’re offered by companies that specialized in paid antivirus solutions. Here are a few good options: avast! Rescue Disk – We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media. BitDefender Rescue CD – BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc. Kaspersky Rescue Disk – Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc. These are just a handful of options. If you prefer another antivirus for some reason — Comodo, Norton, Avira, ESET, or almost any other antivirus product — you’ll probably find that it offers its own system rescue disk. 
How to Use an Antivirus Boot Disc Using an antivirus boot disc or USB drive is actually pretty simple. You’ll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer. Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn’t, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this. Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove — or even detect — all the malware from within it. Image Credit: aussiegall on Flickr     

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactic; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed. Scaling is not free For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is  13k/R1 > 57k/R2 ? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring  13k/R1  on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users. Choosing the denominator radically changes the picture When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary: not necessarily comparable across vendors; fluid: modern chips shift chip resources in response to load; and invisible: unless you have a microscope, you can't see it. By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get: 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; and I can accurately compare the number of chips in my system to the count in some other vendor's system; and Tthe probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores". SPEC Fair use requirements Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result) The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, You can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle and You can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. 
Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value." Substantiation and transparency Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable. T5 claim for "Fastest Processor" Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results. Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The Industry Standard Benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5: provides better performance than both POWER7 and POWER7+, for 1 chip vs. 1 chip, for 8 chip vs. 8 chip, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is: 3750 for SPARC 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • How Will I Know When My Exam Results Are Available?

    - by Brandye Barrington
    On November 15, 2012, Oracle Certification exam results became available directly from Oracle's certification portal, CertView. This change requires candidates to authenticate their CertView accounts before being able to access their exam results. The Oracle Certification team has developed a series of videos to help candidates through this new process. Check back often as we will be highlighting these videos over the coming weeks. You can find all the information you will need on this new process, along with relevant questions and answers, on our website. As always, you can contact Oracle Certification Support if you need assistance.

    YOUR QUESTIONS ANSWERED / More Information:
    FAQ: Receiving Exam Scores
    FAQ: How Do I Log Into CertView?
    FAQ: How To Get Exam Results
    FAQ: Accessing Exam Results in CertView
    FAQ: What If I Don't Get An Exam Results Email Alert?
    FAQ: How To Download and Print Exam Score Reports
    FAQ: What If I Think My Exam Results Are Wrong In CertView?
    FAQ: Is Oracle Changing The Way That Exams Are Scored?

    Read the article

  • Advice on starting a new job

    - by Sisiutl
    In a week I will start a new job at a manufacturing company, managing the development of a new eCommerce site. The company scores about a 3 on the "Joel" test. I will inherit 3 programmers who developed the company web site and do general IT programming. I have the grey hair and credentials to have their initial respect, but I'm an engineer, not a manager. I'm looking for practical advice, particularly for the first 90 days, on how to establish myself, keep the team together, and move forward.

    Read the article

  • Accessing Exam Results on CertView

    - by Brandye Barrington
    On November 15, Oracle Certification exam results became available through Oracle's certification portal, CertView. The video above provides a more in-depth look at one aspect of this new process. If you need more information on this new process, you can view the full announcement on our website. As always, if you need assistance with your CertView account, please contact Oracle Certification Support.

    YOUR QUESTIONS ANSWERED / More Information: CertView
    FAQ: Receiving Exam Scores
    FAQ: How Do I Log Into CertView?
    FAQ: How To Get Exam Results
    FAQ: How Will I Know When My Exam Results Are Available?
    FAQ: What If I Don't Get An Exam Results Email Alert?
    FAQ: How To Download and Print Exam Score Reports
    FAQ: What If I Think My Exam Results Are Wrong In CertView?
    FAQ: Is Oracle Changing The Way That Exams Are Scored?

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] On the Source engine (and its predecessor GoldSrc, itself derived from Quake's engine), the game objects are divided into two types, world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first glance this idea seems reasonable, but it can take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
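
    To make the idea concrete, here is a bare-bones sketch of that pattern (generic Python with invented class names; it is not the actual Source code), including the naive loop the question worries about:

        class Entity:
            # Base class: anything the engine updates once per frame.
            def think(self, dt):
                pass  # default: no per-frame logic

        class Player(Entity):
            def __init__(self):
                self.score = 0

            def think(self, dt):
                # read input, move, update score, play sounds, etc.
                pass

        class World:
            def __init__(self):
                self.entities = []  # players, particles, sounds, scores, ...

            def spawn(self, entity):
                self.entities.append(entity)

            def frame(self, dt):
                # Naive version: visit every entity on every frame.
                for entity in self.entities:
                    entity.think(dt)

        world = World()
        world.spawn(Player())
        world.frame(dt=1.0 / 60.0)

    Real engines keep the cost down by not visiting everything every frame: entities can sleep, register the time of their next think, or be partitioned so that only active or nearby entities are iterated, which is essentially the answer to the "too many entities" worry.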

    Read the article

  • What all items can I put on my resume without it looking tacky? [closed]

    - by Earlz
    I've been searching for work, and so far it's very hard for me to even get a call back. So, I'm looking at adding things to my resume. I know a resume doesn't need to be over 2 pages. I have the basics:

    - Objective/personal info
    - General skills (languages known, etc.)
    - Work experience

    Some things I'm considering adding to it:

    - My college education (though I don't have a degree)
    - Awards given for programming skills in high school (curriculum contests and AP test scores)
    - Open source projects?

    Would any of these 3 items look tacky? And I only have about 1.5 years of work experience, but I've been programming since I was 13. Is there anything else I can add to my resume that would give me a better chance of getting my foot in the door?

    Read the article

  • How to get all keys values of the player prefs in unity [java script ]

    - by Akari
    In the first test game I've developed, if the player passes all the levels and wins, he must enter his name, and his name and score are then stored in PlayerPrefs. There is another scene that displays the names and scores of all the users who have passed the game. I've been searching since this morning and have tried all the ways I know, but I have failed to get this working. Is it possible to display all the key/value pairs previously stored in PlayerPrefs? Or can someone provide me with a JavaScript snippet to do this? Thanks...

    Read the article

  • Ultra-quick Samsung WebKit review

    On Thursday I got a Samsung bada test phone (the Wave) that runs the latest installment of Samsung WebKit, and of course I subjected it to various tests. The verdict is clear: excellent browser. As far as I’m concerned it ousts Opera Mobile from my personal top three. Judge for yourself. This is what the latest Samsung WebKit supports: It scores second, after Safari 4 for desktop, in my great WebKit test. It leaves both iPhone and Android in the dust; although I haven’t yet tested Android 2.1 and...

    Read the article

  • Why has there been no serious research in statistical programming languages for 25 years?

    - by Robert
    The two main statistical languages today are S (in the form of R) and SAS, which today pretty much have the form they had 25 years ago. Whatever usability problems or worker productivity problems they had then, they still have today. I'm a data language designer, and I look at, largely, four aspects:

    - Usability (learning curve & readability; here Python scores high)
    - Productivity (how long it takes to finish your work)
    - Flexibility (SAS and R don't have problems here, but a macro library will)
    - Reliability (in the QA/reproducibility sense; usually a PL does better than a GUI here)

    By the way, I have a language that can produce complex statistical tables much faster than SAS (like 25 lines of code instead of several hundred lines of code). And I'm going to produce a language for data cleaning that will be great for usability (it'll be my third).

    Read the article

  • What sort of leaderboard for my game?

    - by Martin
    I recently published a word game for Windows Phone and I am really happy to have some players. The game is entirely offline and, at the end of a game, the player's score is published to a server. I'm collecting the scores to build a leaderboard. Right now, I don't believe that the leaderboard I offer my users is appropriate. I essentially accumulate the scores of all of a user's games for a given day, and that becomes their score. So if Player 1 plays 3 games and gets 100, 150 and 200 points, their score for the day is 450 points. I would like to get your ideas and opinions. How do I keep my game challenging and engaging with a good leaderboard? Should I continue accumulating the score for a day? Should I just keep the best score? Thanks!

    Read the article

  • New Exam Score Report Process Coming Soon

    - by Paul Sorensen
    Hi Everyone! I want to give you a preview of a process change that will be coming in the next few weeks. We will soon announce a change in the way that Oracle certification candidates receive their exam scores and score reports (after they take an exam). Once the change occurs you will need to have an Oracle Web Account in order to access your exam score. This is the account that is used to log into the Oracle website for things such as OTN access, software downloads and other Oracle services. If you already have an Oracle Web Account then you are already in good shape! If you do not have an Oracle Web Account then you should create one now (in preparation for this change)! Look for additional announcements and detailed information in the coming weeks. Thanks,

    Read the article

  • Connectify Dispatch Links Multiple Network Nodes Into a Mega Connection

    - by Jason Fitzpatrick
    Connectify Dispatch wants to change the way you interact with the networks around you by making it dead simple to mesh all available Wi-Fi, cellular, and Ethernet connections into a massive and stable pipeline. Dispatch makes it open-and-click easy to hook up multiple Wi-Fi nodes, your cellphone, and even Ethernet connections into a single blended connection. While the video above gives a great overview of the process, check out the video below to see it in real-world action. The project is currently in the last phase of KickStarter funding, so now is a great time to score Connectify Dispatch at a steep discount: pledging as little as $10 to fund the project, for example, scores you 50% off a 6-month Pro license. Hit up the link below to read more about the project, check the KickStarter status, and see all the neat features in the development pipeline. Dispatch: The Internet, Faster. [KickStarter]

    Read the article

  • What is a good way to measure game virality?

    - by Chris Garrett
    I have added some social features to an iPhone game (Lexitect, if you're curious), such as email, Twitter, and Facebook integration for sharing high scores. Along with these features, I am measuring how many times users make it to each step. The goal of these features is to make the game more viral, and I am trying to arrive at a measure of game virality. I would think that a game virality metric would produce a number based around 1.0, where 1.0 = zero viral growth and 1.01 would represent 1% viral growth over some unit of time. How is virality normally measured, and in what units? How is time capped on the metric? I.e., if I gave each player a year to determine how many recommendations they make, I wouldn't get any real numbers for a year from the time I start tracking it. Are there any standards for tracking virality in a meaningful way?
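
    For what it's worth, the usual yardstick here is the viral coefficient (often called the K-factor): invitations sent per user multiplied by the conversion rate of those invitations, measured over a defined viral cycle time. K > 1 means self-sustaining growth, which lines up with the 1.0 baseline described above. A tiny sketch of the arithmetic, with made-up numbers:

        def k_factor(invites_per_user, conversion_rate):
            # New users generated by each existing user in one viral cycle.
            return invites_per_user * conversion_rate

        def users_after(initial_users, k, cycles):
            # Each cycle, every newly acquired user brings in k more (geometric growth).
            total = initial_users
            new = float(initial_users)
            for _ in range(cycles):
                new *= k
                total += new
            return total

        # e.g. each player shares 2 high scores and 15% of recipients install the game
        k = k_factor(2, 0.15)  # 0.30: below 1.0, so growth eventually stalls
        print(k, users_after(1000, k, cycles=5))

    Capping time then becomes part of the metric's definition: you pick a cycle length (say, the first 7 or 14 days after install) and only count recommendations and conversions inside that window, so the number stabilises without waiting a year.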

    Read the article
