Search Results

Search found 2034 results on 82 pages for 'mini 110'.


  • CodePlex Daily Summary for Monday, October 24, 2011

    Popular Releases:
    - People's Note 0.31: Added note tag editing. Changed note edit conflict resolution to keep the latest version. To install: copy the appropriate CAB file onto your WM device and run it.
    - Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 (September 2011). Upgraded the tools to support the Windows Phone Developer Tools RTW. Updated SQL Azure-only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package). Changed the Shared Access Signature service interface to support more operations. Refactored the Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...
    - Workflow Automation (for Dynamics CRM 2011) Release 1.0: Initial release, version 1.0.
    - Window Manipulation with the Microsoft Touch Mouse, Window Touch v1.0: This is the initial release of the Window Touch software, which may have bugs and incomplete interactions. Please be patient with us as we work out all of the kinks, and feel free to send comments. To install and run the program, download and double-click the .msi file below. Make sure you already have a Microsoft Touch Mouse and can use it before installing.
    - xUnit.net Contrib, xunitcontrib-resharper 0.4.4 (dotCover): This release provides a test runner plugin for ReSharper 6.0 RTM, targeting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issue: support for dotCover code coverage (4132). Note that this build works against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version-independent runner system, this package can r...
    - BookShop: BookShop WP7 client.
    - Ribbon Editor for Microsoft Dynamics CRM 2011 (0.1.2122.266): Added CodePlex and PayPal links. New icon. Bug fix: can't connect to an IFD deployment when the discovery service URL has been customized.
    - SiteMap Editor for Microsoft Dynamics CRM 2011 (1.0.921.340): Added CodePlex and PayPal links. New icon.
    - MVCQuick 0.3.1: NHibernate 3.2 Repository (ORuM), Spring.Net 1.3.2 Container (IoC), Common.Logging 1.2 Logging, ASP.NET Security Provider, MVCQuick.Framework, MusicStore.
    - DotNet.Framework.Common 4.0.
    - XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64-encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...
    - Media Companion MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release tackles the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...
    - CODE Framework 4.0.11021.0: This build adds a lot of our WPF components, including our MVVM and MVC components, as well as a "Metro" and "Battleship" style.
    - GridLibre para Visual FoxPro v3.5: This tool helps users and programmers with data handling, such as filtering, multi-selection, and auto-formatting of columns, including assignment of the ControlSource.
    - Umbraco CMS, Umbraco 5.0 CMS Alpha 3: Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3? This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...
    - Vkontakte WP: source code.
    - Way2Sms Applications for Android, Desktop/Laptop & Java-enabled phones, Way2SMS Desktop App v2.0: 1. Fixed an issue with sending messages due to changes to the Way2Sms site. 2. Updated the character limit to 160 from 140.
    - GART - Geo Augmented Reality Toolkit 1.0.1: Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking change: as noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...
    - Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Moved the JS1284 error (Misplaced Function Declaration) so it only fires in strict mode. I got a couple of complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.
    - Naked Objects Release 4.0.110.0: Corresponds to the packaged version 4.0.110.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consult the file HowToBuild.txt in the release. Documentation: please note that after ...

    New Projects:
    - 360zebra4en
    - AG: A web framework that can leverage Silverlight, but falls back on native HTML if Silverlight is not available.
    - BookShop
    - CompendiumImport: Import data from the Dungeons & Dragons Compendium to Masterplan libraries.
    - CS 6235 Arbiter Server: CS 6235 Arbiter Server.
    - Dual numbers for automatic differentiation: Dual numbers can be used to automatically calculate numerically stable derivatives of functions.
    - Effort - Entity Framework Unit Testing Tool: Effort is a powerful unit testing tool that brings an easy way to create unit tests for Entity Framework-based applications. It can manipulate the behavior of EntityConnection or ObjectContext objects so that data operations are executed by a fake in-process database, while omitting the real database completely. This mechanism makes it possible to remove the dependency between your unit tests and the real database.
    - FlexiCache for ASP.NET applications: This library provides extended cache capabilities to ASP.NET applications. It includes MongoDB and SQL Server output cache providers extending ASP.NET Output Cache capabilities by allowing cached data to be stored outside of the application process, which is especially important in web-farm scenarios. This library also provides "Session-On-Demand" functionality: the ability to separate ASP.NET session data into subsets that can be stored outside of the main ASP.NET session and loaded on demand w...
    - HtmlAgilityPackContrib - Logical extension to HtmlAgilityPack: A logical extension to HtmlAgilityPack to parse HTML using jQuery-like methods, inspired by jSoup.
    - jsonhttphandler: The JsonHttpHandler is a simple JSON-oriented HTTP handler to easily integrate JSON GET, POST, and JSONP web services into your application.
    - My Recent Documents: This web part for SharePoint 2010 is developed for users that need help finding the last number of documents they have been working on. The target user has trouble recalling where he/she placed their documents. This web part gives the following features: 1. Locate your last edited documents. 2. Customize how many documents the web part should find. 3. Should it look in subsites also? 4. Show the structure where the documents are located, with easy links to either the document lib...
    - Option pricing for arbitrary distributions: European option pricing with arbitrary distributions, using dual numbers to calculate greeks.
    - Project Tracy
    - RegSharp: RegSharp provides server functionality for the Sencha ExtJS (http://www.sencha.com/products/extjs/) paging grid. RegSharp implements the sorting, filtering, and paging logic on the server side that is required for using the paging grid. It's developed in C#.
    - SharePoint/TFS Continuous Integration Starter Pack: Provides a customized TFS Build workflow and PowerShell scripts to get started with Continuous Integration (automated builds) in SharePoint 2010/TFS 2010. This pack will allow you to automatically build and deploy WSPs using TFS, and optionally also include automated testing as part of the build, such as Visual Studio 2010's Coded UI Tests.
    - Sharp Investor: Sharp Investor pools various online sources as well as performing a couple of local calculations to return recommended stocks. The program is easy to use and requires very little work to find profitable stocks online. It's developed in C#.
    - SharpXML: Aims to be a simpler library for interacting with XML files that exposes attributes and child elements as properties and objects respectively.
    - Splash: Splash is an interactive MediaCenter-style YouTube client written in WPF.
    - TomCdc: TomCdc is a solution which makes tracking of SQL database changes easy. A quick and simple installation process allows you to start using the solution in just a few minutes. It supports all versions of Microsoft SQL Server. You'll no longer have to write triggers manually to find out what process is changing data in a table.
    - Track your work: Another work-time tracker.
    - TumblePower: TumblePower is a simplified API library for Tumblr. Use it in your application to post Text, Photos, Quotes, Links, Chats, Audio and Videos, as well as set functions such as tags and dates, and choose whether the post should be private or not.
    - VisualBASH: This project aims to extend Visual Studio's language support to include bash scripting. This includes syntax highlighting, code completion, and syntax error checking.
    - Window Manipulation with the Microsoft Touch Mouse: Window Manipulation for the Microsoft Touch Mouse provides a set of simple gestures for moving and resizing windows.
    - Workflow Automation (for Dynamics CRM 2011): Workflow Automation for Dynamics CRM 2011 allows users to automate or schedule workflow execution via Windows Task Scheduler.
    - XNA Model Viewer: The XNA Model Viewer allows you to load FBX files and view them. It allows you to test that models will work in XNA, determine the effect of modifying bone transforms, and view animation clips. You can examine the bones and meshes and see the complete hierarchy.


  • How to find and filter blobs from a segmented image using Python?

    - by Python Team
    I'm trying to detect a number plate in an image. I have converted the image to grayscale and segmented it. Now I have to find and filter blobs in the image to detect the number plate. I will explain what I did. I read the segmented image:

        license_plate = cv2.imread('license1_segmented.png', cv2.CV_LOAD_IMAGE_COLOR)
        license_plate_size = (license_plate.shape[1], license_plate.shape[0])
        mask = cv2.cv.CreateImage(license_plate_size, 8, 1)
        cv2.cv.Set(mask, 1)
        thresh_image_ipl = cv2.cv.CreateImage(license_plate_size, cv2.cv.IPL_DEPTH_8U, 1)
        cv2.cv.SetData(thresh_image_ipl, thresh_image.tostring(), thresh_image.dtype.itemsize * 1 * thresh_image.shape[1])
        min_blob_size = 100  # Blob must be 30 px by 30 px
        max_blob_size = 10000
        threshold = 100
        myblobs = CBlobResult(thresh_image_ipl, mask, threshold, True)  # <-- error here
        myblobs.filter_blobs(min_blob_size, max_blob_size)
        blob_count = myblobs.GetNumBlobs()

    I am trying to find and filter blobs in the image, but I get an error when passing the parameters to CBlobResult (marked above). This is the error:

        Traceback (most recent call last):
          File "rectdetect1.py", line 110, in <module>
            myblobs = CBlobResult(thresh_image_ipl, image_area, threshold, True)
          File "/home/oomsys/pyblobs-read-only/blobs/BlobResult.py", line 92, in __init__
            this = _BlobResult.new_CBlobResult(*args)
        NotImplementedError: Wrong number or type of arguments for overloaded function 'new_CBlobResult'.
          Possible C/C++ prototypes are:
            CBlobResult::CBlobResult()
            CBlobResult::CBlobResult(IplImage *,IplImage *,int,bool)
            CBlobResult::CBlobResult(CBlobResult const &)

    Can anyone help me find and fix this error? Thanks in advance.
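
    Judging from the error text alone, the call has to match the four-argument prototype exactly; here is a hedged sketch of the corrected call, reusing the question's variable names (note the traceback shows image_area, not mask, in the second position, so verifying that argument really is an IplImage is the first thing to check):

        # The only CBlobResult prototype taking arguments is
        #   CBlobResult(IplImage*, IplImage*, int, bool)
        # so the first two arguments must both be IplImages of the same size,
        # the third a plain int, the fourth a bool.
        from blobs.BlobResult import CBlobResult  # module path per the traceback

        myblobs = CBlobResult(thresh_image_ipl,  # IplImage: thresholded source
                              mask,              # IplImage: mask, same size
                              int(threshold),    # plain int
                              True)              # bool: compute blob moments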


  • Feed the Beast Ultimate black screen after login 13.04?

    - by Drew S
    I get a black screen after login, following the Feed the Beast Ultimate splash animation. I have tried manually upgrading LWJGL and using different versions of Java (OpenJDK 6 and 7). No matter what, I get this:

        2013-06-28 15:23:17 [INFO] [STDERR] Exception in thread "Minecraft main thread" java.lang.ExceptionInInitializerError
        2013-06-28 15:23:17 [INFO] [STDERR] at net.minecraft.client.Minecraft.a(Minecraft.java:356)
        2013-06-28 15:23:17 [INFO] [STDERR] at asq.a(SourceFile:56)
        2013-06-28 15:23:17 [INFO] [STDERR] at net.minecraft.client.Minecraft.run(Minecraft.java:746)
        2013-06-28 15:23:17 [INFO] [STDERR] at java.lang.Thread.run(Thread.java:722)
        2013-06-28 15:23:17 [INFO] [STDERR] Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR$Screen.<init>(XRandR.java:234)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR$Screen.<init>(XRandR.java:196)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR.populate(XRandR.java:87)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR.access$100(XRandR.java:52)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR$1.run(XRandR.java:110)
        2013-06-28 15:23:17 [INFO] [STDERR] at java.security.AccessController.doPrivileged(Native Method)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.XRandR.getConfiguration(XRandR.java:108)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.LinuxDisplay.init(LinuxDisplay.java:618)
        2013-06-28 15:23:17 [INFO] [STDERR] at org.lwjgl.opengl.Display.<clinit>(Display.java:135)
        2013-06-28 15:23:17 [INFO] [STDERR] ... 4 more


  • Execute an SSIS package in sync or async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog that comes with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they are executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution] @package_name='my_package.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'BI', @project_name=N'DWH', @use32bitruntime=False, @reference_id=Null
        Select @execution_id
        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'DUMP_ON_ERROR', @parameter_value=@var1
        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started, thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure that you add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

        exec [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'SYNCHRONIZED', @parameter_value=1

    And that's it. PS: From RC0 on, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.


  • OT: Fixing choppy video playback on OS X

    - by terrencebarr
    This is a bit off-topic, but I wanted to share because it seems a lot of people are running into issues with choppy video playback and stutter on Mac OS X. I am using a Mac Mini with Snow Leopard (10.6.8) as a home media center, and it has worked great in the past, playing back music and videos from multiple sources (web, QuickTime, VLC, EyeTV). A few weeks ago the video playback from all my sources started to become choppy, to stutter, and often the picture would hang for seconds at a time. Totally unusable. Drove me nuts for two weeks. After much research and trial and error, it turns out the problem was an outdated Flash Player which seems to have messed up the video pipeline for the entire system. The short of it is: I updated the Flash Player to version 11 directly from the Adobe web site, rebooted the Mac Mini, and all is well again! Judging from the various posts across the web, video playback appears to be a fairly widespread problem for Mac users, and I hope this helps some of you out there! And I can't wait to get rid of Flash altogether; I've lost count of the times it has crashed my browser, hung my system, and screwed things up. Thanks Adobe ;-( Cheers, – Terrence Filed under: Uncategorized Tagged: Adobe Flash, Mac OS X


  • How do I get Google to crawl my content when it's only displayed when you fill in a form?

    - by Sarang Patil
    I have a webpage. It has a form, and the "results" section is blank. When the user searches for items, a list pops up; he/she chooses one option from the list, and then the corresponding results are displayed in the results section. I once decided to log the IP and URL of every visitor to my page, along with the time of the visit. One IP was 66.249.73.26, and a Google search told me it is the IP of the Google bot (link: whatmyipaddress google bot). When I looked up the links this IP visited, it was like this: search?id=100, search?id=110, ..., search?id=200, ..., and afterwards it incremented in steps of 1, like 400, 401, ... But people search for strings, not numbers. And because Googlebot queries with numbers like this, I think the corresponding content is never displayed, so my page content is never indexed, even though it is rich content. So I want to ask: in order to show the Google bot all the content that the webpage has, should I list all the results on the index page and ask users to enter a string to filter results?
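
    For what it's worth, a minimal sketch of the idea proposed at the end of the question, generating a plain, crawlable HTML index that links every searchable item so a bot can reach by links what users reach through the form (the item list, the output file name, and the /search?id= URL shape are all hypothetical placeholders):

        import html

        # Emit one static page linking every item the form can reach.
        items = [(100, "First item"), (110, "Second item")]  # hypothetical data
        links = "\n".join(
            '<li><a href="/search?id={}">{}</a></li>'.format(i, html.escape(title))
            for i, title in items
        )
        with open("all-items.html", "w", encoding="utf-8") as f:
            f.write("<ul>\n" + links + "\n</ul>\n")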


  • Data Aggregation of CSV files in Java

    - by royB
    I have k CSV files (5, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also I would like to know if hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms we have considered so far: hashing (using a ConcurrentHashMap), merge-sorting the files, and a DB (MySQL, Hadoop, or Redis). The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8, for a total of 14 columns. We would like the solution to be the fastest in terms of data processing.
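
    The question is tagged Java, but the hash-based option is easy to show compactly; here is a sketch of the idea in Python (file names and key width are hypothetical). Note that a general-purpose hash map keyed on the exact column tuple has no collision risk to manage, unlike a truncated 64-bit digest: collisions inside the map are resolved by comparing full keys.

        import csv

        KEY_COLS = 3  # f1,f2,f3 in the example above

        totals = {}
        header = None
        for path in ["file1.csv", "file2.csv"]:  # hypothetical file names
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                for row in reader:
                    key = tuple(row[:KEY_COLS])
                    vals = [int(x) for x in row[KEY_COLS:]]
                    cur = totals.get(key)
                    totals[key] = vals if cur is None else [a + b for a, b in zip(cur, vals)]

        with open("merged.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            for key, vals in totals.items():
                writer.writerow(list(key) + vals)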


  • My Apache access log contains weird GET and POST requests, what can I do?

    - by Konstantin
    My Apache access log contains weird GET and POST requests. Is it possible to examine which of these are harmful? For example:

        114.232.151.185 - - [11/Jun/2014:20:11:33 +0200] "GET http://hotel.qunar.com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1" 404 1167
        103.30.175.10 - - [12/Jun/2014:08:35:17 +0200] "GET /vtigercrm/ HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 1034
        94.74.229.110 - - [16/Jun/2014:18:46:43 +0200] "GET http://www.msftncsi.com/ncsi.txt HTTP/1.1" 404 1037
        80.73.11.164 - - [20/Jun/2014:01:52:14 +0200] "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1" 404 1034
        162.253.66.76 - - [24/Jun/2014:23:54:30 +0200] "GET /rutorrent HTTP/1.1" 400 226
        122.226.223.69 - - [25/Jun/2014:01:14:27 +0200] "GET http://todd0738.gotoip4.com//hello.html HTTP/1.1" 404 1041

    My full Apache access log file: http://pastebin.com/2x0naQBK
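
    One quick way to triage entries like the long hex-encoded POST above is to percent-decode them and read what they attempt; a small sketch using only the standard library (this particular payload decodes to php-cgi flags such as -d allow_url_include=on and -d auto_prepend_file=php://input, the signature of the well-known PHP-CGI argument-injection attack, so that request is certainly hostile):

        from urllib.parse import unquote_plus

        # Opening bytes of the encoded POST request from the log above
        payload = "%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E"
        print(unquote_plus(payload))  # prints: -d allow_url_include=on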


  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger: so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secured, another reason to not want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The code shown in the original post takes our sample data SimpleMWRRData and easily turns it into a new Oracle database table called IRR_DATA via ore.create(); it also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table.

    At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    The original post shows an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)

    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running. What actually happens is that ore.groupApply uses the Oracle database to perform the work: the Oracle database is what actually splits the IRR_DATA table by ACCOUNT; it then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function; finally, it combines all the individual results from the calls to the R function.

    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never has to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. (The setup script appears as a screenshot in the original post.)

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database, which we do via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features: in the SQL used in the customer proof, we use a parallel hint on the cursor that is the input to our rqGroupEval function, and that is all we need to do to enable Oracle to use parallel R engines. (The SQL and a few Real-Time SQL Monitor screenshots appear in the original post; you may need to view the images full-screen to see everything.) From that monitoring output, you can notice a few things:

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.

    This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.

    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.


  • PowerBroker (Likewise-Open) + Ubuntu 13.04 -> 13.10 Upgrade

    - by JoBu1324
    I just upgraded Ubuntu from 13.04 to 13.10, and now I can't log into Active Directory; my system is integrated using PowerBroker Identity Services (PBIS), which used to be called Likewise-Open. So far I have identified the following symptoms: I am able to log in with my credentials via ssh, but the screen goes black when attempting to log into my account via the login screen. I've tried leaving the domain, purging PBIS, and re-installing the latest version of PBIS. I've been working through the troubleshooting section I found here, but I haven't had any success. The relevant portion of the auth.log:

        Oct 22 09:30:26 mypc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "myusername"
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm-greeter:session): session closed for user lightdm
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session opened for user myusername by (uid=0)
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session closed for user myusername
        Oct 22 09:30:30 mypc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
        Oct 22 09:30:30 mypc systemd-logind[718]: New session c5 of user lightdm.
        Oct 22 09:30:30 mypc lightdm: pam_ck_connector(lightdm-greeter:session): nox11 mode, ignoring PAM_TTY :1
        Oct 22 09:30:31 mypc dbus[535]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.129" (uid=110 pid=5139 comm="/usr/lib/x86_64-linux-gnu/indicator-keyboard-servi") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.39" (uid=0 pid=2024 comm="/usr/sbin/console-kit-daemon --no-daemon ")

    My .xsession-errors log:

        Script for ibus started at run_im.
        Script for auto started at run_im.
        Script for default started at run_im.
        /usr/sbin/lightdm-session: 5: exec: init: not found


  • Cannot mount a CIFS network share over VPN

    - by Aron Rotteveel
    I have set up a VPN connection to our Windows 2008 server at the office and it seems to work fine. For some reason, however, I am still not able to access the network shares over the VPN using my standard fstab entries. When I am physically connected to the network it works fine, but when trying this over VPN I get the following error:

        mount error(110): Connection timed out
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    My /etc/fstab looks like this:

        //server2008/share /mnt/share cifs iocharset=utf8,credentials=/home/aron/.smbcredentials,uid=1000 0 0

    As said, it works fine when physically connected, but over VPN it just won't work. Any help is appreciated.

    EDIT: It seems the Windows firewall is making things harder on me. When I turn it off, I get a bit further, although I still get the following error message: "Unable to find suitable address." The strange thing is that I have file sharing added as an exception to the firewall. Ports 137-139 and 445 are open, which should suffice, shouldn't it?

    EDIT Jan 20th: Still not working. When I have the firewall turned on, it times out. When I turn it off, I get the "not suitable address" error. Turning the firewall off is not an option, by the way.


  • 1st birthday - new content

    - by Lajos Sárecz
    About a year ago, at the beginning of June, I started writing this blog. Over the past year the blog had 3,395 unique visitors, and visitors spent an average of 59 seconds here per visit. It must also be admitted that the bounce rate is quite high (83.5%), i.e. fewer than a fifth of those who wander in find the content useful. One important reason for this may be that searches for IT terms surface my blog for foreign readers too, so they only discover after opening it that they don't understand Hungarian :-). A map of visits shows nicely that people find their way here from many parts of the world (some even use a web translation tool to overcome the language barrier), but it is also clear that the target audience is being reached, i.e. the majority of visitors come from Hungary (3,505 visits out of the total 5,082). Interestingly, most visitors arrived by searching for "vb tippjáték" (the World Cup tipping game, 110 visitors), followed by "oracle junior képzés" (Oracle junior training, 42) and "kyte" (40) (they were not necessarily thinking of Tom Kyte :-)). Only in 4th place is the first real IT search, "oracle workflow" (37), followed by "oracle enterprise manager" (33). The daily visitor record (76) is also owed to those searching for the tipping game... For the 1st birthday I tried to introduce something new: in the menu bar under the header, I am collecting useful pages that relate to the blog's topic but may not be so easy to find on the web. I plan to expand this further, and perhaps create a separate page for this purpose behind the blog. I hope this helps the work of the blog's readers.


  • CEO Taken Captive in His Own Factory?

    - by Stephen Slade
    Last Friday was no ordinary day for Chip Starnes, the 42-year-old owner of the Specialty Medical Supplies factory in China. He recently announced moving some of the production of the company's diabetes testing equipment from Beijing to Mumbai, India. Of the 110 employees at the facility, about 80 protested by blocking the doors and refusing to let Chip Starnes out of the facility. He has been trapped in his office for several days now. The employees thought the factory was closing, but Mr. Starnes said it was not. Misinformation? Poor communication? Work stoppage. This is a good example of supply chain disruption: parked cars are blocking the entrance to the facility, the front gates are chained closed, and the CEO is a prisoner in his own factory. Chip Starnes was presented with documents to sign, in Chinese, indicating he would pay severance and other demands he did not understand, possibly bankrupting the company. If you depend on supply from China and other foreign suppliers, how reliable are your sources? For example, how are the shop-floor employee relations? Is it possible to predict these types of HR risks and plan around them? What are your contingencies? It's important to ask the right questions and hear good answers. Having tools in place to rapidly evaluate, assess, and react to these disruptions is key to survival. Hear how leading organizations are reinforcing their supply chains and mitigating risk through technology with Oracle's latest release of Oracle Supply Chain Management. Source: WSJ pg. B1, June 25, 2013


  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then, and hence v1.0.1.0 is now available for download from Codeplex.

    What's new in this release. This release includes the following enhancements:

    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document holding all [event_message_context] info (if any exists)

    Installation instructions:

    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac
    2. Unzip to a folder of your choosing
    3. Open a command prompt and change to the directory into which you unzipped the files
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server; change as appropriate)

    If everything works OK you'll see output similar to the screenshots in the original post (it differs slightly depending on whether the target database already exists or not). This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet


  • How do I re-enable the backlight?

    - by Scott Severance
    Since Oneiric, if I leave my machine (HP Mini 110 netbook) unattended and it goes into power-save mode, the backlight gets disabled. How can I turn it back on? Note that the keyboard backlight controls (Fn+F4 and Fn+F3) don't have any effect in this situation. I've already filed a bug, but filing a bug doesn't fix my problem. I tried this workaround posted in this bug report dealing with Acer laptops:

        sudo setpci -s 00:02.0 F4.B=0

    However, if anything, that command makes things worse. In the general case, I can see a little bit if I'm in a dark room with a flashlight aimed just so. But after running setpci I can't see anything. And I find the setpci documentation to be utterly incomprehensible, so I don't know whether I need to tweak my command somehow or whether I'm completely barking up the wrong tree. Update: I've found a workaround: I'm now booting with the kernel parameter acpi=off. This disables power management, which prevents the machine from going into power-saving mode and thus failing to come back up correctly. Of course, not having power management means that I can't use suspend or do anything to manage power other than powering it off (even then, I have to manually use the power switch). Also, it prevents me from using Unity 3D or Gnome Shell, forcing me into Unity 2D or Gnome Classic. So, I'd really like to be able to stop using this hack.


  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue and need help with the decision-making process for setting up my cluster configuration, consisting of a line of Ubuntu servers (12.04). We currently have a primary node, which resides in a US datacenter; we are going to use it for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:

    1. Intel P4 2.4 GHz, 1GB RAM, 110 GB SATA, Ubuntu 12.04 (currently IN USE, serving virtual hosts off a subdomain)
    2. AMD 1.3 GHz, 512MB RAM, 20 GB IDE
    3. P3 Xeon 800 MHz (dual physical processors), 1GB RAM, 3 x 25 GB disks in a RAID configuration (one in use for the host operating system)

    My question is this: how can I integrate the secondary node (which will be the primary node, so to speak, in this smaller configuration), currently in use, into a cluster configuration with the other two servers for sharing resources, redundancy (HA?), and NFS with the two RAID disks, without having to FORMAT the secondary node and start fresh, moving all my services onto a DRBD network drive or something similar and then restoring all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand all services need to be on an NFS so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD configuration). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!


  • Recovering data from /

    - by Abhijit Gavas
    I accidentally installed Ubuntu onto one of my data drives from Windows. The drive was an NTFS drive and contained about 80 GB of important data. The size of the drive is 110 GB; its new file system is ext4. In an attempt to recover the data, I downloaded foremost and tried the following commands:

        foremost -i / -o /media/281C8DB01C8D7998/Recovery/ -T -v
        foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v

    (sda7 is the drive in question.) It appears that with either command, foremost gets stuck reading some file. Here is the console output:

        abhi@abi-PC:/dev$ foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v
        Foremost version 1.5.7 by Jesse Kornblum, Kris Kendall, and Nick Mikus
        Audit File
        Foremost started at Fri Sep 28 20:58:00 2012
        Invocation: foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v
        Output directory: /media/281C8DB01C8D7998/Recovery_Fri_Sep_28_20_58_00_2012
        Configuration file: /etc/foremost.conf
        Processing: stdin
        |------------------------------------------------------------------
        File: stdin
        Start: Fri Sep 28 20:58:00 2012
        Length: Unknown
        Num Name (bs=512) Size File Offset Comment
        Killed

    As you can see, I have to kill it from the system monitor. This approach does not seem to be working. What else could I try to recover the files? Please help. The files are very important and I will be devastated if I cannot recover them.


  • Microeconomical simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like Chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. Mini-max). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. Checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget).

    How do I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply -- but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents.

    A great example of what I do not want is the trading algorithm in SugarScape -- paraphrasing from Growing Artificial Societies p101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to. If this price makes both agents better off, they complete the transaction.

    The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability for an agent to pay more than it might otherwise for a good, because it knows that it can sell it for even more at a later date -- what appears to be called "strategic thinking" in this paper at Google Books: Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003. To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system which could at best only cover situations that were planned for at compile time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades.

    Note: The chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal mini-max search wouldn't be appropriate -- and ideally, it should work with more than two agents.
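
    The page offers no answer, but a minimal sketch of the "strategic thinking" half may help frame the question: a single agent searching a small state tree of buy/sell/hold actions against an assumed price path, so it pays above its immediate valuation now when a later resale more than compensates (everything here is a hypothetical simplification; how prices form between agents is exactly the open part of the question):

        # Toy lookahead over buy/sell/hold against an assumed price path.
        def best_value(money, widgets, prices, depth=0):
            if depth == len(prices):
                return 10 * money + widgets  # F(money, widget) from the question
            p = prices[depth]
            options = [best_value(money, widgets, prices, depth + 1)]              # hold
            options.append(best_value(money - p, widgets + 1, prices, depth + 1))  # buy
            if widgets > 0:
                options.append(best_value(money + p, widgets - 1, prices, depth + 1))  # sell
            return max(options)

        # With a rising assumed price path, buying at 5 and reselling at 12
        # yields 1070, beating 1000 for holding cash the whole time.
        print(best_value(100, 0, prices=[5, 8, 12]))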


  • Intel N10 graphics

    - by Rapsag1980
    Good day. I installed Ubuntu 12.04 on a netbook, but I have the problem that it only offers two screen resolutions, 800x600 and 1024x768. At the first the picture looks very coarse, and at the second it looks fine but a strip of the screen is missing at the top and bottom. I have tried to find information on the subject, but it seems to be one of those bugs that have not yet been eradicated. I tried creating an xorg.conf and the like, and it simply cannot be done. I appeal to your wisdom and experience with this kind of problem. The netbook is a Lanix Neuron LT with an Intel Atom N450 processor and an Intel Corporation N10 family integrated graphics controller.


  • Tracking subdomains in the same profile as the main domain

    - by Osvaldo
    I have a site, let's call it http://www.example.com, with a non-Universal Google Analytics account. Now we have to add new functionality on a subdomain, https://subdomain.example.com, as a micro-site. On that subdomain the URLs will be something like https://subdomain.example.com?param1=foo&param2=bar. We can't change these requirements, as the main site and the mini-site use different CMS/applications. This is strictly a Google Analytics question, but we need to count pageviews and events that happen on that subdomain (with URLs like https://subdomain.example.com?param1=foo&param2=bar) as belonging to the main domain. So pageviews and events in https://subdomain.example.com?param1=foo&param2=bar need to be recorded as if they happened at http://www.example.com/path/to/whatever/I/want. Fortunately we have full control over JavaScript on the main domain site and on the subdomain site too. How can we make this work? Do we need to change the tracking code on the main domain, on the subdomains, or both? Do we need to reconfigure Google Analytics? Please note again that we do not want to create a new view for the subdomain; both the mini-site and the main site should be in the same account, property, and view.


  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20-50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance. But the performance was very much unexpected.

    Menu screen (150 tris, just to show the full-speed rendering performance of the two test computers):
    - NVidia Quadro FX 3800 with 1GB: 1600 FPS DirectX, 2600 FPS OpenGL
    - Mac Mini with GeForce 9400M 256MB: 260 FPS OpenGL

    Inside the game, in a test level (160 dynamic objects, around 10K tris):
    - NVidia Quadro FX 3800 with 1GB: 45 FPS DirectX, 50 FPS OpenGL
    - Mac Mini with GeForce 9400M 256MB: 45 FPS OpenGL

    Obviously we don't have the option of batched mesh rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information: we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we understood it is the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?
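
    Not an answer given on the page, but the usual first remedy for exactly this profile (few unique meshes, many instances) is to cut per-object draw calls and state changes by grouping identical mesh/texture pairs and submitting each group together, with hardware instancing where available. A sketch of the shape of that loop (the renderer API here is hypothetical pseudocode, not Irrlicht's):

        from collections import defaultdict

        # Group objects that share a mesh and texture, bind that state once per
        # group, and submit the group's transforms in a single instanced call.
        def draw_scene(renderer, objects):
            groups = defaultdict(list)
            for obj in objects:
                groups[(obj.mesh_id, obj.texture_id)].append(obj.transform)
            for (mesh_id, texture_id), transforms in groups.items():
                renderer.bind(mesh_id, texture_id)            # one state change per group
                renderer.draw_instanced(mesh_id, transforms)  # one call covers N objects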


  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu and looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad. Though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?
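
    One common approach, not from the original question: treat each report as a noisy observation with a per-reporter trust weight and a recency decay, and estimate the current value as a weighted average. A minimal sketch (the trust weights and half-life below are hypothetical):

        import time

        # (value_lbs, reporter_trust, timestamp) for the Daughter's weight
        observations = [
            (125, 0.5, time.time() - 3 * 86400),  # Dad, rough guess
            (118, 0.8, time.time() - 2 * 86400),  # Mom
            (110, 0.2, time.time() - 1 * 86400),  # Daughter, vanity discount
            (117, 0.9, time.time()),              # Mom, after the doctor's visit
        ]

        HALF_LIFE = 30 * 86400  # a report counts half as much every 30 days

        def estimate(obs, now=None):
            now = now or time.time()
            num = den = 0.0
            for value, trust, ts in obs:
                w = trust * 0.5 ** ((now - ts) / HALF_LIFE)
                num += w * value
                den += w
            return num / den

        print(round(estimate(observations), 1))  # blends toward recent, trusted reports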


  • What Is The Formula for the 3 of 9 Bar Code Alphabet?

    - by Chris Moschini
    Background: 3 of 9 Barcode Alphabet; a simple syntax for 3 of 9 bar codes. What is the formula behind the alphabet and digits in a 3 of 9 bar code? For example, ASCII has a relatively clear arrangement: printable characters start at 33, capitals at 65, lowercase at 97. From these starting points you can infer the ASCII code for any number or letter. The start point for each range is also a multiple of 32, plus 1. Bar codes seem random and lacking sequence. If we use the syntax from the second link, these are the first six characters in 3 of 9:

        A 100-01
        B 010-01
        C 110-00
        D 001-01
        E 101-00
        F 011-00

    I see no pattern here; what is it? I'm as much interested in the designer's intended pattern behind these as I am in someone devising an algorithm of their own that can give you the above code for a given character based on its sequence. I struggled with where to put this question; is it history, computer science, information science? I chose Programmers because a StackExchange search had the most barcode hits here, and because I wanted to specifically relate it to ASCII to explain what sort of formula/explanation I'm looking for.
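
    There is in fact a designed pattern, even though the page never answers it: the five digits in the question's notation are the wide/narrow flags of the five bars, and they follow the classic weighted two-out-of-five scheme. A sketch that regenerates the six entries above from that rule (the space bits, which select the A-J/K-T/U-Z block, are not shown in the question's notation and are only summarized in the comments):

        from itertools import combinations

        # Code 39 bar halves form a weighted 2-of-5 code: exactly two of the five
        # bars are wide, positions carry weights 1,2,4,7,0, and the two wide
        # positions' weights sum to the character's index (11 stands for 0).
        # The ten patterns repeat in blocks (1-9,0 then A-J, K-T, U-Z...), with
        # the position of the single wide space distinguishing the blocks.
        WEIGHTS = [1, 2, 4, 7, 0]

        def bar_pattern(n):
            target = 11 if n == 0 else n
            for i, j in combinations(range(5), 2):
                if WEIGHTS[i] + WEIGHTS[j] == target:
                    bits = ["0"] * 5
                    bits[i] = bits[j] = "1"
                    return "".join(bits)

        # A..F reuse the bar patterns of 1..6; printed in the question's notation:
        for k, letter in enumerate("ABCDEF"):
            p = bar_pattern(k + 1)
            print(letter, p[:3] + "-" + p[3:])  # A 100-01, B 010-01, C 110-00, ...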


  • Sub-systems in game engines

    - by Hillel
    So here's the problem: I'm writing my own engine library, and it works fine for things like menus and the actual game screen. The thing is, I can't really figure out how to integrate something like an intro, or a dialogue preceding certain levels, into this system. Let's look at another example: say I have a game-specific engine which gets a Level object and runs it. The engine has its own collision and physics systems, all hard-coded. Now, what if at some point in a level I want the player to enter a mini-game with different rules? How do I morph the Engine class to support these sub-systems without having to deal with their code all the time (as in: if(regular game) ... else if(mini game) ...)? And what if I want an intro animation at the start of a level, with the player assuming control of his character once the animation ends: do I implement the animation in the Engine class itself? Or maybe I run another class, CutScene, and when it ends it calls Engine and starts the level? What if I want to add a dialogue system, where at the start of each level there is a short dialogue during which the player can't control his character, and once it ends, he can? Would I then run the dialogue code inside the Engine code? Maybe these sub-systems should all be scripted? I don't know anything about scripting; is it necessary for this kind of situation? Any help would be appreciated.
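
    Not from the original question, but the usual shape of the answer is a stack of game states, where intros, dialogues, and mini-games are simply states pushed on top of the level and popped when finished, so the engine never branches on what kind of content is running. A minimal sketch with hypothetical state classes:

        # The engine only ever updates the top state; a state pops itself when done.
        class StateStack:
            def __init__(self):
                self.states = []

            def push(self, state):
                self.states.append(state)

            def pop(self):
                self.states.pop()

            def update(self, dt):
                if self.states:
                    self.states[-1].update(dt, self)

        class CutScene:
            def __init__(self, length):
                self.remaining = length

            def update(self, dt, stack):
                self.remaining -= dt
                if self.remaining <= 0:
                    stack.pop()  # hand control back to the state below

        class Level:
            def update(self, dt, stack):
                pass  # normal gameplay: input, physics, collision

        stack = StateStack()
        stack.push(Level())
        stack.push(CutScene(length=3.0))  # plays first; Level resumes afterwards
        for _ in range(4):
            stack.update(1.0)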


  • Location Change Salary Differences [closed]

    - by GameDev
    DISCLAIMER: I know this might be a "regional" question, but I'm also asking for help on which resources to use to inform my decision. I'm currently talking to a recruiter for a game developer in the SF Bay Area. I work in a relatively low-cost area in the South. I really want to get into game development, but my current career is general web development. I'm very interested in taking the job, but my concern is that the amount they're willing to pay might amount to a relative pay cut. Here are some factors: It's not an entry-level position; the title is Senior Software Engineer, and I have 5+ years of experience. The cost-of-living calculators online (http://www.bestplaces.net/col/) tell me that I should be expecting around 2x my current pay. My current pay is in the mid $60k/yr range, so that's like $120-130k. The recruiter told me that at my experience level I can expect about $90-100k/yr, and that those cost-of-living calculators are way off. The benefits will definitely be better; it's a much larger company (help with commuting, catered meals, etc.). But is the recruiter trying to give me a snow job on the pay scale, or is that a reasonable change from a smallish town in the South to somewhere in the SF Bay Area? How can I find this out? Glassdoor seems to say "senior software developers" in that area make around $110k median salary, but Payscale says it's closer to $135k; that range seems pretty large.

