Search Results

Search found 1748 results on 70 pages for 'branch prediction'.


  • Bazaar offline + branches

    - by cheez
    I have a Bazaar repository on Host A with multiple branches. This is my main repository. Until now, I have been doing checkouts on my other machines and committing directly to the main repository. However, now I am consolidating all my work to my laptop and multiple VMs. I need to be working offline regularly. In particular, I need to create/delete/merge branches all while offline. I was thinking of continuing to have the master on Host A, with a clone of the repository on the laptop and each VM doing checkouts of the clone. Then, when I go offline, I could do bzr unbind on the clone and bzr bind when I am back online. This failed as soon as I tried to bzr clone, since bzr clone only clones a single branch(!) I need some serious help. If Hg would handle this better please let me know (I need Windows support.) However, at this moment I cannot switch from Bazaar as it is too close to some important deadlines. Thanks in advance!
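    A rough sketch of the kind of layout that might support this, using Bazaar's shared repository plus bind/unbind (all paths, URLs and branch names below are made up for illustration, and this is an assumption about a workable setup rather than a tested recipe):

        rem On the laptop: a shared repository that can hold every branch locally
        bzr init-repo D:\work\repo
        cd D:\work\repo
        bzr branch bzr+ssh://hostA/srv/bzr/project/trunk trunk
        bzr branch bzr+ssh://hostA/srv/bzr/project/feature-x feature-x

        rem While online, bind a branch so commits also land on Host A
        cd trunk
        bzr bind bzr+ssh://hostA/srv/bzr/project/trunk

        rem Before going offline: commits stay local until pushed
        bzr unbind
        bzr commit -m "offline work"

        rem Back online again
        bzr push bzr+ssh://hostA/srv/bzr/project/trunk

    The VMs could then do bzr checkout against the branches inside the laptop's shared repository, mirroring the checkout-of-a-clone arrangement described above.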

    Read the article

  • Expressionengine 2 and git (version control)

    - by Danny
    Hey guys, I’m looking to move over to using git to make my EE development a lot easier and more manageable. I’m already aware of the guides posted on devotee and a few other sites, but after scanning over them they seem a little old and seem to be specifically for EE 1.x, so I was wondering if anyone had been successful with EE 2. I’ve only recently made the transition from svn to git; previously I found that using EE via svn was a real pain: so many config conflicts, wrong URLs, and all versions of the site were using the same database. I’m basically looking for the best, or should I say the ideal, way to set up both git and EE to work in harmony together. I’d also like to learn how to branch other sites I develop with EE from this, so if anyone has experience with this that’d be great! Also, if it’s any use, I’m hosted by Dreamhost. As far as I understand they support git, and I’ve looked over their knowledge base on how best to set things up. Would anyone recommend their way of doing things, and has anyone had a successful experience whilst doing so? I look forward to hearing your responses! Thanks. Sent from my iPhone, whilst falling asleep, so excuse the possible typos!

    Read the article

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from github and added two new features. The first feature has been requested frequently elsewhere. I have added it. Code & implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to fully implement it properly or to a level that could be considered a worthwhile contribution to the main project. How should the versioning work? Do I just bump up my version numbers carefree and push to my master branch? It is annoying not knowing which version is running, modified or original, as both have the same version number. But will it be confusing when, months later, my github page has a version number the same as the original but both are actually completely different? (I have made pull requests etc. but that is not the context of my question.) The project I have forked uses ruby jeweler, so it has a versioning format of: "Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version." Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • WPF app startup problems

    - by Dave
    My brain is all over the map trying to fully understand Unity right now. So I decided to just dive in and start adding it in a branch to see where it takes me. Surprisingly enough (or maybe not), I am stuck just getting my darn Application to load properly. It seems like the right way to do this is to override OnStartup in App.cs. I've removed my StartupUri from App.xaml so it doesn't create my GUI XAML. My App.cs now looks something like this:

        public partial class App : Application
        {
            private IUnityContainer container { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                container = new UnityContainer();
                GUI gui = new GUI();
                gui.Show();
            }

            protected override void OnExit(ExitEventArgs e)
            {
                container.Dispose();
                base.OnExit(e);
            }
        }

    The problem is that nothing happens when I start the app! I put a breakpoint at the container assignment, and it never gets hit. What am I missing? App.xaml is currently set to ApplicationDefinition, but I'd expect this to work because some sample Unity + WPF code I'm looking at (from Codeplex) does the exact same thing, except that it works! I've also started the app by single-stepping, and it eventually hits the first line in App.xaml. When I step into this line, that's when the app just starts "running", but I don't see anything (and my breakpoint isn't hit). If I do the exact same thing in the sample application, stepping into App.xaml puts me right into OnStartup, which is what I'd expect to happen. Argh! Is it a Bad Thing to just put the Unity construction in my GUI's Window_Loaded event handler? Does it really need to be at the App level?

    Read the article

  • Scala always returning true....WHY?

    - by jhamm
    I am trying to learn Scala and am a newbie. I know that this is not optimal functional code and welcome any advice that anyone can give me, but I want to understand why I keep getting true for this function.

        def balance(chars: List[Char]): Boolean = {
          val newList = chars.filter(x => x.equals('(') || x.equals(')'));
          return countParams(newList, 0)
        }

        def countParams(xs: List[Char], y: Int): Boolean = {
          println(y + " right Here")
          if (y < 0) {
            println(y + " Here")
            return false
          } else {
            println(y + " Greater than 0")
            if (xs.size > 0) {
              println(xs.size + " this is the size")
              xs match {
                case xs if (xs.head.equals('(')) => countParams(xs.tail, y + 1)
                case xs if (xs.head.equals(')')) => countParams(xs.tail, y - 1)
                case xs => 0
              }
            }
          }
          return true;
        }

        balance("()())))".toList)

    I know that I am hitting the false branch of my if statement, but it still returns true at the end of my function. Please help me understand. Thanks.
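    A minimal sketch of my own (not the original poster's code) that may help explain the behaviour: the values of the recursive countParams calls above are computed and then discarded, because nothing returns the result of the if/match expression, so the outer call always falls through to the final return true. Making the match expression the method's result avoids that:

        def countParams(xs: List[Char], y: Int): Boolean =
          if (y < 0) false                      // a ')' appeared before a matching '('
          else xs match {
            case '(' :: tail => countParams(tail, y + 1)
            case ')' :: tail => countParams(tail, y - 1)
            case _           => y == 0          // end of list: balanced only if the count is zero
          }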

    Read the article

  • Is this a situation where I should "hg push -f"?

    - by user144182
    I have two machines, A and B that both access an external hg repository. I did some development on A, wasn't ready to push changesets to the external, and needed to switch machines, so I pushed the changesets to B using hg serve. Changesets continued on B, were committed and then pushed to external repo. I then pulled on A and updated to default/tip. This left the local changesets that had previously been pushed to B as a branch, but because of how I pushed things around, the changes in the local changesets are already in default/tip. I've now continued to make changes and commit locally on A, but when I try to push hg asks me to merge or do push -f instead. I know push -f is almost never recommended. This situation is close to one where I should use rebase, however the changesets that would be "rebased" I don't really need locally or in the external repository since they are already effectively in default/tip via the push to B. Now, I know I could merge with the latest local changeset and just discard the changes, but then I would still have to commit the merge which gets me back into rebase territory. Is this a case where I could do hg push -f? Also, why would pushing from A create remote heads if I've updated to default/tip before I continued to commit changesets?

    Read the article

  • Java: refactoring static constants

    - by akf
    We are in the process of refactoring some code. There is a feature that we have developed in one project that we would like to now use in other projects. We are extracting the foundation of this feature and making it a full-fledged project which can then be imported by its current project and others. This effort has been relatively straightforward, but we have one headache. When the framework in question was originally developed, we chose to keep a variety of constant values defined as static fields in a single class. Over time this list of static members grew. The class is used in very many places in our code. In our current refactoring, we will be elevating some of the members of this class to our new framework, but leaving others in place. Our headache is in extracting the foundation members of this class to be used in our new project, and more specifically, how we should address those extracted members in our existing code. We know that we can have our existing Constants class subclass this new project's Constants class and it would inherit all of the parent's static members. This would allow us to effect the change without touching the code that uses these members to change the class name on the static reference. However, the tight coupling inherent in this choice doesn't feel right. Before:

        public class ConstantsA {
            public static final String CONSTANT1 = "constant.1";
            public static final String CONSTANT2 = "constant.2";
            public static final String CONSTANT3 = "constant.3";
        }

    After:

        public class ConstantsA extends ConstantsB {
            public static final String CONSTANT1 = "constant.1";
        }

        public class ConstantsB {
            public static final String CONSTANT2 = "constant.2";
            public static final String CONSTANT3 = "constant.3";
        }

    In our existing code branch, all of the above would be accessible in this manner: ConstantsA.CONSTANT2. I would like to solicit arguments about whether this is 'acceptable' and/or what the best practices are.
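    As a small illustration of why the subclassing trick compiles (a hypothetical client class of my own, not code from the project in question): Java resolves a static field referenced through a subclass name to the field inherited from the parent at compile time, so existing call sites keep working, and referencing the new class directly (or via a static import) is one way to migrate call sites away from the inheritance coupling over time.

        // Hypothetical client code; it compiles unchanged both before and after the split.
        public class ConstantsClient {
            public static void main(String[] args) {
                // Resolves to the field ConstantsA inherits from ConstantsB.
                System.out.println(ConstantsA.CONSTANT2);
                // Direct reference to the new framework class works as well.
                System.out.println(ConstantsB.CONSTANT2);
            }
        }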

    Read the article

  • Is there an algorithm for finding an item that matches certain properties, like a 20 questions game?

    - by lala
    A question about 20 questions games was asked here. However, if I'm understanding it correctly, the answers seem to assume that each question will go down a hierarchical branching tree. A binary tree should work if the game went like this: Is it an animal? Yes. Is it a mammal? Yes. Is it a feline? Yes. Because feline is an example of a mammal and mammal is an example of an animal. But what if the questions go like this? Is it a mammal? Yes. Is it a predator? Yes. Does it have a long nose? No. You can't branch down a tree with those kinds of questions, because there are plenty of predators that aren't mammals. So you can't have your program just narrow it down to mammal and have predators be a subset of mammals. So is there a way to use a binary search tree that I'm not understanding, or is there a different algorithm for this problem?

    Read the article

  • Generating a python file

    - by fema
    Hi! I'm having issues with Python (imo Python itself is an issue). I have a txt file containing a custom language, and I have to translate it into working Python code. The input: import sys n = int(sys.argv[1]) ;;print "Beginning of the program!" LOOP i in range(1,n) {print "The number:";;print i} BRANCH n < 5 {print n ;;print "less than 5"} The wanted output looks exactly like this: import sys n = int(sys.argv[1]) print "Beginning of the program!" for i in range(1,n) : print "The number:" print i if n < 5 : print n print "less than 5" The name of the input file is read from a parameter. The output file is out.py. In case of a wrong parameter, it gives an error message. The ;; means a new line. When I tried to do it, I made an array and read all the lines into it, split by " ". Then I wanted to strip out the marks I don't need. I made 2 loops, one for the lines and one for the words, and then I started to replace things. Everything went fine until it came to the } mark. It finds it, but it cannot replace or strip it. I have no more idea what to do. Could someone help me, please? Thanks in advance!
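    A minimal sketch of one way such a translator might look (my own assumptions: each statement of the custom language sits on its own line, a { ... } block follows its LOOP or BRANCH line on a line of its own, ;; is simply a line break, and nesting deeper than one level is not needed):

        import sys

        def translate_line(line):
            """Turn one line of the custom language into Python source."""
            line = line.strip()
            if line.startswith("LOOP"):
                # LOOP i in range(1,n)  ->  for i in range(1,n) :
                return "for " + line[len("LOOP"):].strip() + " :"
            if line.startswith("BRANCH"):
                # BRANCH n < 5  ->  if n < 5 :
                return "if " + line[len("BRANCH"):].strip() + " :"
            if line.startswith("{") and line.endswith("}"):
                # a {...} block becomes an indented body; ';;' separates the statements
                parts = [p.strip() for p in line[1:-1].split(";;") if p.strip()]
                return "\n".join("    " + p for p in parts)
            # ';;' at the top level is also just a line break
            return "\n".join(p.strip() for p in line.split(";;") if p.strip())

        def main():
            if len(sys.argv) != 2:
                sys.exit("usage: translate.py <input file>")
            try:
                with open(sys.argv[1]) as source:
                    lines = source.readlines()
            except IOError:
                sys.exit("error: cannot open " + sys.argv[1])
            with open("out.py", "w") as out:
                for line in lines:
                    if line.strip():
                        out.write(translate_line(line) + "\n")

        if __name__ == "__main__":
            main()

    Working on whole lines rather than splitting every line into words sidesteps the problem with the } mark, since the brace is only ever handled together with the block it closes.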

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now, I find that if I can dump a database into a file I could check it in and use it as just another type of file. The main conditions are:
    - Works for Oracle 9R2
    - Human readable, so we can use diff to see the differences (.dmp files don't seem readable)
    - All tables in a batch; we have more than 200 tables
    - It stores BOTH STRUCTURE AND DATA
    - It supports CLOB and RAW types
    - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms
    - It can be turned into an executable script to rebuild the database on a clean machine
    - Not limited to really small databases (supports at least 200,000 rows)
    It is not easy. I have downloaded a lot of demos that fail in one way or another. EDIT: I wouldn't mind alternative approaches, provided that they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years. Some approaches can be easily implemented when you make a fresh start but seem hard at this point. EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment. Or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge the changes into production.

    Read the article

  • Assigning a value to an integer in a C linked list

    - by Drunk On Java
    Hello all. I have a question regarding linked lists. I have the following structs and function, for example:

        struct node {
            int value;
            struct node *next;
        };

        struct entrynode {
            struct node *first;
            struct node *last;
            int length;
        };

        void addnode(struct entrynode *entry)
        {
            struct node *nextnode = (struct node *)malloc(sizeof(struct node));
            int temp;
            if (entry->first == NULL) {
                printf("Please enter an integer.\n");
                scanf("%d", &temp);
                nextnode->value = temp;
                nextnode->next = NULL;
                entry->first = nextnode;
                entry->last = nextnode;
                entry->length++;
            } else {
                entry->last->next = nextnode;
                printf("Please enter an integer.\n");
                scanf("%d", nextnode->value);
                nextnode->next = NULL;
                entry->last = nextnode;
                entry->length++;
            }
        }

    In the first part of the if statement, I store input into a temp variable and then assign that to a field in the struct. In the else branch, I tried to assign it directly, which did not work. How would I go about assigning it directly? Thanks for your time.
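    For what it's worth, a small self-contained sketch of the direct form (my own illustration, not a tested drop-in for the code above): scanf needs the address of the destination, so the else branch would pass &nextnode->value instead of nextnode->value.

        #include <stdio.h>
        #include <stdlib.h>

        struct node { int value; struct node *next; };

        int main(void)
        {
            struct node *nextnode = malloc(sizeof *nextnode);
            if (nextnode == NULL)
                return 1;
            printf("Please enter an integer.\n");
            /* scanf wants a pointer to the destination; &nextnode->value parses as
               &(nextnode->value), the address of the int member inside the node. */
            if (scanf("%d", &nextnode->value) == 1)
                printf("Stored %d directly in the node.\n", nextnode->value);
            free(nextnode);
            return 0;
        }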

    Read the article

  • How can I change the default startup directory for cmd.exe?

    - by Nano HE
    Hi. My procedure the other day was as below:
    1. Click Start, Run and type Regedit.exe
    2. Navigate to the following branch: HKEY_CURRENT_USER \ Software \ Microsoft \ Command Processor
    3. In the right pane, double-click Autorun and set the startup folder path as its data, preceded by "CD /d ". If the Autorun value is missing, you need to create one, of type REG_EXPAND_SZ or REG_SZ, in the above location. Example: to set the startup directory to D:\learning\perl, set the Autorun value data to CD /d D:\learning\perl
    Then I clicked Start, Run and typed cmd. It worked, and I could practice Perl more conveniently. But today I found that when I try to build my Visual Studio 2005 solution, which includes some pre-build event commands like perl.exe MyAppVersion.pl and perl.exe AttrScan.pl, it doesn't work. It shows an error: can't find the path. I checked the environment variable settings and found that the path variable and its value (c:\perl\bin\;) still exist. Finally, I tried removing the "Autorun" value from the registry and tested again. The issue was fixed. I only changed the default startup directory for cmd.exe. Why was the pre-build event perl command impacted? (I am using Windows XP and ActivePerl 5.8.)
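    For reference, a sketch of the same registry change made from a command prompt instead of Regedit (the value data and path are just the example from the question; the second command is the removal that resolved the issue here). One likely explanation, stated as my assumption rather than fact, is that AutoRun runs for every cmd.exe instance, including the ones Visual Studio spawns for pre-build events, so their working directory is changed and relative paths to the scripts no longer resolve.

        rem Make every new cmd.exe start in D:\learning\perl
        reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "cd /d D:\learning\perl" /f

        rem Remove the value again
        reg delete "HKCU\Software\Microsoft\Command Processor" /v AutoRun /f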

    Read the article

  • how to trigger a script located on a machine in one domain from a machine on another domain

    - by user326814
    Hi, I am basically from QA. What we testers do each day is:
    1. Open a web browser and type in http://11.12.13.27.8080/cruisecontrol (since we are in a particular network, only we can access this).
    2. Check if the latest nightly build has been successful. If it is successful, deploy it on a test environment by clicking on the 'Deploy this build' link.
    This deployment takes around 1-1.5 hours. During this time we cannot use our machines to work on anything else. Only after this deployment can we begin to test. Now, I wanted to know if it is possible to do the following. When at home in the morning, I use something which will trigger a script (which will be on my machine at the workplace). This script will in turn automatically deploy the build. I already have such a script. What I want to know is how it is possible to trigger this script from my home machine. Is it even possible? For example, the external trigger will say "Deploy xxx branch on yyy test environment". So the script on my workplace machine will be invoked and it will automatically deploy it before I actually come to my desk. Please help. I am from QA and have no idea about all this.

    Read the article

  • Remove file from history completely

    - by Iain
    A colleague has done a few things I told them not to do: forked the origin repo online, cloned the fork, added a file that shouldn't have been added to that local repo, and pushed this to their fork. I've then merged the changes from the fork and found the file. I want to remove this from: my local repo, the fork, and their local repo. I have a solution for removing something from the history, taken from Remove file from git repository (history). What I need to know is, should my colleague also go through this, and will a subsequent push remove all info from the fork? (I'd like an alternative to just destroying the fork, as I'm not sure my colleague will do this.)
    SOLUTION: This is the shortest way to get rid of the files:
    - check .git/packed-refs - my problem was that I had a refs/remotes/origin/master line there for a remote repository; delete it, otherwise git won't remove those files
    - (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files
    - (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are
    - git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions
    - rm -rf .git/refs/original/ - to remove git's backup
    - git reflog expire --all --expire='0 days' - to expire all the loose objects
    - (optional) git fsck --full --unreachable - to check if there are any loose objects
    - git repack -A -d - to repack the pack
    - git prune - to finally remove those objects

    Read the article

  • NHibernate: Using value tables for optimization AND dynamic join

    - by Kostya
    Hi all, my situation is this: there are two entities with a many-to-many relation, e.g. Products and Categories. Also, categories have a hierarchical structure, like a tree. I need to select all products that belong to one specific category together with all of its children (the whole branch). So I use the following SQL statement to do that:

        SELECT *
        FROM Products p
        WHERE p.ID IN (
            SELECT DISTINCT pc.ProductID
            FROM ProductsCategories pc
            INNER JOIN Categories c ON c.ID = pc.CategoryID
            WHERE c.TLeft >= 1 AND c.TRight <= 33378
        )

    But with a big set of data this query takes very long to execute, and I found a way to optimize it, shown here:

        DECLARE @CatProducts TABLE ( ProductID int NOT NULL )

        INSERT INTO @CatProducts
        SELECT DISTINCT pc.ProductID
        FROM ProductsCategories pc
        INNER JOIN Categories c ON c.ID = pc.CategoryID
        WHERE c.TLeft >= 1 AND c.TRight <= 33378

        SELECT *
        FROM Products p
        INNER JOIN @CatProducts cp ON cp.ProductID = p.ID

    This query executes very fast, but I don't know how to do that with NHibernate. Note that I need to use only ICriteria because of dynamic filtering/ordering. If someone knows a solution for that, it will be fantastic, but I'll be pleased with any suggestions of course. Thank you ahead, Kostya
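    For what it's worth, a sketch of how the IN-subquery form (the first, unoptimized query) maps onto the Criteria API via a DetachedCriteria. The entity and property names (Product, ID, a Categories collection on Product, Category.TLeft/TRight) and the open ISession named session are my assumptions about the mappings, and this does not reproduce the table-variable optimization, for which the Criteria API has no direct equivalent:

        // Assumes: using NHibernate; using NHibernate.Criterion; using System.Collections.Generic;
        DetachedCriteria productIds = DetachedCriteria.For<Product>()
            .CreateAlias("Categories", "c")
            .Add(Restrictions.Ge("c.TLeft", 1))
            .Add(Restrictions.Le("c.TRight", 33378))
            .SetProjection(Projections.Distinct(Projections.Property("ID")));

        ICriteria criteria = session.CreateCriteria(typeof(Product))
            .Add(Subqueries.PropertyIn("ID", productIds));

        // Dynamic filtering and ordering can still be appended to 'criteria' here.
        IList<Product> products = criteria.List<Product>();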

    Read the article

  • Do Distributed Version Control Systems promote poor backup habits?

    - by John
    In a DVCS, each developer has an entire repository on their workstation, to which they can commit all their changes. Then they can merge their repo with someone else's, or clone it, or whatever (as I understand it, I'm not a DVCS user). To me that flags a side-effect of being more vulnerable to forgetting to back up. In a traditional centralised system, both you as a developer and the people in charge know that if you commit something, it's held on a central server which can have decent backup solutions in place. But using a DVCS, it seems you only have to push your work to a server when you feel like sharing it. It's all very well that you have the repo locally so you can work on your feature branch for a month without bothering anyone, but it means (I think) that checking your code into the repo is not enough; you have to remember to do regular pushes to a backed-up server. It also means, doesn't it, that a team lead can't see all those nice SVN commit emails to keep a rough idea of what's going on in the code base? Is any of this a real issue?

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part, this was true except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting: When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me; I had always thought there was some SVN server daemon we were talking to. We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running. This share server was chosen because its drives are backed up nightly by our IT group. In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error. So now the question becomes, how does one upgrade a client-created 1.4.3 repo to a 1.6.x compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients in case we can't upgrade them all at the same time? Thanks for the help.

    Read the article

  • C# Neural Networks with Encog

    - by JoshReuben
    Neural Networks
    · I recently read a book, Introduction to Neural Networks for C#, by Jeff Heaton. http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell. Not the first ANN book I've perused, but a nice revision.
    · Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network, http://en.wikipedia.org/wiki/Category:Machine_learning
    · Problems not suited to a neural network solution: programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, problems in which you must know exactly how the solution was derived.
    · Problems suited to a neural network: pattern recognition, classification, series prediction, and data mining. Pattern recognition – the network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification – take input samples and classify them into fuzzy groups.
    · As far as machine learning approaches go, I think SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine) – a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights, and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are a black box.
    · In this post, I'm not going to go into internals (believe me, I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized.
    · Under the hood, there is very little maths. In a nutshell, some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) and transposed from an input column vector into a row vector; these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a boolean match result. For backpropagation training, a derivative function is required. In learning, hill-climbing mechanisms such as genetic algorithms and simulated annealing are used to escape local minima. For unsupervised training, such as found in self-organizing maps used for OCR, Hebb's rule is applied.
    · The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API – Encog.

    Encog
    · Encog is a neural network API.
    · Links to Encog: http://www.encog.org, http://www.heatonresearch.com/encog, http://www.heatonresearch.com/forum
    · Encog requires .NET 3.5 or higher – there is also a Silverlight version. Third-party libraries: log4net and nunit.
    · Encog supports feedforward, recurrent, self-organizing map, radial basis function and Hopfield neural networks.
    · Encog neural networks, and related data, can be stored in .EG XML files.
    · Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can generate code.

    Synapses and layers
    · These are the primary building blocks – almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer.
    · To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of a neural network, and receive the solution through the output layer of a neural network.
    · The input layer – for each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData-based class will be returned from the neural network's output layer.
    · To convert a double array into an INeuralData object:

        INeuralData data = new BasicNeuralData(new double[10]);

    · The output layer – the neural network outputs an array of doubles, wrapped in a class based on the INeuralData interface.
    · The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted.
    · Hidden layers – optional, between the input and output layers, and very much a "black box". If the structure of the hidden layer is too simple it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers; the input layer may be directly connected to the output layer. Further, some neural networks have only a single layer. A single-layer neural network has the single layer self-connected.
    · Connections, called synapses, contain individual weight matrixes. These values are changed as the neural network learns.

    Constructing a Neural Network
    · The XOR operator is a frequent "first example" – the "Hello World" application for neural networks.
    · The XOR operator only returns true when both inputs differ:

        0 XOR 0 = 0
        1 XOR 0 = 1
        0 XOR 1 = 1
        1 XOR 1 = 0

    · Structuring a neural network for XOR – two inputs to the XOR operator and one output.
    · Input: 0.0,0.0  1.0,0.0  0.0,1.0  1.0,1.0
    · Expected output: 0.0  1.0  1.0  0.0
    · A perceptron – a simple feedforward neural network to learn the XOR operator.
    · Because the XOR operator has two inputs and one output, the neural network will follow suit. Additionally, the neural network will have a single hidden layer, with two neurons to help process the data. The choice of 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error.
    · Neuron diagram for the XOR network (figure not included).
    · The Encog Workbench displays neural networks on a layer-by-layer basis.
    · Encog layer diagram for the XOR network (figure not included).
    · Create a BasicNetwork – three layers are added to this network. The FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers.

        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(1));
        network.Structure.FinalizeStructure();
        network.Reset();

    · Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off. Sometimes it may be necessary to reset the weights again, if training is ineffective. These weights make up the long-term memory of the neural network.
    Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values.
    · Now that the neural network has been created, it must be trained.

    Training a Neural Network
    · Construct an INeuralDataSet object – it contains the input array and the expected output array (of corresponding range). Even though there is only one output value, we must still use a two-dimensional array to represent the output.

        public static double[][] XOR_INPUT = {
            new double[2] { 0.0, 0.0 },
            new double[2] { 1.0, 0.0 },
            new double[2] { 0.0, 1.0 },
            new double[2] { 1.0, 1.0 }
        };

        public static double[][] XOR_IDEAL = {
            new double[1] { 0.0 },
            new double[1] { 1.0 },
            new double[1] { 1.0 },
            new double[1] { 0.0 }
        };

        INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);

    · Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training. Resilient Propagation (RPROP) is a general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network below 1%. Once the neural network has been trained, it is ready for use.

        ITrain train = new ResilientPropagation(network, trainingSet);

        for (int epoch = 0; epoch < 10000; epoch++)
        {
            train.Iteration();
            Debug.Print("Epoch #" + epoch + " Error:" + train.Error);
            if (train.Error < 0.01) break;   // stop once the error drops below 1%
        }

    Executing a Neural Network
    · Call the Compute method on the BasicNetwork class.

        Console.WriteLine("Neural Network Results:");
        foreach (INeuralDataPair pair in trainingSet)
        {
            INeuralData output = network.Compute(pair.Input);
            Console.WriteLine(pair.Input[0] + "," + pair.Input[1]
                + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
        }

    · The Compute method accepts an INeuralData class and also returns an INeuralData object.

        Neural Network Results:
        0.0,0.0, actual=0.002782538818034049,ideal=0.0
        1.0,0.0, actual=0.9903741937121177,ideal=1.0
        0.0,1.0, actual=0.9836807956566187,ideal=1.0
        1.0,1.0, actual=0.0011646072586172778,ideal=0.0

    · The network has not been trained to give the exact results. This is normal. Because the network was trained to 1% error, each of the results will also generally be within 1% of the expected value.

    Read the article

  • CodePlex Daily Summary for Monday, November 22, 2010

    CodePlex Daily Summary for Monday, November 22, 2010Popular ReleasesSQL Monitor: SQLMon 1.1: changes: 1.added sql job monitoring; 2.added settings save/loadASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager tested on mozilla, safari, chrome, opera, ie 9b/8/7/6DotSpatial: DotSpatial 11-21-2010: This release introduces the following Fixed bugs related to dispose, which caused issues when reordering layers in the legend Fixed bugs related to assigning categories where NULL values are in the fields New fast-acting resize using a bitmap "prediction" of what the final resize content will look like. ImageData.ReadBlock, ImageData.WriteBlock These allow direct file access for reading or writing a rectangular window. Bitmaps are used for holding the values. Removed the need to stor...Minemapper - dynamic mapping for Windows: Minemapper v0.1.0: Pan by: dragging the mouse using the buttons Zoom by: scrolling the mouse wheel using the buttons using the slider Night support Biome support Skylight support Direction support: East West Height slicingMDownloader: MDownloader-0.15.24.6966: Fixed Updater; Fixed minor bugs;WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...Smith Html Editor: Smith Html Editor V0.75: The first public release.MiniTwitter: 1.59: MiniTwitter 1.59 ???? ?? User Streams ????????????????? ?? ?????????????? ???????? ?????????????.NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for - object.CountLoopsToNull Added new extensions for DateTime: - DateTime.IsWeekend - DateTime.AddWeeks Added new extensions for string: - string.Repeat - string.IsNumeric - string.ExtractDigits - string.ConcatWith - string.ToGuid - string.ToGuidSave Added new extensions for Exception: - Exception.GetOriginalException Added new extensions for Stream: - Stream.Write (overload) And other new methods ... Release as of dotnetpro 01/2011Code Sample from Microsoft: Visual Studio 2010 Code Samples 2010-11-19: Code samples for Visual Studio 2010Prism Training Kit: Prism Training Kit 4.0: Release NotesThis is an updated version of the Prism training Kit that targets Prism 4.0 and added labs for some of the new features of Prism 4.0. This release consists of a Training Kit with Labs on the following topics Modularity Dependency Injection Bootstrapper UI Composition Communication MEF Navigation Note: Take into account that this is a Beta version. If you find any bugs please report them in the Issue Tracker PrerequisitesVisual Studio 2010 Microsoft Word 2...Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator encountered a major redesign that uses MEF based plugins and .net 4.0. 
I've also fixed some bugs and added support for translating subtitles that can show up in video media players. Version 2.1 shows the context menu 'Translate' in Windows Explorer on right click. Version 2.2 has links to start the media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum , you should uninstal...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.4 Released: Hi, Today we are releasing Visifire 3.6.4 with few bug fixes: * Multi-line Labels were getting clipped while exploding last DataPoint in Funnel and Pyramid chart. * ClosestPlotDistance property in Axis was not behaving as expected. * In DateTime Axis, Chart threw exception on mouse click over PlotArea if there were no DataPoints present in Chart. * ToolTip was not disappearing while changing the DataSource property of the DataSeries at real-time. * Chart threw exception ...Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample Databases for Microsoft SQL Server 2008R2 (SR1)This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.Craig's Utility Library: Craig's Utility Library Code 2.0: This update contains a number of changes, added functionality, and bug fixes: Added transaction support to SQLHelper. Added linked/embedded resource ability to EmailSender. Updated List to take into account new functions. Added better support for MAC address in WMI classes. Fixed Parsing in Reflection class when dealing with sub classes. Fixed bug in SQLHelper when replacing the Command that is a select after doing a select. Fixed issue in SQL Server helper with regard to generati...MFCMAPI: November 2010 Release: Build: 6.0.0.1023 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. Facebook BadgeDotNetNuke® Community Edition: 05.06.00: Major HighlightsAdded automatic portal alias creation for single portal installs Updated the file manager upload page to allow user to upload multiple files without returning to the file manager page. Fixed issue with Event Log Email Notifications. Fixed issue where Telerik HTML Editor was unable to upload files to secure or database folder. Fixed issue where registration page is not set correctly during an upgrade. 
Fixed issue where Sendmail stripped HTML and Links from emails...mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0: Tube8 fix.EPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the serverNew Features Improved chart support Different chart-types series on the same chart Support for secondary axis and a lot of new properties Better styling Encryption and Workbook protection Table support Import csv files Array formulas ...and a lot of bugfixesNew Projects.NET 4 Workflow Activities for Citrix: .NET 4 based workflow activities targeting the Citrix infrastructure.Age calculator: It calculates the age of a person in days on specification of date of birth.Another Azure Demo Project: An Azure demo project - based on the one we (Johan Danforth and Dag König) showed on the Swedish Azure Summit.ASP.NET Layered Web Application: N-Layered Web Applications with ASP.NET based on the article by Imar Spaanjaars.Binzlog: Donet ????。Build Solution: Buid Visual Studio applications with .Net code.CondominioOnline: Projeto para o desenvolvimento colaborativo dos diagramas de desenvolvimento.Create Dynamic UI with WPF: Create Dynamic UI with WPFDNN Fanbox: dot net nuke plugin facebook fanboxDNN Tweet: DNN Tweet is a twitter plugin for DotnetNuke DotNetNuke Notes: dnnNotes allows you to create simple notes that are stored on your DotNetNuke site.Easy Login PHP Script: Give your site a professional looking Members Area with this completely FREE and easy-to-use PHP script! Developed in PHP and uses MySQL as a database backend. Go on, click here, you know you want to! :DFind Nigerian Traditional Fashion Styles: NaijaTradStyles is a social network for Nigerians all over the world to promote the Nigerian economy, designs and cultures, fashion designers and individuals. This site allows users to share fashion ideas, activities, events, and interests within their individual networks. The GreenArrow: Just a simple mark-locate-click automation tool by comparing graphic pieces. GreenArrow makes it easier for automation script writer to handle UI elements which cannot be located by normal methods, like keyword or classid. Libero API for Fusion Charts in ASP.Net: Libero.FusionChartsAPI is made for Asp.Net (Webforms and MVC) developers to make easier to implement Fusion Charts in their projects. It is developed in framework .Net 4 (but supports framework 3.5) to target ASP.Net projects. Minemapper - dynamic mapping for Windows: Minemapper is an interactive, dynamic mapper for Minecraft. It uses mcmap to generate small map image tiles, then lets you pan and zoom around, quickly generating new tiles as needed.MoodleAzure: Enable Moodle 1.9.9 to run on Windows Azure and SQL AzureOpalis Active Directory Extension: A Opalis Integration Pack Project for Active Directory Integration. Done with C# Directory Services.Quick Finger SDK: Quick Finger SDK helps you to build a wide range of applications to use fingerprint recognition. Quick Finger SDK makes it easier for developers to integrate fingerprint recognition into their software. It's developed in Visual C++. Regex Batch Replacer (Multi-File): Regex Batch Replacer uses regular expression to find and replace text in multiple files.RiverRaid X: A clone of the classic Atari 2600 arcade game, River Raid. 
Uses XNA 4.0 and Neat game engine (http://neat.codeplex.com)SharePoint Commander: SharePoint 2010 administrative tool for developers and administrators.StreamerMatch: A tool for streamers, focused at Starcraft II at the moment.Tab Web Part: This solution is used to present the WebParts in a tab like user interface. It is tested on a SharePoint 2010 sandboxed solution. With this solution, all the WebParts added in a particular zone will appear in a tab kind of interface in the design mode. The javascript transformsTomato: XNA-based rendering middleware.UnicornObjects: todoVina: VinaWPF Photo/Image Manager: A WPF playground for many projects, including an image viewer, filters, image modification, photo organization, etc.WXQCW: wxqcw news platformYobbo Guitar: Yobbo guitar is a web application developed in ASP.NET that allows users to share guitar songs and chord progressions.

    Read the article

  • Microsoft&rsquo;s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it’s a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative. I find it to be a great marketing idea with actual substance behind their real work. From the programmer academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it. A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative.

Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible wherever mathematical and computational principles are applied to solve practical problems.

One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems.

Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.

Enabling more people to make better predictions

We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers:

Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day.

Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars.

Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent, letting the team fine-tune its models for more accurate product pricing and financial projections.

Our Technical Computing direction

Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:

Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed: reliably, consistently and quickly.

Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.

Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.

Thinking bigger

There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:

Better predictions to help improve the understanding of pandemics, contagion and global health trends.

Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.

More accurate prediction of natural disasters and their impact to develop more effective emergency response plans.

With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.

Bob

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts that will eventually add up to the whole whitepaper.

Abstract

With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

Introduction

Fraud is a criminal or deceptive activity carried out with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to prevent fraud by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster bypasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a probability between 0 and 1 that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect fraud with 100% certainty; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, the same number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

There are two principal data mining techniques available, both in general data mining and in fraud detection specifically: supervised (directed) and unsupervised (undirected). Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do report fraud at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions.
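As a rough illustration of that supervised loop (train on the reported fraud flag, then score incoming transactions by fraud probability), here is a minimal Python sketch. It is only a stand-in: scikit-learn's decision tree plays the role of the SSAS mining algorithms this series is about, the three input columns and the synthetic flag are invented, and nothing here is taken from the whitepaper itself.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((5000, 3))                    # e.g. scaled amount, hour of day, merchant risk
y = (X[:, 0] + X[:, 2] > 1.6).astype(int)    # synthetic "is_fraud" flag, deliberately rare

model = DecisionTreeClassifier(max_depth=4, class_weight="balanced")
model.fit(X, y)                              # learn patterns that explain the flag

new_transactions = rng.random((10, 3))
fraud_scores = model.predict_proba(new_transactions)[:, 1]   # P(fraud) per transaction
print(fraud_scores)                          # rank by score to pick what to check manually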
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions that do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking with the help of data mining than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient.

Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes both predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore, it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds among all transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) supports the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the data that is most important to any organization.

For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010 and is already included in Excel 2013, they bring OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
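Coming back to the lift measure mentioned earlier in the post, a back-of-the-envelope calculation makes it concrete. The Python sketch below uses the common definition (fraud rate in the checked subset divided by the overall fraud rate); the counts are invented and not taken from any real project.

def lift(frauds_in_checked, checked, frauds_total, transactions_total):
    """Fraud rate in the manually checked subset divided by the overall fraud rate."""
    return (frauds_in_checked / checked) / (frauds_total / transactions_total)

# 1,000,000 transactions with 800 frauds overall (0.08%); the supervised model
# places 320 of them in the 10,000 highest-scored transactions sent for checking.
print(lift(320, 10_000, 800, 1_000_000))   # 40.0  - supervised model
print(lift(90, 10_000, 800, 1_000_000))    # 11.25 - say, an unsupervised outlier model
# Random sampling gives a lift of about 1.0, which is why either model pays off.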

    Read the article

  • Android: How/where to put gesture code into IME?

    - by CardinalFIB
    Hi, I'm new to Android but I'm trying to create an IME that allows for gesture-character recognition. I can already do simple apps that perform gesture recognition but am not sure where to hook the gesture views/objects into an IME. Here is a starting skeleton of what I have for the IME so far. I would like to use android.gesture.Gesture/Prediction/GestureOverlayView/OnGesturePerformedListener. Does anyone have advice? -- CardinalFIB

gestureIME.java

package com.android.jt.gestureIME;

import android.inputmethodservice.InputMethodService;
import android.inputmethodservice.Keyboard;
import android.inputmethodservice.KeyboardView;
import android.view.View;
import android.view.inputmethod.EditorInfo;

public class gestureIME extends InputMethodService {

    private static Keyboard keyboard;
    private static KeyboardView kView;
    private int lastDisplayWidth;

    @Override
    public void onCreate() {
        super.onCreate();
    }

    @Override
    public void onInitializeInterface() {
        int displayWidth;
        if (keyboard != null) {
            displayWidth = getMaxWidth();
            if (displayWidth == lastDisplayWidth) return;
            else lastDisplayWidth = getMaxWidth();
        }
        keyboard = new GestureKeyboard(this, R.xml.keyboard);
    }

    @Override
    public View onCreateInputView() {
        kView = (KeyboardView) getLayoutInflater().inflate(R.layout.input, null);
        kView.setOnKeyboardActionListener(kListener);
        kView.setKeyboard(keyboard);
        return kView;
    }

    @Override
    public View onCreateCandidatesView() {
        return null;
    }

    @Override
    public void onStartInputView(EditorInfo attribute, boolean restarting) {
        super.onStartInputView(attribute, restarting);
        kView.setKeyboard(keyboard);
        kView.closing(); // what does this do???
    }

    @Override
    public void onStartInput(EditorInfo attribute, boolean restarting) {
        super.onStartInput(attribute, restarting);
    }

    @Override
    public void onFinishInput() {
        super.onFinishInput();
    }

    public KeyboardView.OnKeyboardActionListener kListener = new KeyboardView.OnKeyboardActionListener() {

        @Override
        public void onKey(int keyCode, int[] otherKeyCodes) {
            if (keyCode == Keyboard.KEYCODE_CANCEL) handleClose();
            if (keyCode == 10) getCurrentInputConnection().commitText(String.valueOf((char) keyCode), 1); // keyCode RETURN
        }

        @Override
        public void onPress(int primaryCode) {} // TODO Auto-generated method stub

        @Override
        public void onRelease(int primaryCode) {} // TODO Auto-generated method stub

        @Override
        public void onText(CharSequence text) {} // TODO Auto-generated method stub

        @Override
        public void swipeDown() {} // TODO Auto-generated method stub

        @Override
        public void swipeLeft() {} // TODO Auto-generated method stub

        @Override
        public void swipeRight() {} // TODO Auto-generated method stub

        @Override
        public void swipeUp() {} // TODO Auto-generated method stub
    };

    private void handleClose() {
        requestHideSelf(0);
        kView.closing();
    }
}

GestureKeyboard.java

package com.android.jt.gestureIME;

import android.content.Context;
import android.inputmethodservice.Keyboard;

public class GestureKeyboard extends Keyboard {

    public GestureKeyboard(Context context, int xmlLayoutResId) {
        super(context, xmlLayoutResId);
    }
}

GestureKeyboardView.java

package com.android.jt.gestureIME;

import android.content.Context;
import android.inputmethodservice.KeyboardView;
import android.inputmethodservice.Keyboard.Key;
import android.util.AttributeSet;

public class GestureKeyboardView extends KeyboardView {

    public GestureKeyboardView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public GestureKeyboardView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    @Override
    protected boolean onLongPress(Key key) {
        return super.onLongPress(key);
    }
}

keyboard.xml

<?xml version="1.0" encoding="utf-8"?>
<Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
    android:keyWidth="10%p"
    android:horizontalGap="0px"
    android:verticalGap="0px"
    android:keyHeight="@dimen/key_height" >
    <Row android:rowEdgeFlags="bottom">
        <Key android:codes="-3" android:keyLabel="Close" android:keyWidth="20%p" android:keyEdgeFlags="left"/>
        <Key android:codes="10" android:keyLabel="Return" android:keyWidth="20%p" android:keyEdgeFlags="right"/>
    </Row>
</Keyboard>

input.xml

<?xml version="1.0" encoding="utf-8"?>
<com.android.jt.gestureIME.GestureKeyboardView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/gkeyboard"
    android:layout_alignParentBottom="true"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content" />

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some of them untested and based on thought experiments):

1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it only hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only below a given threshold of callee size, i.e. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

3) First-class continuations. There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other being a runtime implemented with continuation-passing style, cactus stacks, copy-on-write stack frames and garbage collection. First-class continuations have resource management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (locality of the stack changes more with the use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast and the less-common cases should be correct.

4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

My feeling is that many high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up and cache at runtime; nothing can be assumed, so our instruction mix contains a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
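On the call-site caching point in item 1, a toy sketch may help show where the extra complexity comes from: with multi-methods the cache key becomes the whole tuple of argument types, and any runtime method addition has to invalidate cached decisions. The Python below only illustrates that bookkeeping; it assumes nothing about how a production VM or JIT actually lays out its inline caches, and the class names are invented.

from itertools import product

class MultiMethod:
    def __init__(self, name):
        self.name = name
        self.registry = {}   # (type, ..., type) -> implementation
        self.cache = {}      # argument-type tuple -> resolved implementation

    def register(self, *types):
        def deco(fn):
            self.registry[types] = fn
            self.cache.clear()        # adding a method at runtime invalidates cached lookups
            return fn
        return deco

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        fn = self.cache.get(key)
        if fn is None:                # slow path: search combinations of the arguments' MROs
            for combo in product(*(t.__mro__ for t in key)):
                fn = self.registry.get(combo)
                if fn is not None:
                    break
            if fn is None:
                raise TypeError(f"no {self.name!r} method for {key}")
            self.cache[key] = fn      # fast path for later calls with the same type tuple
        return fn(*args)

class Thing: pass
class Ship(Thing): pass
class Asteroid(Thing): pass

collide = MultiMethod("collide")

@collide.register(Thing, Thing)
def _(a, b): return "bounce"

@collide.register(Ship, Asteroid)
def _(a, b): return "explode"

print(collide(Ship(), Asteroid()))    # slow lookup once, then cached -> "explode"
print(collide(Asteroid(), Ship()))    # falls back to the (Thing, Thing) method -> "bounce"

Inlining is where it really hurts: even with a warm cache the resolved target depends on every argument's type, so a JIT has to guard on all of them before inlining, and a single register() call can force those traces to be thrown away.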

    Read the article

  • Generating strongly biased random numbers for tests

    - by nobody
    I want to run tests with randomized inputs and need to generate 'sensible' random numbers, that is, numbers that match well enough to pass the tested function's preconditions, but hopefully wreak havoc deeper inside its code. math.random() (I'm using Lua) produces uniformly distributed random numbers. Scaling these up will give far more big numbers than small numbers, and there will be very few integers. I would like to skew the random numbers (or generate new ones using the old function as a randomness source) in a way that strongly favors 'simple' numbers, but will still cover the whole range, i.e. extending up to positive/negative infinity (or ±1e309 for double). This means: numbers up to, say, ten should be most common; integers should be more common than fractions; numbers ending in 0.5 should be the most common fractions, followed by 0.25 and 0.75; then 0.125, and so on.

A different description: fix a base probability x such that the probabilities sum to one, and define the probability of a number n as x^k, where k is the generation in which n is constructed as a surreal number [1]. That assigns x to 0, x^2 to -1 and +1, x^3 to -2, -1/2, +1/2 and +2, and so on. This gives a nice description of something close to what I want (it skews a bit too much), but it is near-unusable for computing random numbers. The resulting distribution is nowhere continuous (it's fractal!), I'm not sure how to determine the base probability x (I think for infinite precision it would be zero), and computing numbers based on this by iteration is awfully slow (spending near-infinite time to construct large numbers).

Does anyone know of a simple approximation that, given a uniformly distributed randomness source, produces random numbers very roughly distributed as described above? I would like to run thousands of randomized tests; quantity/speed is more important than quality. Still, better numbers mean fewer inputs get rejected. Lua has a JIT, so performance can't be reasonably predicted. Jumps based on randomness will break every prediction, and many calls to math.random() will be slow, too. This means a closed formula will be better than an iterative or recursive one.

[1] Wikipedia has an article on surreal numbers, with a nice picture. A surreal number is a pair of two surreal numbers, i.e. x := {n|m}, and its value is the number in the middle of the pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as a rational). If one side of the pair is empty, that's interpreted as an increment (or a decrement, if the right side is empty) by one. If both sides are empty, that's zero. Initially, there are no numbers, so the only number one can build is 0 := { | }. In generation two one can build the numbers {0| } =: 1 and { |0} =: -1, in three we get {1| } =: 2, {|1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some more complex representations of known numbers, e.g. {-1|1} = 0). Note that e.g. 1/3 is never generated by finite numbers because it is an infinite fraction – the same goes for floats, where 1/3 is never represented exactly.
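One possible cheap approximation (a sketch of the idea, not a definitive answer): draw the integer part, a rare huge-magnitude branch and a dyadic fractional part independently, each with geometric-style weights. The code below is Python only for brevity - math.random() maps directly onto random.random() - and every branch weight and cut-off is an arbitrary tuning knob rather than anything derived from the surreal-number generations, so the distribution only roughly matches the wish list above.

import random

def simple_number():
    """Small integers most often, then 1/2, then 1/4 and 3/4, ..., with a rare
    heavy-tailed branch so huge doubles still show up occasionally."""
    sign = -1.0 if random.random() < 0.5 else 1.0

    # Integer part: geometric, so 0 is most likely, then 1, 2, ...
    magnitude = 0.0
    while random.random() < 0.75:
        magnitude += 1.0

    # Rarely, blow the value up so the whole double range stays reachable.
    if random.random() < 0.02:
        magnitude *= 2.0 ** random.randint(10, 1000)

    # Half of the time add a dyadic fraction, shallow denominators first.
    frac = 0.0
    if random.random() < 0.5:
        depth = 1
        while random.random() < 0.5 and depth < 50:
            depth += 1
        frac = random.randint(1, 2 ** depth - 1) / 2.0 ** depth

    return sign * (magnitude + frac)

print(sorted(simple_number() for _ in range(20)))

Each draw costs only a handful of coin flips in expectation and never rejects a value, so it should satisfy the quantity/speed requirement; whether the weights are 'simple enough' is a matter of tuning the three constants.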

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >