Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.


  • Async.Parallel or Array.Parallel.Map ?

    - by gurteen2
    Hello - I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx) which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work. I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example): let getStockData cusip = let D = DataProvider() let arr = D.GetPriceSeries(cusip) return arr let data = Array.Parallel.map (fun x -> getStockData x) stockCusips So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000) and returns me an array of arrays (1 per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood, and it actually could be faster using asynchronous I/O? So to test this out, I have attempted to write this function using asyncs, and I think that the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!": let getStockDataAsync cusip = async { let D = DataProvider() let! arr = D.GetData(cusip) return arr } The error I get is: This expression was expected to have type Async<'a> but here has type obj It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread. So the first question really is: what's wrong with my syntax above in getStockDataAsync, and then at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.
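
    A minimal sketch of what may be going on, assuming DataProvider.GetPriceSeries is an ordinary synchronous call (the names below come from the question): let! binds only values of type Async<'T>, so applying it to a plain synchronous result is exactly the type error quoted above - a plain let is correct there, and the parallelism then comes from composing the work with Async.Parallel:

      // Hedged sketch, not the blog's exact code: wrap the synchronous call
      // in an async block, then fan the requests out with Async.Parallel.
      let getStockDataAsync cusip =
          async {
              let D = DataProvider()
              // GetPriceSeries is (assumed) synchronous, so bind with let;
              // let! would require a value of type Async<'T>.
              let arr = D.GetPriceSeries(cusip)
              return arr
          }

      let data =
          stockCusips
          |> Array.map getStockDataAsync
          |> Async.Parallel
          |> Async.RunSynchronously

    Note that unless the provider exposes a genuinely asynchronous call to bind with let!, this still ties up a thread per request, so the big I/O win the blog describes would not yet apply.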


  • Creating Array of settings names and values using ADO.NET Entities

    - by jordan.baucke
    I'm using an ADO.NET Entities (.edmx) data model along with MVC2 to build an application. I have a DB table where I want to store settings for methods that run elsewhere. MVC2 allows me to create a view, editor, etc. to update this table, which is great, but now when I want to do simple assignments based on column titles I'm a bit confused. For example, I would like to easily build an array that I could offset into the record's value based on its "Title" column: var entities = new ManagerEntities(); Setting[] settings = entities.settings.ToArray(); This returns something like: Settings[0].[SettingTitle][SettingValue] However, I would like to more easily index into the value rather than having to loop through all the returned settings, when they're already indexed. string URL_ID_NEED = [NeededUrl][http://www.url.com] Am I missing something relatively simple? Thanks! ========================= *Update* ========================= Ok, I think I've got a solution, but I'm wondering why this would be so complicated, and if I'm just not thinking of the right context for ADO.NET objects. Here's what I did: public string GetSetting(string SettingName) { var entities = new LabelManagerEntities(); IEnumerable<KeyValuePair<string, object>> entityKeyValues = new KeyValuePair<string, object>[] { new KeyValuePair<string, object>("SettingTitle", SettingName) }; EntityKey key = new EntityKey("LabelManagerEntities.Settings", entityKeyValues); // Get the object from the context or the persisted store by its key. Setting settingvalue = (Setting)entities.GetObjectByKey(key); return settingvalue.SettingValue.ToString(); } This method handles the job of querying the entities by "Key" to get back the correct value as a returned string (which I can then trim or cast to an integer, etc.). Am I just duplicating functionality that already exists in ADO.NET's design patterns (I'm pretty new to it) -- or is this a reasonable solution?
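
    A hedged alternative sketch: if the goal is just O(1) lookup by title, the settings can be loaded once into a dictionary keyed on SettingTitle (the entity and property names below follow the question; the setting name is illustrative):

      using System.Collections.Generic;
      using System.Linq;

      var entities = new ManagerEntities();
      // One query, then constant-time lookups by title.
      Dictionary<string, string> byTitle = entities.settings
          .ToDictionary(s => s.SettingTitle, s => s.SettingValue.ToString());

      string url = byTitle["NeededUrl"];

    This avoids both the per-call EntityKey construction and looping over the array for every lookup.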


  • model not showing up in django admin.

    - by Zayatzz
    Hi. I have created several Django apps and such for my own fun and so far everything has been working fine. Now I just created a new project (Django 1.2.1) and have run into trouble from the first moment. I created a new app - game - and a new model Game. I created admin.py and put the related stuff into it. Ran syncdb and went to check into admin. The model did not show up there. I proceeded to check and double-check and read through previous similar threads: http://stackoverflow.com/questions/1839927/registered-models-do-not-show-up-in-admin http://stackoverflow.com/questions/1694259/django-app-not-showing-up-in-admin-interface But as far as I can tell, they don't help me either. Perhaps someone else can point this out for me. models.py in game app: # -*- coding: utf-8 -*- from django.db import models class Game(models.Model): type = models.IntegerField(blank=False, null=False, default=1) teamone = models.CharField(max_length=100, blank=False, null=False) teamtwo = models.CharField(max_length=100, blank=False, null=False) gametime = models.DateTimeField(blank=False, null=False) admin.py in game app: # -*- coding: utf-8 -*- from jalka.game.models import Game from django.contrib import admin class GameAdmin(admin.ModelAdmin): list_display = ['type', 'teamone', 'teamtwo', 'gametime'] admin.site.register(Game, GameAdmin) project settings.py: MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', ) ROOT_URLCONF = 'jalka.urls' TEMPLATE_DIRS = ( "/home/projects/jalka/templates/" ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.admin', 'game', ) urls.py: from django.conf.urls.defaults import * # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Example: # (r'^jalka/', include('jalka.foo.urls')), (r'^admin/', include(admin.site.urls)), ) Alan.
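
    A hedged observation, not a confirmed diagnosis: the models are imported as jalka.game.models in admin.py but the app is listed as plain 'game' in INSTALLED_APPS, and one common cause of a model not appearing is the same app being referenced under two different module paths. Keeping the two consistent is one thing to try:

      # settings.py - match the import path used in admin.py
      INSTALLED_APPS = (
          'django.contrib.auth',
          'django.contrib.contenttypes',
          'django.contrib.sessions',
          'django.contrib.sites',
          'django.contrib.messages',
          'django.contrib.admin',
          'jalka.game',  # instead of 'game'
      )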


  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through their documentation, but I am still confused about whether they support OAuth for installed applications. 1. In this documentation they say: Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications. But they also have a document for OAuth for installed applications. 2. When I read the OAuth specification they point to, it says (in section 11.7): In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider. Also, I think the disclaimer in point 1 above is about Google Data APIs, and surely IMAP/SMTP is not a part of them. I understand that for installed applications I can have a setup like: Have a small web-app at, say, example.com for my application. This web-app talks to Google and gets the access token. The installed application talks to example.com only to get the access token. The installed application then talks to Google with the access token. I am now confused. Is this the only way?


  • How to arrange models, views, controllers in a new Kohana 3 project

    - by Pekka
    I'm looking at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework, so I'm still getting my head around the terminology - toying around with basic examples, building views and templates, and so on. I'm progressing fairly well, but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning exercise. Example-based documentation is a bit sparse for Kohana 3 right now - they say so themselves on the site. While I'm not worried about learning the framework soon enough, I'm a bit at a loss on how to arrange a healthy code base from the start - i.e. how to split up controllers, how to name them, and how to separate the functionality into the appropriate models. My application could, at its core, be described as a business directory with a main businesses table. Businesses can be listed by category and by street name. Each business has a detail page. Business owners can log in and edit their business's entry. Businesses can post offers into an offers table. I know this is pretty vague; I'll be more than happy to go into more detail on request. Supposing I have all the basic functionality worked out and in place already - list all businesses, edit business, list businesses by street name, create offer, and so on - and I'm just looking for how to fit the functionality into a Kohana application structure that can be easily extended: Do you know real-life, publicly accessible examples of applications built on Kohana 3 where I could take a peek at how they do it? Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well? Do you know application structuring HOWTOs or best practices for Kohana 3 not mentioned in the user guide and the unofficial wiki? Have you built something similar and could give me some recommendations?
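
    A hedged structural sketch using Kohana 3's stock naming conventions (the class, view, and relation names below are illustrative, not from any real project): one controller per concern, with the domain logic kept in the models:

      <?php
      // classes/controller/business.php
      class Controller_Business extends Controller_Template {

          // /business - list businesses, optionally filtered by category or street
          public function action_index() {
              $this->template->content = View::factory('business/index')
                  ->set('businesses', ORM::factory('business')->find_all());
          }

          // /business/detail/<id> - one business's detail page
          public function action_detail() {
              $id = $this->request->param('id');
              $this->template->content = View::factory('business/detail')
                  ->set('business', ORM::factory('business', $id));
          }
      }

      // classes/model/business.php
      class Model_Business extends ORM {
          protected $_has_many = array('offers' => array());
      }

    A separate Controller_Offer and Controller_Account could then carry the offers table and the owners' login/edit area, so each further product page gets its own controller rather than growing an existing one.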


  • Looking for ideas for a simple pattern matching algorithm to run on a microcontroller

    - by pic_audio
    I'm working on a project to recognize simple audio patterns. I have two data sets, each made up of between 4 and 32 note/duration pairs. One set is predefined, the other is from an incoming data stream. The length of the two strongly correlated data sets is often different, but roughly the same "shape". My goal is to come up with some sort of ranking as to how well the two data sets correlate/match. I have converted the incoming frequencies to pitch and shifted the incoming data stream's pitch so that its average pitch matches that of the predefined data set. I also stretch/compress the incoming data set's durations to match the overall duration of the predefined set. Here are two graphical examples of data that should be ranked as strongly correlated: http://s2.postimage.org/FVeG0-ee3c23ecc094a55b15e538c3a0d83dd5.gif (Sorry, as a new user I couldn't directly post images) I'm doing this on an 8-bit microcontroller so resources are minimal. Speed is less of an issue; a second or two of processing isn't a deal breaker. It wouldn't surprise me if there is an obvious solution, I've just been staring at the problem too long. Any ideas? Thanks in advance...
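
    A minimal sketch of one cheap ranking, assuming both sequences have already been pitch-shifted and duration-normalized as described (all names and sizes below are illustrative): resample both to a fixed length and sum absolute pitch differences, so a lower score means a stronger match:

      /* Hedged sketch for an 8-bit target: integer-only, no malloc. */
      #include <stdint.h>
      #include <stdlib.h>

      #define N 32                       /* common comparison length */

      /* Nearest-index resampling of `len` pitch values into N slots. */
      static void resample(const int8_t *src, uint8_t len, int8_t *dst) {
          uint8_t i;
          for (i = 0; i < N; i++)
              dst[i] = src[(uint16_t)i * len / N];
      }

      /* Sum of absolute differences; lower = better correlated. */
      uint16_t match_score(const int8_t *a, uint8_t alen,
                           const int8_t *b, uint8_t blen) {
          int8_t ra[N], rb[N];
          uint16_t score = 0;
          uint8_t i;
          resample(a, alen, ra);
          resample(b, blen, rb);
          for (i = 0; i < N; i++)
              score += (uint16_t)abs(ra[i] - rb[i]);
          return score;
      }

    If timing wobble between the two sets matters, the same scoring could feed a small dynamic-time-warping table instead, at the cost of N*N cells of work space.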


  • Eclipse RCP: Using different JUnit version (4.3.1) than shipped with eclipse galileo (4.5.0)

    - by Skrrytch
    Hello, I have the following situation: we are developing an Eclipse RCP application and want to switch from Eclipse 3.4 to Eclipse 3.5. Our JUnit tests are using JUnit 4.3.1 and we have a launch configuration to start our test suite. I think I don't need to go into more detail here. The problem is: running the tests with Eclipse 3.5 does not work: JUnit cannot find any annotations in the test classes (neither @Test nor @RunWith). I patched the JUnit library with some logging output to check what is going on. I found out that this problem is a classloading issue: the test class passed to JUnit 'lies in' a ClassLoader which is different from the one JUnit uses to load the annotation classes like 'RunWith'. This is not the case in Eclipse 3.4. In org.junit.internal.requests.ClassRequest: public Runner getRunner() { log("TestClass ClassLoader: "+this.fTestClass.getClassLoader()); log("RunWith.class ClassLoader: "+RunWith.class.getClassLoader()); ... // validating test class: searching for annotations and more } The first line prints another classloader than the second line. This is bad because JUnit cannot match the annotations in the test class with the annotation class (here: RunWith.class): "RunWith" in CL1 is not equal to "RunWith" in CL2. I have a solution which points to the core problem: replace JUnit 4.5 in Eclipse Galileo with JUnit 4.3.1, so that there is only one JUnit version: the test run and the test classes are both using JUnit 4.3.1 (I had to patch "org.eclipse.jdt.junit4.runtime" to accept an earlier JUnit version). I think I can also replace JUnit 4.3.1 in my test class with version 4.5, but that is not an option yet. Guess: the classloaders are different because the classes 'come from' different JUnit bundles: the test class with its annotations from version 4.3.1, while the test runs in version 4.5. What I want to know: Is there any other solution besides patching Eclipse (replacing JUnit versions)? Any command-line argument or such? Any configuration to force Eclipse to use JUnit 4.3.1? Any hints on the above described analysis are welcome!


  • To GAC, or not to GAC?

    - by Jagd
    I have a data access layer (DAL) that is written in ASP.NET 3.5 and uses the Microsoft patterns & practices libraries (hereafter referred to as P&P) in order to accomplish its data access. I installed P&P and it resides in my GAC, so, logically, my DAL references it in the GAC. Therefore, the P&P libraries are never pulled down to the bin folder of my DAL. I use this DAL project in at least five (more than that even, but I'm too lazy to try to count them all) different websites. And this has all worked just fine for me because I'm the only developer who works on these websites. But, now I have other developers who are going to work on some of these websites. The problem: if a developer pulls the DAL project down from our code repository, it won't build for them if they don't have the P&P libraries installed. My question: should I expect the developers to install the P&P libraries, or should I just dump them in the bin folder and be done with it? I realize that dumping them into the bin folder is probably the easiest way to deal with the problem, but I've never been a big fan of the bin folder if I can reference them in the GAC instead.


  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products) and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made up lists of pros and cons of all major and well-known database patterns to solve this. After comparing these, we have come up with two final alternatives: EAV (Entity-attribute-value model): Pros: Database is used for all sorting. Cons: All related queries will include a number of joins between multiple tables in order to complete the collection of data. SLOB (Serialized LOB, also known as Facade?): Pros: Very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data from each product. Cons: All sorting will be done by the application instead of the database. Will use lots of resources (memory?) when big datasets are processed by a large number of users. Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases besides MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
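
    For concreteness, a hedged sketch of the EAV shape being weighed (table and column names are illustrative): the flexibility comes from rows rather than columns, and the quoted "cons" shows up as a join for every attribute fetch:

      -- Illustrative EAV layout
      CREATE TABLE product (
          id   INT PRIMARY KEY,
          name VARCHAR(255)
      );
      CREATE TABLE attribute (
          id   INT PRIMARY KEY,
          name VARCHAR(255)
      );
      CREATE TABLE product_attribute (
          product_id   INT NOT NULL,
          attribute_id INT NOT NULL,
          value        VARCHAR(255),
          PRIMARY KEY (product_id, attribute_id)
      );

      -- All options of one product: the join the question's "cons" refers to
      SELECT a.name, pa.value
      FROM product_attribute pa
      JOIN attribute a ON a.id = pa.attribute_id
      WHERE pa.product_id = 42;

    The SLOB alternative would collapse product_attribute into a single serialized column on product, which is exactly why the database can no longer sort on individual options.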


  • Taking "do the simplest thing that could possibly work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to: Create a test that fails. Do the simplest thing that could possibly work to pass the test. Add more variants of the test and repeat. Refactor when a pattern emerges. With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and if it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was: @Test void testFormat(){ // empty doesn't do anything nor throw anything processor.validate("empty.txt"); try { processor.validate("invalid.txt"); assert false: "Should have thrown InvalidFormatException"; } catch( InvalidFormatException ife ) { assert "Invalid format".equals( ife.getMessage() ); } } I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I: public void validate( String fileName ) throws InvalidFormatException { if(fileName.equals("invalid.txt")) { throw new InvalidFormatException("Invalid format"); } } Doh!! (although the real code is a bit more complicated, I found myself doing something like this several times) I know that I have to eventually add another file name and another test that would make this approach impractical and that would force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "do the simplest thing..." advice too literally?
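
    A hedged sketch of the triangulation step the question anticipates - a second failing case (the file name below is made up) is what forces the fake into real behavior:

      @Test
      void testAnotherInvalidFile() {
          try {
              processor.validate("also-invalid.txt");  // hypothetical second fixture
              assert false : "Should have thrown InvalidFormatException";
          } catch (InvalidFormatException ife) {
              assert "Invalid format".equals(ife.getMessage());
          }
      }

    With two concrete file names the equals() trick no longer passes, so validate() has to actually read and check the format - which is the refactoring pressure TDD relies on.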


  • Applying business logic to form elements in ASP.NET MVC

    - by Brettski
    I am looking for best practices in applying business logic to form elements in an ASP.NET MVC application. I assume the concepts would apply to most MVC patterns. The goal is to have all the business logic stem from the same place. I have a basic form with four elements: Textbox: for entering data. Checkbox: for staff approval. Checkbox: for client approval. Button: for submitting the form. The textbox and two checkboxes are fields in a database accessed using LINQ to SQL. What I want to do is put logic around the checkboxes governing who can check them and when. Truth table (a little silly, but it's an example):

      when checked   || may check (Staff role) || may check (Client role)
      Staff | Client || Staff | Client         || Staff | Client
        0   |   0    ||   1   |   0            ||   0   |   1
        0   |   1    ||   0   |   0            ||   0   |   1
        1   |   0    ||   1   |   0            ||   0   |   1
        1   |   1    ||   0   |   0            ||   0   |   1

    There are two security roles, staff and client; a person's role determines who they are, and the roles are maintained in the database along with the current state of the checkboxes. So I could simply store the user's role in the view class and enable and disable checkboxes based on their role, but this doesn't seem proper. That is putting logic in the UI to control which actions can be taken. How do I get most of this control down into the model? I mean, I need to control which checkboxes are enabled and then check the results in the model when the form is posted, so it seems the best place for it to originate. I am looking for a good approach to constructing this, something to follow as I build the application. If you know of some great references which explain these best practices, that is really appreciated too.
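
    A hedged reading of that truth table as model code (the enum and class names are illustrative): the staff box is editable by staff only until the client approves, and the client box is always editable by the client. Putting the rules in one place means both the view (to enable/disable) and the post handler (to validate) call the same logic:

      public enum Role { Staff, Client }

      public static class ApprovalRules
      {
          // Rows 1 and 3 of the table: staff may toggle the staff box
          // only while the client has not approved.
          public static bool StaffBoxEnabled(Role role, bool clientApproved)
          {
              return role == Role.Staff && !clientApproved;
          }

          // The client column is 1 in every row: the client box is
          // always editable by the client role.
          public static bool ClientBoxEnabled(Role role)
          {
              return role == Role.Client;
          }
      }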


  • How to avoid XCode framework weak-linking problems?

    - by Frank R.
    Hi, I'm building an application that takes advantage of Mac OS X 10.6-only technologies, but without giving up backwards compatibility to 10.5 Leopard. The way I do this is by setting the 10.6 SDK as the base SDK, weak-linking all frameworks and setting the deployment target to 10.5, as described in: http://developer.apple.com/mac/library/DOCUMENTATION/MacOSX/Conceptual/BPFrameworks/Concepts/WeakLinking.html This works fine; before making a call that is Snow Leopard-only I need to check that the selector or indeed the class actually exists. Or I can just check the OS version before making the call. The problem is that this is incredibly fragile. If I make a single call that is 10.6-only I blow Leopard compatibility. So even using the normal code completion feature can be dangerous. My question: is there any way of checking which calls are not defined on 10.5 before doing a release build? Some kind of static analysis, or even just a trick (a target set to the other SDK?) would do. I obviously should test on a Leopard machine before releasing anything, but even so I can't possibly go through all paths of the program before every release. Any advice would be appreciated. Best regards, Frank


  • ASP.NET MVC: Html.Actionlink() generates empty link.

    - by wh0emPah
    Okay, I'm experiencing some problems with the ActionLink HTML helper. I have some complicated routing as follows: routes.MapRoute("Groep_Dashboard_Route", // Route name "{EventName}/{GroupID}/Dashboard", // URL with parameters new {controller = "Group", action="Dashboard"}); routes.MapRoute("Event_Groep_Route", // Route name "{EventName}/{GroupID}/{controller}/{action}/{id}", new {controller = "Home", action = "Index"}); My problem is generating action links that match these patterns. The EventName parameter is really just there to make a user-friendly link; it doesn't do anything. Now I'm trying, for example, to generate a link that shows the dashboard of a certain group, like: mysite.com/testevent/20/Dashboard I use the following ActionLink: <%: Html.ActionLink("Show dashboard", "Group", "Dashboard", new { EventName_Url = "test", GroepID = item.groepID}, null)%> What my actual result in HTML gives is: <a href="">Show Dashboard</a> Please bear with me, I'm still new to ASP.NET MVC. Could someone tell me what I'm doing wrong? Help would be appreciated!
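
    A hedged sketch of two things worth checking (the values below are illustrative): this ActionLink overload is (linkText, actionName, controllerName, routeValues, htmlAttributes), so "Dashboard" should be the action and "Group" the controller; and the route-value keys must match the route's placeholder names {EventName} and {GroupID} exactly - keys like EventName_Url or GroepID don't match anything, which makes URL generation fail and yields the empty href:

      <%: Html.ActionLink("Show dashboard", "Dashboard", "Group",
              new { EventName = "testevent", GroupID = item.groepID }, null) %>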


  • Parsing a file in C

    - by sfactor
    I need to parse through a file and do some processing on it. The file is a text file and the data is variable-length data of the form "PP1004181350D001002003..........". Timestamps follow a PP marker, so 1004181350 is 2010-04-18 13:50. A D marker starts a data point made of three separate values, each three digits long, so D001002003 has three coordinates: 001, 002 and 003. Now I need to parse this data from the file, storing each timestamp into an array and the corresponding data into arrays with as many rows as there are data points and three rows for the coordinates. The end arrays might look like: TimeStamp[1] = "135000", low[1] = "001", medium[1] = "002", high[1] = "003" TimeStamp[2] = "135015", low[2] = "010", medium[2] = "012", high[2] = "013" TimeStamp[3] = "135030", low[3] = "051", medium[3] = "052", high[3] = "043" .... The question is: how do I go about doing this in C? How do I go through this string looking for these patterns? Note: the seconds value in the timestamp is added on our own, as it is known that each data point comes 15 seconds after the previous one.
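
    A minimal sketch of one way to walk the stream, assuming the markers are exactly "PP" (followed by 10 digits) and "D" (followed by 9 digits) as described; storing into the real arrays is left as a printf placeholder:

      /* Hedged sketch: scan for PP/D markers in a flat buffer. */
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          const char *s = "PP1004181350D001002003D010012013";  /* sample input */
          char ts[11], low[4], med[4], high[4];
          ts[0] = '\0';
          while (*s) {
              if (strncmp(s, "PP", 2) == 0) {
                  memcpy(ts, s + 2, 10); ts[10] = '\0';   /* yymmddhhmm */
                  s += 12;
              } else if (*s == 'D') {
                  memcpy(low,  s + 1, 3); low[3]  = '\0';
                  memcpy(med,  s + 4, 3); med[3]  = '\0';
                  memcpy(high, s + 7, 3); high[3] = '\0';
                  printf("%s: %s %s %s\n", ts, low, med, high);
                  s += 10;
              } else {
                  s++;  /* skip anything unexpected */
              }
          }
          return 0;
      }

    Each D record after a PP would also bump a seconds counter by 15 to synthesize the full timestamp, as the note describes.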


  • Check for valid IMEI

    - by Tim
    Hi, does somebody know how to check for a valid IMEI? I have found a function to check on this page: http://www.dotnetfunda.com/articles/article597-imeivalidator-in-vbnet-.aspx But it returns False for valid IMEIs (e.g. 352972024585360). I can validate them online on this page: http://www.numberingplans.com/?page=analysis&sub=imeinr What is the correct way (in VB.NET) to check if a given IMEI is valid? Regards, Tim PS: This function from the above page must be incorrect in some way: Public Shared Function isImeiValid(ByVal IMEI As String) As Boolean Dim cnt As Integer = 0 Dim nw As String = String.Empty Try For Each c As Char In IMEI cnt += 1 If cnt Mod 2 <> 0 Then nw += c Else Dim d As Integer = Integer.Parse(c) * 2 ' Every second digit has to be doubled nw += d.ToString() ' Generated a new number with doubled digits End If Next Dim tot As Integer = 0 For Each ch As Char In nw.Remove(nw.Length - 1, 1) tot += Integer.Parse(ch) ' Adding all digits together Next Dim chDigit As Integer = 10 - (tot Mod 10) ' Finding the check digit by taking the remainder of the sum and subtracting it from 10 If chDigit = Integer.Parse(IMEI(IMEI.Length - 1)) Then ' Checking the check digit against the last digit of the given IMEI code Return True Else Return False End If Catch ex As Exception Return False End Try End Function
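
    A hedged observation on the posted function: for 352972024585360 the Luhn sum of the first 14 digits works out to 50, so the check digit should be 0, but 10 - (tot Mod 10) evaluates to 10 rather than 0 whenever the sum is already a multiple of 10 - which makes every IMEI ending in 0 come back False. Wrapping the expression in another Mod 10 is one fix:

      ' Sketch of the one-line fix: map the "10" case back to 0
      Dim chDigit As Integer = (10 - (tot Mod 10)) Mod 10

    The rest of the routine appears to follow the Luhn check correctly, since summing the digit characters of the doubled values is equivalent to the usual "subtract 9" step.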


  • File Uploading In Google Application Engine Using Django

    - by Ayush
    I am using GAE with Django. I have a project named MusicSite with the following URL mapping - urls.py: from django.conf.urls.defaults import * from MusicSite.views import MainHandler from MusicSite.views import UploadHandler from MusicSite.views import ServeHandler urlpatterns = patterns('',(r'^start/', MainHandler), (r'^upload/', UploadHandler), (r'^/serve/([^/]+)?', ServeHandler), ) There is an application MusicSite inside MusicFun with the following code - views.py: import os import urllib from google.appengine.ext import blobstore from google.appengine.ext import webapp from google.appengine.ext.webapp import blobstore_handlers from google.appengine.ext.webapp import template from google.appengine.ext.webapp.util import run_wsgi_app def MainHandler(request): response=HttpResponse() upload_url = blobstore.create_upload_url('http://localhost: 8000/upload/') response.write('') response.write('' % upload_url) response.write("""Upload File: """) return HttpResponse(response) def UploadHandler(request): upload_files=request.FILES['file'] blob_info = upload_files[0] response.redirect('http://localhost:8000/serve/%s' % blob_info.key()) class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler): def get(self, resource): resource = str(urllib.unquote(resource)) blob_info = blobstore.BlobInfo.get(resource) self.send_blob(blob_info) Now whenever I upload a file using /start and click Submit I am taken to a blank page with the following URL: localhost:8000/_ah/upload/ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgHDA These random characters keep varying but the result is the same. Somebody please help. The server responses are as below: INFO:root:"GET /start/ HTTP/1.1" 200 - INFO:root:"GET /favicon.ico HTTP/1.1" 404 - INFO:root:Internal redirection to http://localhost:8000/upload/ INFO:root:Upload handler returned 500 ERROR:root:Invalid upload handler response. Only 301, 302 and 303 statuses are permitted and it may not have a content body. INFO:root:"POST /_ah/upload/ ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgCDA HTTP/1.1" 500 - INFO:root:"GET /favicon.ico HTTP/1.1" 404 -
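
    A hedged sketch based on the log's own complaint ("Only 301, 302 and 303 statuses are permitted and it may not have a content body"): the upload handler has to answer with a bare redirect, and the posted UploadHandler references an undefined response object, so something along these lines is one way to write it (names follow the question's code):

      from django.http import HttpResponseRedirect

      def UploadHandler(request):
          upload_files = request.FILES['file']
          blob_info = upload_files[0]
          # Redirect only - no body - as the blobstore machinery requires.
          return HttpResponseRedirect('/serve/%s' % blob_info.key())

    Separately, the serve pattern r'^/serve/([^/]+)?' starts with a slash, and Django matches URLs with the leading slash already stripped, so that route as posted would never fire.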


  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!


  • Useful training courses that aren't specific to a single technology

    - by Dave Turvey
    I have possibly the best problem in the world. I have about £1600 left in a training budget and I need to find something to spend it on. I can spend it on anything that could be considered training. Books, courses, conferences, etc. I would like to find a course that would benefit a software developer but is not about learning a specific programming technology. I don't really want to spend it on a technical training course. These topics are usually best learned with a good book and some trial and error. I have also already been on a general business/management training course and a PRINCE2 project management course. I am currently working on a project on my own, so am responsible for communicating with the client, requirements gathering, project management, etc., as well as the coding. What training have you found useful outside the usual technical stuff? Has anyone done any business analysis courses? What were they like? Are there any courses on some of the practicalities of working with software, e.g. automated test and deployment strategies, handling technical support? I would prefer a course in the UK but I can travel if necessary.


  • sharing build artifacts between jobs in hudson

    - by programming panda
    Hi, I'm trying to set up our build process in Hudson. Job 1 will be a super fast (hopefully) continuous integration build job that will be built frequently. Job 2 will be responsible for running a comprehensive test suite, at a regular interval or triggered manually. Job 3 will be responsible for running analysis tools across the codebase (much like Job 2). I tried using the "Advanced Project Options > use custom workspace" feature so that code compiled in Job 1 can be used in Jobs 2 and 3. However, it seems that all build artifacts remain inside that Job 1 workspace. Am I doing this right? Is there a better way of doing this? I guess I'm looking for something similar to a build pipeline setup... so that things can be shared and the appropriate jobs can be executed in stages. (I also considered using 'batch tasks'... but it seems like those can't be scheduled? only triggered manually?) Any suggestions are welcomed. Thanks!


  • ASP MVC Ajax Controller pattern?

    - by Kevin Won
    My MVC app tends to have a lot of ajax calls (via jQuery.get()). It's sort of bugging me that my controller is littered with many tiny methods that get called via ajax. It seems to me to be sort of breaking the MVC pattern a bit -- the controller is now being more of a data access component than a URI router. I refactored so that I have my 'true' controller for a page just performing standard routing responses (returning ActionResult objects). So a call to /home/ will obviously kick up the HomeController class that will respond in the canonical controller fashion by returning a plain-jane View. I then moved my ajax stuff into a new controller class whose name I'm prefacing with 'Ajax'. So, for example, my page might have three different sections of functionality (say shopping cart or user account). I have an ajax controller for each of these (AjaxCartController, AjaxAccountController). There is really nothing different about moving the ajax call stuff into its own class -- it's just to keep things cleaner. On the client side, the jQuery would then use this new controller thusly: //jquery pseudocode call to specific controller that just handles ajax calls $.get('AjaxAccount/Details'.... (1) Is there a better pattern in MVC for responding to ajax calls? (2) It seems to me that the MVC model is a bit leaky when it comes to ajax -- it's not really 'controlling' stuff. It just happens to be the best and least painful way of handling ajax calls (or am I ignorant)? In other words, the 'Controller' abstraction doesn't seem to play nice with ajax (at least from a patterns perspective). Is there something I'm missing?
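
    A hedged sketch of the split being described, assuming ASP.NET MVC2 (the controller and payload are illustrative): the ajax-only controller returns JsonResult rather than views, which keeps the tiny data methods out of the page controller:

      using System.Web.Mvc;

      public class AjaxAccountController : Controller
      {
          // Answers $.get('AjaxAccount/Details', ...) with data, not markup.
          public JsonResult Details(int id)
          {
              var account = new { Id = id, Name = "example" };  // illustrative payload
              return Json(account, JsonRequestBehavior.AllowGet);
          }
      }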


  • JAVA image transfer problem

    - by user579098
    Hi, I have a school assignment: to send a jpg image, split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors and re-transmit until it is eventually built back into its original form. It's practically ready; however, when I check out the new images, they appear with errors. I would really appreciate it if someone could look at my code below and maybe locate this logical mistake, as I can't understand what the problem is because everything looks OK :S For the file with all the data needed, including photos and error patterns, one could download it from this link: http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan P.S. Don't forget to change the paths in the code for the image and error files. package networks; import java.io.*; // for file reader import java.util.zip.CRC32; // CRC32 IEEE (Ethernet) public class Main { /** * Reads a whole file into an array of bytes. * @param file The file in question. * @return Array of bytes containing file data. * @throws IOException Message contains why it failed. */ public static byte[] readFileArray(File file) throws IOException { InputStream is = new FileInputStream(file); byte[] data=new byte[(int)file.length()]; is.read(data); is.close(); return data; } /** * Writes (or overwrites if exists) a file with data from an array of bytes. * @param file The file in question. * @param data Array of bytes containing the new file data. * @throws IOException Message contains why it failed. */ public static void writeFileArray(File file, byte[] data) throws IOException { OutputStream os = new FileOutputStream(file,false); os.write(data); os.close(); } /** * Converts a long value to an array of bytes. * @param data The target variable. * @return Byte array conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static byte[] toByta(long data) { return new byte[] { (byte)((data >> 56) & 0xff), (byte)((data >> 48) & 0xff), (byte)((data >> 40) & 0xff), (byte)((data >> 32) & 0xff), (byte)((data >> 24) & 0xff), (byte)((data >> 16) & 0xff), (byte)((data >> 8) & 0xff), (byte)((data >> 0) & 0xff), }; } /** * Converts an array of bytes to a long value. * @param data The target variable. * @return Long value conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static long toLong(byte[] data) { if (data == null || data.length != 8) return 0x0; return (long)( // (Below) convert to longs before shift because digits // are lost with ints beyond the 32-bit limit (long)(0xff & data[0]) << 56 | (long)(0xff & data[1]) << 48 | (long)(0xff & data[2]) << 40 | (long)(0xff & data[3]) << 32 | (long)(0xff & data[4]) << 24 | (long)(0xff & data[5]) << 16 | (long)(0xff & data[6]) << 8 | (long)(0xff & data[7]) << 0 ); } public static byte[] nextNoise(){ byte[] result=new byte[100]; // copy a frame's worth of data (or remaining data if it is less than frame length) int read=Math.min(err_data.length-err_pstn, 100); System.arraycopy(err_data, err_pstn, result, 0, read); // if read data is less than frame length, reset position and add remaining data if(read<100){ err_pstn=100-read; System.arraycopy(err_data, 0, result, read, err_pstn); }else // otherwise, increase position err_pstn+=100; // return noise segment return result; } /** * Given some original data, it is purposefully corrupted according to a * second data array (which is read from a file). In pseudocode: * corrupt = original xor corruptor * @param data The original data. * @return The new (corrupted) data.
    */ public static byte[] corruptData(byte[] data){ // get the next noise sequence byte[] noise = nextNoise(); // finally, xor data with noise and return result for(int i=0; i<100; i++)data[i]^=noise[i]; return data; } /** * Given an array of data, a packet is created. In pseudocode: * frame = corrupt(data) + crc(data) * @param data The original frame data. * @return The resulting frame data. */ public static byte[] buildFrame(byte[] data){ // pack = [data]+crc32([data]) byte[] hash = new byte[8]; // calculate crc32 of data and copy it to byte array CRC32 crc = new CRC32(); crc.update(data); hash=toByta(crc.getValue()); // create a byte array holding the final packet byte[] pack = new byte[data.length+hash.length]; // create the corrupted data byte[] crpt = new byte[data.length]; crpt = corruptData(data); // copy corrupted data into pack System.arraycopy(crpt, 0, pack, 0, crpt.length); // copy hash into pack System.arraycopy(hash, 0, pack, data.length, hash.length); // return pack return pack; } /** * Verifies frame contents. * @param frame The frame data (data+crc32). * @return True if frame is valid, false otherwise. */ public static boolean verifyFrame(byte[] frame){ // allocate hash and data variables byte[] hash=new byte[8]; byte[] data=new byte[frame.length-hash.length]; // read frame into hash and data variables System.arraycopy(frame, frame.length-hash.length, hash, 0, hash.length); System.arraycopy(frame, 0, data, 0, frame.length-hash.length); // get crc32 of data CRC32 crc = new CRC32(); crc.update(data); // compare crc32 of data with crc32 of frame return crc.getValue()==toLong(hash); } /** * Transfers a file through a channel in frames and reconstructs it into a new file. * @param jpg_file File name of target file to transfer. * @param err_file The channel noise file used to simulate corruption. * @param out_file The name of the newly-created file.
* @throws IOException */ public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException { // read file data into global variables jpg_data = readFileArray(new File(jpg_file)); err_data = readFileArray(new File(err_file)); err_pstn = 0; // variable that will hold the final (transfered) data byte[] out_data = new byte[jpg_data.length]; // holds the current frame data byte[] frame_orig = new byte[100]; byte[] frame_sent = new byte[100]; // send file in chunks (frames) of 100 bytes for(int i=0; i<Math.ceil(jpg_data.length/100); i++){ // copy jpg data into frame and init first-time switch System.arraycopy(jpg_data, i*100, frame_orig, 0, 100); boolean not_first=false; System.out.print("Packet #"+i+": "); // repeat getting same frame until frame crc matches with frame content do { if(not_first)System.out.print("F"); frame_sent=buildFrame(frame_orig); not_first=true; }while(!verifyFrame(frame_sent)); // usually, you'd constrain this by time to prevent infinite loops (in // case the channel is so wacked up it doesn't get a single packet right) // copy frame to image file System.out.println("S"); System.arraycopy(frame_sent, 0, out_data, i*100, 100); } System.out.println("\nDone."); writeFileArray(new File(out_file),out_data); } // global variables for file data and pointer public static byte[] jpg_data; public static byte[] err_data; public static int err_pstn=0; public static void main(String[] args) throws IOException { // list of jpg files String[] jpg_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg" }; // list of error patterns String[] err_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT" }; // loop through all jpg/channel combinations and run tests for(int x=0; x<jpg_file.length; x++){ for(int y=0; y<err_file.length; y++){ System.out.println("Transfering photo"+(x+1)+".jpg using Pattern "+(y+1)+"..."); transferFile(jpg_file[x],err_file[y],jpg_file[x].replace("photo","CH#"+y+"_photo")); } } } }
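
    A hedged guess at the logical mistake, based only on reading the posted code: corruptData() XORs the data array in place and returns the same reference, so buildFrame(frame_orig) corrupts frame_orig itself. The do/while loop finally exits when one round of noise happens to leave the frame untouched, but by then the frame already carries the XOR of every earlier failed round - and that damaged version is what gets written out. Copying before corrupting is one candidate fix:

      // Sketch: leave the caller's buffer intact and corrupt a copy.
      public static byte[] corruptData(byte[] data) {
          byte[] noise = nextNoise();
          byte[] crpt = data.clone();
          for (int i = 0; i < 100; i++) crpt[i] ^= noise[i];
          return crpt;
      }

    A second, smaller issue worth checking: Math.ceil(jpg_data.length/100) divides two ints before taking the ceiling, so a trailing partial frame would be silently dropped.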


  • Using {% url ??? %} in django templates

    - by user563247
    I have looked a lot on Google for answers on how to use the 'url' tag in templates, only to find many responses saying 'You just insert it into your template and point it at the view you want the URL for'. Well, no joy for me :( I have tried every permutation possible and have resorted to posting here as a last resort. So here it is. My urls.py looks like this: from django.conf.urls.defaults import * from login.views import * from mainapp.views import * import settings # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Example: # (r'^weclaim/', include('weclaim.foo.urls')), (r'^login/', login_view), (r'^logout/', logout_view), ('^$', main_view), # Uncomment the admin/doc line below and add 'django.contrib.admindocs' # to INSTALLED_APPS to enable admin documentation: # (r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: (r'^admin/', include(admin.site.urls)), #(r'^static/(?P<path>.*)$', 'django.views.static.serve',{'document_root': '/home/arthur/Software/django/weclaim/templates/static'}), (r'^static/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.MEDIA_ROOT}), ) My views.py in my login directory looks like: from django.shortcuts import render_to_response, redirect from django.template import RequestContext from django.contrib import auth def login_view(request): if request.method == 'POST': uname = request.POST.get('username', '') psword = request.POST.get('password', '') user = auth.authenticate(username=uname, password=psword) # if the user logs in and is active if user is not None and user.is_active: auth.login(request, user) return render_to_response('main/main.html', {}, context_instance=RequestContext(request)) #return redirect(main_view) else: return render_to_response('loginpage.html', {'box_width': '402', 'login_failed': '1',}, context_instance=RequestContext(request)) else: return render_to_response('loginpage.html', {'box_width': '400',}, context_instance=RequestContext(request)) def logout_view(request): auth.logout(request) return render_to_response('loginpage.html', {'box_width': '402', 'logged_out': '1',}, context_instance=RequestContext(request)) and finally the main.html to which login_view points looks like: <html> <body> test! <a href="{% url logout_view %}">logout</a> </body> </html> So why do I get 'NoReverseMatch' every time? *(on a slightly different note: I had to use 'context_instance=RequestContext(request)' at the end of all my render_to_response calls because otherwise it would not recognise {{ MEDIA_URL }} in my templates and I couldn't reference any css or js files. I'm not too sure why this is. Doesn't seem right to me)*
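
    A hedged sketch of one fix: with the old unquoted {% url %} syntax the argument must be either a pattern name or a dotted view path that reverse() can find, and the posted patterns have no names. Naming the logout pattern and using that name is one way to get a match:

      # urls.py - give the pattern a name (url() is already imported by
      # the from django.conf.urls.defaults import * line)
      url(r'^logout/$', logout_view, name='logout_view'),

    and in the template:

      <a href="{% url logout_view %}">logout</a>

    The dotted-path form ({% url login.views.logout_view %}) would be the alternative if the patterns stay unnamed.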


  • Can I make a "TCP packet modifier" using tun/tap and raw sockets?

    - by benhoyt
    I have a Linux application that talks TCP, and to help with analysis and statistics, I'd like to modify the data in some of the TCP packets that it sends out. I'd prefer to do this without hacking the Linux TCP stack. The idea I have so far is to make a bridge which acts as a "TCP packet modifier". My idea is to connect to the application via a tun/tap device on one side of the bridge, and to the network card via raw sockets on the other side of the bridge. My concern is that when you open a raw socket it still sends packets up to Linux's TCP stack, and so I couldn't modify them and send them on even if I wanted to. Is this correct? A pseudo-C-code sketch of the bridge looks like: tap_fd = open_tap_device("/dev/net/tun"); raw_fd = open_raw_socket(); for (;;) { select(fds = [tap_fd, raw_fd]); if (FD_ISSET(tap_fd, &fds)) { read_packet(tap_fd); modify_packet_if_needed(); write_packet(raw_fd); } if (FD_ISSET(raw_fd, &fds)) { read_packet(raw_fd); modify_packet_if_needed(); write_packet(tap_fd); } } Does this look possible, or are there other better ways of achieving the same thing? (TCP packet bridging and modification.)
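
    For the tun/tap half, a hedged sketch of what open_tap_device() from the pseudocode might look like on Linux, using the standard TUNSETIFF ioctl (the interface name is illustrative):

      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/if.h>
      #include <linux/if_tun.h>

      /* Returns an fd whose read()s and write()s carry raw IP packets. */
      int open_tap_device(const char *name) {
          struct ifreq ifr;
          int fd = open("/dev/net/tun", O_RDWR);
          if (fd < 0) return -1;
          memset(&ifr, 0, sizeof(ifr));
          ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   /* IP packets, no extra header */
          strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
          if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
          return fd;
      }

    On the raw-socket side, the concern in the question is generally well founded: packets read from a packet socket are copies, and the kernel stack still sees the originals, so a bridge like this usually also needs the host stack kept out of the path (for example an address-less interface or firewall rules) - worth confirming for the setup described.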


  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false), demographic classification (A, B, C etc.), place of birth, date of birth, and weight (recorded daily): the fact that is being recorded. My requirements are to be able to run 'OLAP' queries that allow me to: 'slice and dice', 'drill up/down' the data, and generally be able to view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are: Date dimension. Geographical location. Which have hierarchical attributes. However, I am struggling with how to model the following fields: sex (true/false), demographic classification (A, B, C etc). The reason I am struggling with these fields is that: They have no obvious hierarchical attributes which will aid aggregation (AFAIA) - which suggests they should be in a fact table. They are mostly static or very rarely change - which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema: CREATE TABLE sex_type (is_male int); CREATE TABLE demographic_category (id int, name varchar(4)); may not be the correct one.
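
    A hedged star-schema sketch of that reading (names and types illustrative): sex and demographic classification become tiny dimension tables - they describe the subject rather than measure anything - and the fact row holds only keys plus the weight:

      CREATE TABLE dim_sex (
          sex_key  INT PRIMARY KEY,
          is_male  INT
      );
      CREATE TABLE dim_demographic (
          demo_key INT PRIMARY KEY,
          name     VARCHAR(4)       -- A, B, C ...
      );
      CREATE TABLE fact_weight (
          date_key     INT,
          location_key INT,
          sex_key      INT REFERENCES dim_sex (sex_key),
          demo_key     INT REFERENCES dim_demographic (demo_key),
          weight       DECIMAL(6,2) -- the measure
      );

      -- "How do male and female weights compare across classifications?"
      SELECT d.name, s.is_male, AVG(f.weight)
      FROM fact_weight f
      JOIN dim_sex s         ON s.sex_key  = f.sex_key
      JOIN dim_demographic d ON d.demo_key = f.demo_key
      GROUP BY d.name, s.is_male;

    Dimensions don't need hierarchies to be useful; tiny one-attribute dimensions like this are common, and keeping the descriptive columns out of the fact table keeps the fact narrow.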


  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the ViewModel is starting to get bloated. There are two main issues. One is that to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view-specific information. For example, a boolean keeping track of the status of a long-running process in the VM, so the view can disable some of its controls while the long-running process is working. I've read that this issue could be solved with Attached Behaviors. I'll look more into that. In the example MVVM apps you see online, this isn't a big deal because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four commands. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM), so all the business logic lives in the VM. I considered moving each command into a separate unit of work. I'm not sure the best way to do this. Which patterns are you guys using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated, you should break them into many views/VMs". It is certainly not too complicated from a UX perspective - there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.
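
    A hedged sketch of the "command as its own unit of work" idea (the service interface here is made up for illustration): each command becomes a small ICommand class holding its own logic, and the VM shrinks to exposing instances:

      using System;
      using System.Windows.Input;

      public interface IOrderService   // hypothetical dependency, for illustration
      {
          bool IsBusy { get; }
          void Submit(object order);
      }

      public class SubmitCommand : ICommand
      {
          private readonly IOrderService _orders;

          public SubmitCommand(IOrderService orders) { _orders = orders; }

          public bool CanExecute(object parameter) { return !_orders.IsBusy; }

          public void Execute(object parameter) { _orders.Submit(parameter); }

          public event EventHandler CanExecuteChanged;
      }

    The VM then just does SubmitCommand = new SubmitCommand(orderService), and testing the command no longer drags the whole VM along.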

