Search Results

Search found 27800 results on 1112 pages for 'state machine'.

  • Installing the latest version of R-base

    - by Student
    I have been unsuccessfully trying to install the latest version (2.15.2) of r-base. Apparently, the R package "Rcpp" will not install for R version 2.14.1, the version that installs for me. I am not sure what, how, or where to change my installation attempts, which appear below. Please note that I am using ubuntu-12.04.1-server-i386.

    (1) The currently installed version is R version 2.14.1 (2011-12-22):

        sudo apt-get install r-base
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        r-base is already the newest version.

    (2) Including version information doesn't help:

        sudo apt-get install r-base=2.15.1-5ubuntu1
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Version '2.15.1-5ubuntu1' for 'r-base' was not found

    (3) Changes based on the CRAN Ubuntu instructions (http://cran.r-project.org/bin/linux/ubuntu/README):

    3.1: Added to /etc/apt/sources.list:

        deb http://lib.stat.cmu.edu/R/CRAN/bin/linux/ubuntu quantal/

    3.2: sudo apt-get update

    3.3: sudo apt-get install r-base

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         r-base : Depends: r-base-core (= 2.15.2-1quantal2) but it is not going to be installed
                  Depends: r-recommended (= 2.15.2-1quantal2) but it is not going to be installed
                  Recommends: r-base-html but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

  • Implementing an FSM in AS 2.0?

    - by Up2u
    I have seen many references on AI and FSMs, like http://www.richardlord.net/blog/fini...n-actionscript, and sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project that includes an AI, and the AI has three states, which I call distance check, chase target, and hit target. The game I am creating is an FPS played with the mouse. I have already created the AI (successfully), but I want to convert it to the FSM approach. I created a function CheckDistanceState(), and in that function I lock onto a target from an array sorted by nearest distance; locking on triggers the function ChaseState(), and in ChaseState() I call a Hit() function to destroy the enemy. I call these three functions from AI_cursor.onEnterFrame (an FPS game that only has a cursor on the stage). Is there any chance to implement an FSM in my code without creating a class? From what I have read, creating a class means creating external code outside of the frame (I am used to coding in frames), and I still don't understand how that works. Sorry if my explanation is not clear...
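    An FSM does not require one class per state. A common lightweight alternative stores the current state as a reference to a function and swaps that reference to change state. The sketch below shows the idea in C# rather than AS 2.0, and every name in it is made up to mirror the three states in the question; treat it as an illustration of the pattern, not as code from the post.

        using System;

        // A minimal function-table FSM: each state is a method, and the
        // machine stores "which method to call on the next update".
        class EnemyAi
        {
            private Action currentState;

            public EnemyAi()
            {
                currentState = CheckDistance; // initial state
            }

            // Call once per frame (the equivalent of onEnterFrame).
            public void Update()
            {
                currentState();
            }

            private void CheckDistance()
            {
                bool targetLocked = FindAndLockNearestTarget(); // hypothetical helper
                if (targetLocked)
                    currentState = Chase; // a transition is just swapping the delegate
            }

            private void Chase()
            {
                bool inRange = MoveTowardTarget(); // hypothetical helper
                if (inRange)
                    currentState = Hit;
            }

            private void Hit()
            {
                DestroyTarget(); // hypothetical helper
                currentState = CheckDistance; // start over
            }

            private bool FindAndLockNearestTarget() { return true; }
            private bool MoveTowardTarget() { return true; }
            private void DestroyTarget() { }
        }

    The same pattern carries over to AS 2.0, since functions are first-class values there too: keep the current state function in a variable and call it from onEnterFrame, reassigning the variable to change state.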

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started I had no idea what I was doing, so it began as a proof of concept. It is coming along nicely now, but it has no automated tests. The game is becoming complex enough that it could benefit from automated testing, but that seems tough to do because the code depends on mutating global state.

    I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch in a functional programming style, with tests. But I think refactoring the imperative code into declarative code might be a better learning experience, and a safer way to preserve my current functionality.

    The problem is, I know what I want my code to look like in the end, but I don't know how to turn my current code into it. I'm hoping some people here could give me some tips a la the Refactoring book and Working Effectively With Legacy Code. For example, as a first step I'm thinking about "banning" global state: take every function that uses a global variable and pass the value in as a parameter instead. The next step may be to "ban" mutation, and to always return a new object. Any advice would be appreciated. I've never taken OO code and refactored it into functional code before.
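    As an illustration of that first step, here is a before/after sketch. The question is about JavaScript, so this C# version, with invented names, is purely an illustration of the refactoring move, not code from the game:

        // Before: the function reads and mutates a global, so testing it
        // requires setting up and inspecting shared state.
        static int score = 0;

        static void AddKillPoints()
        {
            score += 100; // hidden dependency on, and mutation of, a global
        }

        // After: the state comes in as a parameter and a new value comes
        // out. The caller decides where the result lives, which makes the
        // function trivial to unit test in isolation.
        static int AddKillPoints(int currentScore)
        {
            return currentScore + 100;
        }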

  • Persisting high score table in flash game without a network. (Featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something similar to keep a record of the scores. The only problem is that the machine we'd be running on probably won't have network access.

    As a quick solution, I thought I'd write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic, like blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server would pretty much exist to keep a safe copy of the scores that the game can add to and request, and occasionally the server will write the scores to a flat text file.

    But HttpListener is giving me error code 87, 'The parameter is incorrect.' Have any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:66666/");
        listener.Start();

    The exception happens at listener.Start(), and the stack trace is:

        at System.Net.HttpListener.AddAllPrefixes()
        at System.Net.HttpListener.Start()
        at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
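    One detail worth checking: TCP port numbers only go up to 65535, so the prefix http://localhost:66666/ is out of range, which is consistent with 'The parameter is incorrect.' A minimal listener with a valid port might look like the sketch below (port 8080 and the "OK" response are arbitrary choices for illustration):

        using System;
        using System.Net;

        class ScoreServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                // Valid TCP ports run from 1 to 65535, so 66666 is rejected
                // as an incorrect parameter. 8080 here is just an example.
                listener.Prefixes.Add("http://localhost:8080/");
                listener.Start();
                Console.WriteLine("Listening on http://localhost:8080/");

                while (true)
                {
                    // Block until a request arrives, then send a trivial reply.
                    HttpListenerContext context = listener.GetContext();
                    byte[] body = System.Text.Encoding.UTF8.GetBytes("OK");
                    context.Response.ContentLength64 = body.Length;
                    context.Response.OutputStream.Write(body, 0, body.Length);
                    context.Response.Close();
                }
            }
        }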

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. I have a view which is a table, in which the user can select a few rows. I update the view as soon as the user clicks on any row, since I don't want the UI to feel slow. This update is done by logic that runs in the controller thread. At the same time, the controller updates the model data, which takes place on a different thread: the controller puts the query in a queue, which is then executed by the model thread (a single-threaded interface). As soon as the query executes, the controller gets a signal.

    Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action. But I am facing issues because it takes a long time for the model to return the result, and by that time the user may have performed multiple clicks. As a result of updating the view again based on the information from the model, the view sometimes goes back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when data comes back from the model, which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, I receive the data for the first click from the model, and as a result the view goes back to the state generated by the first click.)

    Is there any way to handle such a synchronization issue?
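    One common remedy is to stamp every request with an increasing sequence number and have the controller discard any model reply that does not match the most recent request. The sketch below assumes clicks arrive on a single UI thread; every class and method name in it is hypothetical, invented to match the scenario in the question:

        using System;
        using System.Threading;

        class SelectionController
        {
            private long latestRequestId;

            // Called on the UI thread for every click.
            public void OnRowClicked(int row)
            {
                long requestId = Interlocked.Increment(ref latestRequestId);
                UpdateViewImmediately(row);      // optimistic UI update
                QueueModelQuery(row, requestId); // executed later by the model thread
            }

            // Called when the model thread signals that a query finished.
            private void OnModelResult(int row, long requestId)
            {
                // A reply for anything but the most recent request is stale:
                // the view has already moved on, so drop it.
                if (requestId != Interlocked.Read(ref latestRequestId))
                    return;

                UpdateViewFromModel(row);
            }

            private void UpdateViewImmediately(int row) { /* ... */ }
            private void QueueModelQuery(int row, long id) { /* ... */ }
            private void UpdateViewFromModel(int row) { /* ... */ }
        }

    With this in place, the slow reply for the first click is simply ignored once the second and third clicks have bumped the sequence number, so the view never snaps back to an old state.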

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, so we decided to store it outside the codebase. Usually the reason is one of the following:

    1. It's something that non-technical people want to edit. One example is copywriting for a website: the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later into the database.

    2. It's something that we want to be able to change quickly, without deploying new code (which we currently do twice a week). An example would be the features currently available to customers at different pricing tiers. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it:

    - Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system.

    - Developers who run the code locally see the system in a significantly different state from how it runs in production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier: if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it?

    Any ideas how best to solve these problems? Is there a better approach for handling this content that I'm overlooking?

  • Boundary conditions for testing

    - by Loggie
    OK, so in a programming test I was given the following question.

    Question 1 (1 mark). Spot the potential bug in this section of code:

        void Class::Update( float dt )
        {
            totalTime += dt;
            if( totalTime == 3.0f )
            {
                // Do state change
                m_State++;
            }
        }

    The multiple-choice answers for this question were:

    a) It has a constant floating point number where it should have a named constant variable
    b) It may not change state with only an equality test
    c) You don't know what state you are changing to
    d) The class is named poorly

    I wrongly answered this with answer c. I eventually received feedback on the answers, and the feedback for this question was:

        Correct answer is a. This is about understanding correct boundary conditions for tests. The other answers are arguably valid points, but do not indicate a potential bug in the code.

    My question here is: what does this have to do with boundary conditions? My understanding of boundary conditions is checking that a value is within a certain range, which isn't the case here. Looking over the question, in my opinion b should be the correct answer, considering the accuracy issues of using floating point values.
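    For what it's worth, the usual fix combines a and b: compare against a named constant with >= rather than ==, because totalTime accumulates arbitrary dt increments and can step over 3.0 without ever being exactly equal to it. A hedged sketch follows, written in C# although the original snippet is C++; the constant name and the reset behavior are my inventions:

        class StateTimer
        {
            private const float StateChangeTime = 3.0f; // named constant (answer a)
            private float totalTime;
            private int state;

            public void Update(float dt)
            {
                totalTime += dt;
                // >= instead of ==: dt is rarely an exact divisor of 3.0, so
                // totalTime can jump from, say, 2.9994 to 3.0161, and an
                // equality test would never fire (answer b).
                if (totalTime >= StateChangeTime)
                {
                    state++;
                    totalTime = 0.0f; // or subtract StateChangeTime, depending on intent
                }
            }
        }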

  • Advanced Experiments with JavaScript, CSS, HTML, JavaFX, and Java

    - by Geertjan
    Once you're embedding JavaScript, CSS, and HTML into your Java desktop application, via the JavaFX browser, a whole range of new possibilities opens up to you. For example, here's an impressive page on-line; notice that you can drag items and drop them in new places: http://nettuts.s3.amazonaws.com/127_iNETTUTS/demo/index.html

    The source code of the above is provided too, so you can drop the various files directly into your NetBeans module and use the JavaFX WebEngine to load the HTML page into the JavaFX browser. Once the JavaFX browser is in a NetBeans TopComponent, you'll have the start of an off-line news composer, something like this:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        webengine = view.getEngine();
        URL url = getClass().getResource("index.html");
        webengine.load(url.toExternalForm());
        webengine.getLoadWorker().stateProperty().addListener(new ChangeListener() {
            @Override
            public void changed(ObservableValue ov, State oldState, State newState) {
                if (newState == State.SUCCEEDED) {
                    Document document = (Document) webengine.executeScript("document");
                    NodeList list = document.getElementById("columns").getChildNodes();
                    for (int i = 0; i < list.getLength(); i++) {
                        EventTarget et = (EventTarget) list.item(i);
                        et.addEventListener("click", new EventListener() {
                            @Override
                            public void handleEvent(Event evt) {
                                instanceContent.add(new Date());
                            }
                        }, true);
                    }
                }
            }
        });

    The above shows how, whenever a news item is clicked, the current date can be published into the Lookup. As you can see, I have a viewer component listening to the Lookup for dates.

  • Trouble installing gnome-shell-extensions-user-theme, dependency/PPA conflict?

    - by Drex
    I installed GNOME Tweak Tool and am trying to set up custom themes and whatnot. So, trying to install gnome-shell-extensions-user-theme:

        me@computer:~$ sudo apt-get install gnome-shell-extensions-user-theme
        [sudo] password for me:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gnome-shell-extensions-user-theme : Depends: gnome-shell-extensions-common but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Not going to be installed? Okay, let's see about that...

        me@computer:~$ sudo apt-get install gnome-shell-extensions-common
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-shell-extensions-common is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Wait, what? Broken packages? Ruh Roh! It seems to me it might be a PPA conflict or something, but I'm tired of trashing my installs. Kinda lost here. Any ideas?

    Output of sudo apt-get install -f:

        drex@U110:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

  • How to choose between Tell don't Ask and Command Query Separation?

    - by Dakotah North
    The principle Tell, Don't Ask says:

        You should endeavor to tell objects what you want them to do; do not ask them questions about their state, make a decision, and then tell them what to do. The problem is that, as the caller, you should not be making decisions based on the state of the called object that result in you then changing the state of the object. The logic you are implementing is probably the called object's responsibility, not yours. For you to make decisions outside the object violates its encapsulation.

    A simple example of the "ask" version is:

        Widget w = ...;
        if (w.getParent() != null) {
            Panel parent = w.getParent();
            parent.remove(w);
        }

    and the "tell" version is:

        Widget w = ...;
        w.removeFromParent();

    But what if I need to know the result of the removeFromParent method? My first reaction was just to change removeFromParent to return a boolean denoting whether the parent was removed or not. But then I came across the Command Query Separation pattern, which says NOT to do this. It states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, asking a question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects.

    Are these two really at odds with each other, and how do I choose between the two? Do I go with the Pragmatic Programmer or Bertrand Meyer on this?
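    One pragmatic middle ground is the Try- convention, where a command returns a success flag rather than queried object state; this is widely treated as an acceptable bend of strict CQS. A hedged C# sketch follows; the Widget/Panel API is taken from the question, and everything else is my invention:

        class Widget
        {
            private Panel parent;

            // Pure command: no return value, per strict Command Query Separation.
            public void RemoveFromParent()
            {
                if (parent != null)
                {
                    parent.Remove(this);
                    parent = null;
                }
            }

            // Pragmatic compromise: the bool reports whether the command had
            // any effect. This mirrors the Try- convention (e.g. int.TryParse),
            // where the return value is a status flag, not object state the
            // caller should branch on to then mutate the object further.
            public bool TryRemoveFromParent()
            {
                if (parent == null)
                    return false;
                parent.Remove(this);
                parent = null;
                return true;
            }
        }

        class Panel
        {
            public void Remove(Widget w) { /* ... */ }
        }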

  • Why does SQL 2005 SSIS component install fail?

    - by Ducain
    I am trying to install SSIS on our production SQL 2005 SP2 box. Each time I try, the install/setup screen results in failure, starting with the native client and moving on down. Here is the result of clicking on the status link to the right of the native client after the install failed:

        === Verbose logging started: 3/28/2012 16:38:08 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe ===
        MSI (c) (DC:00) [16:38:08:875]: Resetting cached policy values
        MSI (c) (DC:00) [16:38:08:875]: Machine policy value 'Debug' is 0
        MSI (c) (DC:00) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: **********
        MSI (c) (DC:00) [16:38:08:875]: Client-side and UI is none or basic: Running entire install on the server.
        MSI (c) (DC:00) [16:38:08:875]: Grabbed execution mutex.
        MSI (c) (DC:00) [16:38:08:875]: Cloaking enabled.
        MSI (c) (DC:00) [16:38:08:875]: Attempting to enable all disabled priveleges before calling Install on Server
        MSI (c) (DC:00) [16:38:08:875]: Incrementing counter to disable shutdown. Counter after increment: 0
        MSI (s) (90:F0) [16:38:08:875]: Grabbed execution mutex.
        MSI (s) (90:D4) [16:38:08:875]: Resetting cached policy values
        MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'Debug' is 0
        MSI (s) (90:D4) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: **********
        MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'DisableUserInstalls' is 0
        MSI (s) (90:D4) [16:38:08:890]: Warning: Local cached package 'C:\WINDOWS\Installer\65eb99.msi' is missing.
        MSI (s) (90:D4) [16:38:08:890]: User policy value 'SearchOrder' is 'nmu'
        MSI (s) (90:D4) [16:38:08:890]: User policy value 'DisableMedia' is 0
        MSI (s) (90:D4) [16:38:08:890]: Machine policy value 'AllowLockdownMedia' is 0
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media enabled only if package is safe.
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Looking for sourcelist for product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D}
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Adding {F9B3DD02-B0B3-42E9-8650-030DFF0D133D}; to potential sourcelist list (pcode;disk;relpath).
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Now checking product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D}
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media is enabled for product.
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Attempting to use LastUsedSource from source list.
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Trying source C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\Cache\.
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Source is invalid due to invalid package code (product code doesn't match).
        MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483646 3: sqlncli.msi
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing net source list.
        MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi
        MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing media source list.
        MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Trying media source F:\.
        MSI (s) (90:D4) [16:38:09:921]: Note: 1: 2203 2: F:\sqlncli.msi 3: -2147287038
        MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Source is invalid due to missing/inaccessible package.
        MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi
        MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Processing URL source list.
        MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1402 2: UNKNOWN\URL 3: 2
        MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi
        MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: 3: sqlncli.msi
        MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Failed to resolve source
        MSI (s) (90:D4) [16:38:09:921]: MainEngineThread is returning 1612
        MSI (c) (DC:00) [16:38:09:921]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
        MSI (c) (DC:00) [16:38:09:921]: MainEngineThread is returning 1612
        === Verbose logging stopped: 3/28/2012 16:38:09 ===

    Here is the log visible when I click the failed status for MSXML6:

        === Verbose logging started: 3/28/2012 16:38:12 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe ===
        MSI (c) (DC:58) [16:38:12:250]: Resetting cached policy values
        MSI (c) (DC:58) [16:38:12:250]: Machine policy value 'Debug' is 0
        MSI (c) (DC:58) [16:38:12:250]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: **********
        MSI (c) (DC:58) [16:38:12:250]: Client-side and UI is none or basic: Running entire install on the server.
        MSI (c) (DC:58) [16:38:12:250]: Grabbed execution mutex.
        MSI (c) (DC:58) [16:38:12:250]: Cloaking enabled.
        MSI (c) (DC:58) [16:38:12:250]: Attempting to enable all disabled priveleges before calling Install on Server
        MSI (c) (DC:58) [16:38:12:250]: Incrementing counter to disable shutdown. Counter after increment: 0
        MSI (s) (90:58) [16:38:12:265]: Grabbed execution mutex.
        MSI (s) (90:DC) [16:38:12:265]: Resetting cached policy values
        MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'Debug' is 0
        MSI (s) (90:DC) [16:38:12:265]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: **********
        MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'DisableUserInstalls' is 0
        MSI (s) (90:DC) [16:38:12:265]: Warning: Local cached package 'C:\WINDOWS\Installer\ce6d56e.msi' is missing.
        MSI (s) (90:DC) [16:38:12:265]: User policy value 'SearchOrder' is 'nmu'
        MSI (s) (90:DC) [16:38:12:265]: User policy value 'DisableMedia' is 0
        MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'AllowLockdownMedia' is 0
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media enabled only if package is safe.
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Looking for sourcelist for product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA}
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Adding {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA}; to potential sourcelist list (pcode;disk;relpath).
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Now checking product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA}
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media is enabled for product.
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Attempting to use LastUsedSource from source list.
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Trying source d:\2a2ac35788eea9066bae01\.
        MSI (s) (90:DC) [16:38:12:265]: Note: 1: 2203 2: d:\2a2ac35788eea9066bae01\msxml6.msi 3: -2147287037
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Source is invalid due to missing/inaccessible package.
        MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing net source list.
        MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi
        MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing media source list.
        MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Trying media source F:\.
        MSI (s) (90:DC) [16:38:12:296]: Note: 1: 2203 2: F:\msxml6.msi 3: -2147287038
        MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Source is invalid due to missing/inaccessible package.
        MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi
        MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Processing URL source list.
        MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1402 2: UNKNOWN\URL 3: 2
        MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi
        MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: 3: msxml6.msi
        MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Failed to resolve source
        MSI (s) (90:DC) [16:38:12:296]: MainEngineThread is returning 1612
        MSI (c) (DC:58) [16:38:12:296]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
        MSI (c) (DC:58) [16:38:12:296]: MainEngineThread is returning 1612
        === Verbose logging stopped: 3/28/2012 16:38:12 ===

    When I click on the failed status for SSIS, no log file appears at all. To be honest, I'm not even sure where to start on this one; I never guessed it would be so much trouble to add a component right from the disc. Any help or pointers whatsoever would be greatly appreciated. If any more details are needed, please ask; I'd be glad to add them.

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release. I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:

    - DC: Domain Controller
    - ExchangeUM: Exchange Server 2010
    - CS-SE: Microsoft Communications Server 2010 Standard Edition
    - TS: Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:

    - You have an existing CS 2010 site and cluster configured (we'll look at this in a future post)
    - You're starting with a fully patched Windows Server 2008 R2 machine
    - The machine is joined to your domain

    This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer's point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning; all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication.

    How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components. This checks for the following components and installs them if they are not already present:

    - VcRedist
    - Sqlexpress
    - Sqlnativeclient
    - Sqlbackcompat
    - Ucmaredist
    - OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

        Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"

    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server.
    Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

        New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>

    Verify that the new server was added to the CS topology by running the following PowerShell command:

        (Get-CsTopology -AsXml).ToString() > Topology.xml

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of.

        <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
          <ClusterId SiteId="UcMarketing2" Number="5" />
          <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
            <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
          </Machine>
        </Cluster>

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server.

    From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

        Enable-CSReplica

    The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

        Get-CSManagementStoreReplicationStatus

    You can see in the screenshot below that the UpToDate property of the new server is still False. Run the following PowerShell command to force the replication to run:

        Invoke-CSManagementStoreReplicationStatus

    Run Get-CSManagementStoreReplicationStatus again to verify that the new service is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

        Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>

    Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate.
        <?xml version="1.0" encoding="utf-8"?>
        <Action Name="Request-CsCertificate" Time="20100512T212258">
          <Action Name="Request-CsCertificate" Time="20100512T212258">
            <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
            <Action Time="20100512T212258">
              <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
              <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
              <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
              <Info Time="20100512T212259">The certificate was issued.</Info>
              <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
            <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
              <Info Time="20100512T212259">0 updates</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command has completed</Info>
          </Action>
        </Action>

    Run the following PowerShell command to set the certificate:

        Set-CsCertificate -Type Default -Thumbprint <Thumbprint>

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.

  • Using Twain Dot Net in XBAP (Deployed via ASP.NET)

    - by Kaveh Shahbazian
    First version: Is there a way to use Twain Dot Net in an XBAP (WPF in the browser)?

    Second version: I have a setup exe (installer) that puts TwainDotNet.dll and TwainDotNet.Wpf.dll on the client machine and registers them in the GAC (using gacutil.exe). I also have an XBAP page on my server (IIS). (The XBAP part of the project works fine locally, and I am using those two Twain libraries registered in the GAC locally too.) On the client machine I have registered my generated certificate in Trusted Root and Publishers. I have tested my XBAP without the Twain libs on the client machine(s) and it works fine; the test XBAP edits a text file on the client machine's hard drive.

    Now, when I browse my XBAP on a client, I get: "Error getting information about the default source: Failure", which I think happens in GetDefault of the DataSource class. Is there any workaround? Thanks

  • Performing a simple stack overflow on Mac OS 10.6

    - by REALFREE
    I'm trying to learn about stack-based overflows and wrote a simple program to exploit the stack. But somehow it doesn't work at all, showing only "Abort trap" on my machine (Mac OS Leopard). I guess Mac OS treats overflows differently; it won't allow me to overwrite memory through C code. For example:

        strcpy(buffer, input); // let's say char buffer[6], but input is 7 bytes

    On a Linux machine, this code successfully overwrites the next stack location, but it is prevented on Mac OS (Abort trap). Does anyone know how to perform a simple stack-based overflow on a Mac?

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You will learn how Scrum can help a team of developers to successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website, for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and do at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items, and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog.

    How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person, and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog, the shopping cart, and the team is making good progress. Unfortunately, however, halfway through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their development efforts to focus on implementing the product catalog. However, partway through completing this work, once again the Product Owner changes his mind about the highest priority item.

    Getting work done when priorities are constantly shifting is frustrating for the developer team, and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint, the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint.

    Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints. For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff, as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help, even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

    In Scrum, these conflicting needs (limiting meetings but enabling team coordination) are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short, typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog, such as building a shopping cart or creating a Facebook page, are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the Stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

    - Enable users to add and remove items from the shopping cart
    - Persist the shopping cart to the database between visits
    - Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow, the developer should be referring to a task. Stories are owned by the Product Owner, and a Story is all about business value. In contrast, the tasks are owned by the developer team, and a task is all about implementation details. A Story might take several days or weeks to complete. A task is something which a developer can complete in less than a day.

    Some teams get lazy about breaking Stories into tasks. Neglecting to break Stories into tasks can lead to "Never Ending Stories". If you don't break a Story into tasks, then you can't know how much of a Story has actually been completed, because you don't have a clear idea about the implementation steps required to complete the Story.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the Stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing.

    Some teams use a physical Scrumboard. In that case, you use index cards to represent the Stories and the tasks, and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team; for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that "the project will be done when it is done", because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they have an estimate of how long it will take to complete it.

    Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don't know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like finding a cure for cancer than building a brick wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and to estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don't know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take.

    So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use coffee cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the Story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

    When a Sprint starts, the developer team devotes more time to thinking about the Stories in the Sprint, and the developer team breaks the Stories into tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a Story.

    Burndown Charts

    In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours.

    When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, though. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill, then you know that you are in trouble.

    Team Velocity

    Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by.

    In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of Stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

    Scrum Master

    There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the Stories. I've also described the role of the developer team. The members of the developer team do the work of implementing the Stories by breaking the Stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

    The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

    Using SonicAgile

    SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of Stories. You can estimate the size of the Stories using different Story Point units such as shirt sizes and coffee cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many Stories to a Sprint; in other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each Story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

    Summary

    In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of Stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts: you learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project. Finally, I described the crucial role of the Scrum Master, the person who is responsible for ensuring that the rules of Scrum are being followed.

    My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber's book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls. The idea is we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database. Self-Tracking Entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state, when you modify a property, it sets the Modified state, it can also have Deleted state but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of: protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>(); protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection) { var deletedEntities = new List<TEntity>(); DeletedCollections[collection.GetHashCode()] = deletedEntities; collection.CollectionChanged += (o, a) => { if (a.OldItems != null) { deletedEntities.AddRange(a.OldItems.Cast<TEntity>()); } }; } Now if the user decides to save his changes to the server, I can get the list of removed items, and send them along: ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers(); customers.RemoveAt(0); MyServiceProxy.UpdateCustomers(customers); At this point the UpdateCustomers method will verify my shadow collection if any items were removed, and send them along to the server side. This approach works fine, until you start to think about the life-cycle these shadow collections. Basically, when the ObservableCollection is garbage collected there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with some complicated solution that basically does manual memory management in this case. I keep a WeakReference to the ObservableCollection and every few seconds I check to see if the reference is inactive, in which case I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution. Thanks!

  • CUDA program on VMware

    - by scatman
    I wrote a CUDA program and I am testing it on Ubuntu as a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need to use a Linux operating system for testing. My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code be faster if I run it under my primary operating system than if I run it on a virtual machine?

  • Recycle remote IIS app pool

    - by Abhijeet Patel
    I would like to use DirectoryServices to list and recycle app pools hosted on any machine in my workgroup. My approach is similar to some of the answers posted to this question, but in my case I'd like to do this for a remote machine running IIS 6. I'm prototyping this as a console app, but will eventually provide a web interface that allows recycling a selected app pool on a specified machine. Where can I specify the credentials to use when making the DirectoryServices call to a remote machine? I hope I'm phrasing this correctly.
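    For reference, System.DirectoryServices.DirectoryEntry has a constructor overload that takes a username and password, which is the usual place to supply credentials when binding to the IIS 6 metabase over ADSI. A sketch follows; the machine, pool, and account names are placeholders:

        using System;
        using System.DirectoryServices;

        class AppPoolRecycler
        {
            static void Main()
            {
                // ADSI path to a specific app pool on a remote machine;
                // "server01" and "MyAppPool" are placeholder names. The
                // credentials go straight into the DirectoryEntry overload.
                using (var pool = new DirectoryEntry(
                    "IIS://server01/W3SVC/AppPools/MyAppPool",
                    @"DOMAIN\adminUser",
                    "password"))
                {
                    // IIsApplicationPool objects expose a Recycle method via ADSI.
                    pool.Invoke("Recycle", null);
                }

                // Listing pools works the same way: bind to the AppPools node
                // with the same credentials and enumerate its children.
                using (var pools = new DirectoryEntry(
                    "IIS://server01/W3SVC/AppPools",
                    @"DOMAIN\adminUser",
                    "password"))
                {
                    foreach (DirectoryEntry child in pools.Children)
                    {
                        Console.WriteLine(child.Name);
                    }
                }
            }
        }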

    Read the article

  • Supplementary Developer Laptop

    - by David Silva Smith
    I'm looking to buy a laptop with the following specs for a developer. The goal is to have a development machine supplementing the dev's desktop. During work hours the dev will be on a beefy desktop. For working while on the go (trains, client sites, code camps), it would be nice to have a machine that can run Visual Studio 2008 without needing to remote desktop into the primary machine. What do you think is the lowest-cost laptop meeting this need? Here are the specs I have in mind:

    - SSD drive, 64 GB. It doesn't need to be huge; most data is stored on servers, but it will need to fit Windows 7, IIS, SQL Server, and Visual Studio 2010.
    - RAM: 3 GB
    - Processor: Pentium Core 2 Duo
    - Screen size: 14 inches
    - OS doesn't matter; it will be paved with Windows 7 Ultimate.
    - An omitted optical drive would be a plus.
    - Weight and battery life aren't so important, because the machine will be plugged in almost all the time.

    Read the article

  • Stack overflow in XP cmd console

    - by Dave
    I am using an older program whose source code I cannot see, running it in the cmd.exe console on Windows XP. The program ran with no problems on one XP machine last year, while a stack overflow (code 2000) error was observed on a different XP machine (easy fix: use the machine that works). I tried running the program on the previously working machine recently, and now I am getting the same error. No changes were made to the OS, and I did not change the service pack version. Any ideas on how to get around this stack overflow error so I can use the program? DOSBox will at least open the program, but it does not run to completion. Thanks!

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations.

    We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place that a stable Windows machine can access. We’ll go through several methods, starting each section from the Ubuntu desktop. If you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered!

    Use a Healthy Hard Drive

    If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows.

    Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive.

    Connect a USB Hard Drive or Flash Drive

    An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu.

    If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right-click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing.

    Connect to a Windows PC on your Local Network

    If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare.

    Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service. Click the Install service button. Some files will be downloaded, and then installed. When they’re done installing, you’ll be appropriately notified. You will be prompted to restart your session. Don’t worry, this won’t actually log you out, so go ahead and press the Restart session button.

    The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you’d like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share. Allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder’s icon. At this point, you are done with the Ubuntu machine.

    Head to your Windows PC, and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: This example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier, as well as any other folders you’ve shared from Ubuntu. Double-click on the folder you want to access, and from there, you can move the files from the machine booted with Ubuntu to your Windows PC.

    Upload to an Online Service

    There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren’t transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth and to save time clicking on files, as uploading a single file will be much less work than a ton of little files.

    To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We’re using .zip because we can open it anywhere and the compression rate is acceptable. Click Create, and the compressed file will show up in the location selected in the Compress window.

    Dropbox

    If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a total limit of 2 GB of files. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button on top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…, find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes.

    Google Docs

    Google Docs allows the upload of any type of file, making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB.
    Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety’s sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload.

    Go Online – Through FTP

    If you have access to an FTP server (perhaps through your web hosting company, or because you’ve set up an FTP server on a different machine), you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don’t go over your quota, if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) Service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password. You can choose to remember it until you log out, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client.

    Conclusion

    While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet, and this article has only scratched the surface. Whatever the storage medium, Ubuntu’s got an interface for it!

    Read the article

  • TypeInitializationException on MVVM pattern

    - by Mohit Deshpande
    System.TypeInitializationException was unhandled
      Message=The type initializer for 'SmartHomeworkOrganizer.ViewModels.MainViewModel' threw an exception.
      Source=SmartHomeworkOrganizer
      TypeName=SmartHomeworkOrganizer.ViewModels.MainViewModel
      StackTrace:
           at SmartHomeworkOrganizer.ViewModels.MainViewModel..ctor()
           at SmartHomeworkOrganizer.App.OnStartup(Object sender, StartupEventArgs e) in C:\Users\Mohit\Documents\Visual Studio 2010\Projects\SmartHomeworkOrganizer\SmartHomeworkOrganizer\App.xaml.cs:line 21
           at System.Windows.Application.OnStartup(StartupEventArgs e)
           at System.Windows.Application.<.ctor>b__0(Object unused)
           at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
           at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
           at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
           at System.Windows.Threading.DispatcherOperation.InvokeImpl()
           at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Windows.Threading.DispatcherOperation.Invoke()
           at System.Windows.Threading.Dispatcher.ProcessQueue()
           at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
           at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
           at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
           at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
           at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
           at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
           at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
           at System.Windows.Threading.Dispatcher.Invoke(DispatcherPriority priority, Delegate method, Object arg)
           at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
           at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
           at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
           at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame)
           at System.Windows.Threading.Dispatcher.Run()
           at System.Windows.Application.RunDispatcher(Object ignore)
           at System.Windows.Application.RunInternal(Window window)
           at System.Windows.Application.Run(Window window)
           at System.Windows.Application.Run()
           at SmartHomeworkOrganizer.App.Main() in C:\Users\Mohit\Documents\Visual Studio 2010\Projects\SmartHomeworkOrganizer\SmartHomeworkOrganizer\obj\Debug\App.g.cs:line 0
           at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
           at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
           at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
           at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ThreadHelper.ThreadStart()
      InnerException: System.ArgumentException
           Message=Default value type does not match type of property 'Score'.
           Source=WindowsBase
           StackTrace:
                at System.Windows.DependencyProperty.ValidateDefaultValueCommon(Object defaultValue, Type propertyType, String propertyName, ValidateValueCallback validateValueCallback, Boolean checkThreadAffinity)
                at System.Windows.DependencyProperty.ValidateMetadataDefaultValue(PropertyMetadata defaultMetadata, Type propertyType, String propertyName, ValidateValueCallback validateValueCallback)
                at System.Windows.DependencyProperty.RegisterCommon(String name, Type propertyType, Type ownerType, PropertyMetadata defaultMetadata, ValidateValueCallback validateValueCallback)
                at System.Windows.DependencyProperty.Register(String name, Type propertyType, Type ownerType, PropertyMetadata typeMetadata, ValidateValueCallback validateValueCallback)
                at System.Windows.DependencyProperty.Register(String name, Type propertyType, Type ownerType, PropertyMetadata typeMetadata)
                at SmartHomeworkOrganizer.ViewModels.MainViewModel..cctor() in C:\Users\Mohit\Documents\Visual Studio 2010\Projects\SmartHomeworkOrganizer\SmartHomeworkOrganizer\ViewModels\MainViewModel.cs:line 72
           InnerException:

    This bit of code throws a System.ArgumentException before the TypeInitializationException. It says: "Default value type does not match type of property 'Score'":

    public static readonly DependencyProperty ScoreProperty =
        DependencyProperty.Register("Score", typeof(float), typeof(MainViewModel), new UIPropertyMetadata(0.0));

    Here is the .NET property:

    public float Score
    {
        get { return (float)GetValue(ScoreProperty); }
        set { SetValue(ScoreProperty, value); }
    }
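    The inner ArgumentException points at the metadata default: 0.0 is a double literal, while the property is registered as typeof(float), and WPF requires the metadata default value to be exactly the registered type. Changing the default to a float literal should clear both exceptions:

    // 0.0f is a float literal, so the default now matches typeof(float).
    public static readonly DependencyProperty ScoreProperty =
        DependencyProperty.Register(
            "Score",
            typeof(float),
            typeof(MainViewModel),
            new UIPropertyMetadata(0.0f));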

    Read the article

  • H12 timeout error on Heroku

    - by snowangel
    Can anyone shed some light on what's causing this timeout error on Heroku (at 2012-07-08T08:58:33+00:00)? The docs say that it's because of some long running process. I've set config.assets.initialize_on_precompile = false in config/application.rb.

    EmBP-2:bc Emma$ heroku restart
    Restarting processes... done
    EmBP-2:bc Emma$ heroku logs --tail
    2012-07-08T08:47:21+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.1" 200 311723 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:47:21+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.0" 200 1311615 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:51:32+00:00 heroku[slugc]: Slug compilation started
    2012-07-08T08:54:05+00:00 heroku[api]: Release v145 created by [email protected]
    2012-07-08T08:54:05+00:00 heroku[api]: Deploy 8814b2f by [email protected]
    2012-07-08T08:54:05+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:54:06+00:00 heroku[slugc]: Slug compilation finished
    2012-07-08T08:54:09+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 22429 -c ./config/unicorn.rb`
    2012-07-08T08:54:10+00:00 app[worker.1]: [Worker(host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1)] Exiting...
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.320616 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376765 #1] INFO -- : master complete
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376272 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011695 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011386 #1] INFO -- : listening on addr=0.0.0.0:22429 fd=3
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.017917 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.019309 #1] INFO -- : master process ready
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.018250 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.016768 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020863 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020617 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:54:12+00:00 app[worker.1]: SQL (2.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1')
    2012-07-08T08:54:12+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:54:13+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:54:14+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:54:20+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:54:20+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:54:47+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.467693 #8] INFO -- : worker=1 ready
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.823800 #5] INFO -- : worker=0 ready
    2012-07-08T08:54:48+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:54:48+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:54:48+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1
    2012-07-08T08:54:48+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:49+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Starting job worker
    2012-07-08T08:57:54+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:57:56+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047386 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047753 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047999 #1] INFO -- : master complete
    2012-07-08T08:57:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:58+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:57:58+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Exiting...
    2012-07-08T08:57:59+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 29766 -c ./config/unicorn.rb`
    2012-07-08T08:58:01+00:00 app[worker.1]: SQL (27.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1')
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070527 #1] INFO -- : listening on addr=0.0.0.0:29766 fd=3
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070782 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.074498 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.075702 #1] INFO -- : master process ready
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076732 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076957 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089022 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089299 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:58:02+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:58:10+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:58:11+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:58:33+00:00 heroku[router]: Error H12 (Request timeout) -> GET codicology.co.uk/ dyno=web.1 queue= wait= service=30000ms status=503 bytes=0
    2012-07-08T08:58:33+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.0" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:33+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.1" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:42+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:58:42+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1
    2012-07-08T08:58:42+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:58:42+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:58:43+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] Starting job worker
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:59:24+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:59:25+00:00 app[web.1]: I, [2012-07-08T08:59:25.555052 #5] INFO -- : worker=0 ready
    2012-07-08T08:59:25+00:00 app[web.1]:
    2012-07-08T08:59:25+00:00 app[web.1]:
    2012-07-08T08:59:25+00:00 app[web.1]: Started GET "/" for 82.69.50.215 at 2012-07-08 08:59:25 +0000
    2012-07-08T08:59:26+00:00 app[web.1]: Processing by PagesController#home as HTML
    2012-07-08T08:59:26+00:00 app[web.1]: I, [2012-07-08T08:59:26.043501 #8] INFO -- : worker=1 ready
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered pages/home.html.haml within layouts/application (5.7ms)
    2012-07-08T08:59:26+00:00 app[web.1]: (1.1ms) SELECT COUNT(*) FROM "delayed_jobs"
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_header.html.erb (4.2ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_footer.html.haml (1.4ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Completed 200 OK in 326ms (Views: 258.4ms | ActiveRecord: 65.2ms)

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

    - Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the bugs but introduced new issues, because of half-finished features or bugfixes that were in trunk.
    - Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would then continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch, creating a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files in the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers.

    However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. This means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is either to put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation?

    Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >