Search Results

Search found 1751 results on 71 pages for 'builds'.

  • Need some advice on MVC separation..

    - by Zenph
    I should note I am using Zend Framework. Although this shouldn't affect the concrete answer, it does mean there are several places I can implement my following method (action helper, controller, etc.).

    The issue is I have buildOptions() and parseOptions() methods which take $_GET/$_POST variables based on a 'tag' and build rules which are then used in a select query. An example would be ?modelSort=id&modelOrder=asc. The 'model' in the above relates to the particular model and is used as a 'tag' so that I can, for example, also have model2Sort and model2Order with no conflict between parameters.

    However, the trouble I am having now is: where should these methods go? They are generally dealing with request params. I have been reading a lot about fat model, thin controller. Should this be in an abstract model? My thinking was that if it were, I would do something like this (note: I know I wouldn't call it directly like this; the method would be used by child classes):

        $abstractModel->buildOptions($params);

    Where $params could be anything, like the request parameters $_GET or $_POST:

        $abstractModel->buildOptions($_GET);

    Now from what I can see, the model is not inherently dealing with request variables but rather with parameters passed to the method. Advice? Where does this method belong: model or controller? Specifically on Zend, should it be an action helper, a plugin, or within an abstract model? Appreciate any advice.
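
    A sketch of the separation being asked about (method and property names here are illustrative, not Zend API beyond getRequest()->getParams()): let the controller be the only layer that touches the request, and hand the model a plain array, so the model stays free of any HTTP context:

        // controller: the only place that reads the request
        public function listAction()
        {
            $options = $this->model->buildOptions($this->getRequest()->getParams(), 'model');
            // ... run the select query using $options
        }

        // abstract model: pure array in, query rules out
        public function buildOptions(array $params, $tag)
        {
            $options = array();
            if (isset($params[$tag . 'Sort'])) {
                $options['sort']  = $params[$tag . 'Sort'];
                $options['order'] = isset($params[$tag . 'Order']) ? $params[$tag . 'Order'] : 'asc';
            }
            return $options;
        }

    Passed a plain array, the model can be unit-tested without a request object at all, which is the usual argument for keeping it out of an action helper.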

  • Why null reference exception in SetMolePublicInstance?

    - by OldGrantonian
    I get a "null reference" exception in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, null); The program builds and compiles correctly. There are no complaints about any of the parameters to the method. Here's the specification of SetMolePublicInstance, from the object browser: SetMolePublicInstance(System.Delegate _stub, System.Type receiverType, object _receiver, string name, params System.Type[] parameterTypes) Here are the parameter values for "Locals": + stub {Method = {System.String <StaticMethodUnitTestWithDeq>b__0()}} System.Func<string> + receiverType {Name = "OrigValue" FullName = "OrigValueP.OrigValue"} System.Type {System.RuntimeType} objReceiver {OrigValueP.OrigValue} object {OrigValueP.OrigValue} name "TestString" string parameterTypes null object[] I know that TestString() takes no parameters and returns string, so as a starter to try to get things working, I specified "null" for the final parameter to SetMolePublicInstance. As already mentioned, this compiles OK. Here's the stack trace: Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.ExtendedReflection.Collections.Indexable.ConvertAllToArray[TInput,TOutput](TInput[] array, Converter`2 converter) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMole(Delegate _stub, Type receiverType, Object _receiver, String name, MoleBindingFlags flags, Type[] parameterTypes) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(Delegate _stub, Type receiverType, Object _receiver, String name, Type[] parameterTypes) at DeqP.Deq.Replace[T](Func`1 stub, Type receiverType, Object objReceiver, String name) in C:\0VisProjects\DecP_04\DecP\DeqC.cs:line 38 at DeqPTest.DecCTest.StaticMethodUnitTestWithDeq() in C:\0VisProjects\DecP_04\DecPTest\DeqCTest.cs:line 28 at Starter.Start.Main(String[] args) in C:\0VisProjects\DecP_04\Starter\Starter.cs:line 14 Press any key to continue . . . To avoid the null parameter, I changed the final "null" to "parameterTypes" as in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, parameterTypes); I then tried each of the following (before the line): int[] parameterTypes = null; // if this is null, I don't think the type will matter int[] parameterTypes = new int[0]; object[] parameterTypes = new object[0]; // this would allow for various parameter types All three attempts produce a red squiggly line under the entire line for SetMolePublicInstance Mouseover showed the following message: The best overloaded method match for 'Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(System.Delegate, System.Type, object, string, params System.Type[])' has some invalid arguments. I'm assuming that the first four arguments are OK, and that the problem is with the params array.

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read lots these past few weeks on IE6, seeing if it was really that bad to make sites look right in it. I just learned HTML and CSS this past year, so I've been spoiled to start with basically CSS3 and HTML5, and I can do some really cool stuff super fast. I'm no IE6 master and I don't have years of experience with IE. So I thought it'd take a little time to figure out all the IE6/7 hacks that have been discovered and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer 'Principles'", or cut out a lot of the neat things I can do using more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die.

    My question is: why can't businesses upgrade their browsers? When I work with businesses, they almost always resist the first time I ask, but 5 seconds later I'll show them what it looks like on my computer and talk about how great the latest stuff is (how much more secure later browsers are, all the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much more smoothly business processes run, etc.), and they get excited! And within a few seconds they're up and running with Chrome or something.

    So can businesses not upgrade for some reason? What are the reasons a business cannot upgrade? The main reason I can think of is that they have an old version of Windows. But a) wasn't there a legal case about this? And b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.

  • make target is never determined up to date

    - by Michael
    Cygwin make always reprocesses the $(chrome_jar_file) target after the first successful build, so I never get an up-to-date message and always see the commands for $(chrome_jar_file) executing. However, this happens only on Windows 7. On Windows XP, once it is built and intact, there are no more rebuilds. I narrowed the issue down to one prerequisite - $(jar_target_dir). Here is part of the code:

        # The location where the JAR file will be created.
        jar_target_dir := $(build_dir)/chrome

        # The main chrome JAR file.
        chrome_jar_file := $(jar_target_dir)/$(extension_name).jar

        # The root of the JAR sources.
        jar_source_root := chrome

        # The sources for the JAR file.
        jar_sources := bla #... some files, doesn't matter

        jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources))

        $(chrome_jar_file): $(jar_sources) $(jar_target_dir)
            @echo "Creating chrome JAR file."
            @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
            @echo "Creating chrome JAR file. Done!"

        $(jar_target_dir): $(build_dir)
            echo "Creating jar target dir..."
            if [ ! -x $(jar_target_dir) ]; \
            then \
                mkdir $(jar_target_dir); \
            fi

        $(build_dir):
            @if [ ! -x $(build_dir) ]; \
            then \
                mkdir $(build_dir); \
            fi

    So if I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.
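
    An educated guess at the cause, since the symptom matches the classic directory-as-prerequisite trap: a directory's timestamp changes whenever a file inside it is created or modified, so writing the JAR into $(jar_target_dir) leaves the directory newer than the JAR and re-triggers the rule on the next run (timestamp behavior can differ between the XP and Windows 7 setups). GNU make's order-only prerequisites - everything after the | - require the directory to exist without ever comparing its timestamp:

        # order-only prerequisite: the directory must exist, but its mtime is ignored
        $(chrome_jar_file): $(jar_sources) | $(jar_target_dir)
            @echo "Creating chrome JAR file."
            @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
            @echo "Creating chrome JAR file. Done!"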

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a url like mygame.com/123 with your friends and play multiple simultaneous games.

    The way I have it set up, a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (html) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested doing all of the temporary/quick-read information (storing moves and ID matchups) through memcache, and then building the game boards from that information.

    My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2GHz, RAM: 752MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit.

    Now that the backstory is done, my question is: would there be a bottleneck in each user polling the same file every 3 seconds to rebuild their game boards, while all users sit on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, reload2.php, reload3.php, etc.) and direct users to the appropriate file? Would this relieve the pressure?

    A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment, as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and doco for change requests, tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS, since we're already invested in the Microsoft stack?

  • Dealloc'd Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate, embedded within the fetch request, embedded in my fetched results controller. How does an object within an object get released without an explicit command to do so?

    Long version: Application structure: Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller. When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games fetched results controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            // build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context];
            [request setEntity:entity];

            // predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];

            // sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];

            // fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];

            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?
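
    One likely culprit is visible in the posted listing: predicateWithFormat: is a convenience constructor that returns an autoreleased object, so the explicit [predicate release] at the bottom over-releases it, and the predicate becomes a zombie once the autorelease pool drains. A sketch of the fix, following the usual alloc/copy/retain ownership rule:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
        [request setPredicate:predicate];
        // ...
        [sortDescriptor release];   // alloc/init'd: we own it, release is correct
        [sortDescriptors release];  // alloc/init'd: we own it, release is correct
        // [predicate release];     // REMOVE: predicateWithFormat: returned an autoreleased object
        [request release];
        [aFetchedResultsController release];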

  • How to determine if a target will be executed?

    - by Scott Langham
    Hi, I'm writing an msbuild file and have something like this:

        <ValidateDependsOn>$(ValidateDependsOn);ValidateA</ValidateDependsOn>
        <ValidateDependsOn>$(ValidateDependsOn);ValidateB</ValidateDependsOn>

        <Target Name="BuildA">
            <!-- stuff -->
        </Target>
        <Target Name="BuildB">
            <!-- stuff -->
        </Target>

        <Target Name="ValidateA">
            <Error />
            <!-- check properties and machine environment are suitable to run BuildA -->
        </Target>
        <Target Name="ValidateB">
            <Error />
            <!-- check properties and machine environment are suitable to run BuildB -->
        </Target>

    Builds can take a while. Originally we had the Build steps depending on the Validate steps, but sometimes a validate step wouldn't run until the middle of the build, and you would have wasted time getting there. So we moved the validate steps to the start by using the ValidateDependsOn pattern to insert the targets to run up front.

    The problem now is that sometimes during a build BuildB may not actually run, and this means I don't need, and in fact don't want, ValidateB to run. Is there any way I can selectively update ValidateDependsOn by conditionally knowing which targets will actually be run? I'm looking for something equivalent to:

        <ValidateDependsOn Condition="TargetWillRun(BuildB)">$(ValidateDependsOn);ValidateB</ValidateDependsOn>
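
    MSBuild has no TargetWillRun() test, so the nearest workable substitute I know of is to gate ValidateB and BuildB on the same up-front condition. In this sketch, $(RunBuildB) is a hypothetical property standing in for whatever currently decides that BuildB participates:

        <PropertyGroup>
          <ValidateDependsOn Condition="'$(RunBuildB)' == 'true'">$(ValidateDependsOn);ValidateB</ValidateDependsOn>
        </PropertyGroup>

        <Target Name="BuildB" Condition="'$(RunBuildB)' == 'true'">
          <!-- stuff -->
        </Target>

    Both decisions then come from one place, so the validation list can never disagree with what actually builds.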

  • How To Include Transitive Dependencies

    - by Brad Rhoads
    I have 2 gradle projects: an Android app and a RoboSpock test. My build.gradle for the Android app has

        dependencies {
            compile fileTree(dir: 'libs', include: '*.jar')
            compile ('com.actionbarsherlock:actionbarsherlock:4.4.0@aar') {
                exclude module: 'support-v4'
            }
        }

    and builds correctly by itself, e.g. assembleRelease works. I'm stuck getting the test to work. I get lots of errors such as:

        package com.google.zxing does not exist

    Those seem to indicate that the .jar files aren't being picked up. Here's my build.gradle for the test project:

        buildscript {
            repositories {
                mavenLocal()
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.9.+'
                classpath 'org.robospock:robospock-plugin:0.4.0'
            }
        }

        repositories {
            mavenLocal()
            mavenCentral()
        }

        apply plugin: 'groovy'

        dependencies {
            compile "org.codehaus.groovy:groovy-all:1.8.6"
            compile 'org.robospock:robospock:0.4.4'
        }

        dependencies {
            compile fileTree(dir: ':android:libs', include: '*.jar')
            compile (project(':estanteApp')) {
                transitive = true
            }
        }

        sourceSets.test.java.srcDirs = ['../android/src/', '../android/build/source/r/debug']

        test {
            testLogging {
                lifecycle {
                    exceptionFormat "full"
                }
            }
        }

        project.ext {
            robospock = ":estanteApp" // project to test
        }

        apply plugin: 'robospock'

    As that shows, I've tried adding transitive = true and including the .jar files explicitly. But no matter what I try, I end up with the package does not exist error.
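
    One detail worth checking in the posted script (an observation, not a confirmed fix): fileTree() takes a filesystem path, so dir: ':android:libs' is read as a literal directory named ":android:libs" rather than a Gradle project path, and silently matches no jars. Pointing it at the directory on disk would look like this (the relative path is an assumption based on the srcDirs entries above):

        dependencies {
            // plain relative path, resolved against this project's directory
            compile fileTree(dir: '../android/libs', include: '*.jar')
        }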

  • "This program might not have installed correctly" message in Windows 7 RC

    - by kliu
    I have an installer that works perfectly under NT 5.x, Vista, and Windows 7. It contains the proper manifest for UAC on NT 6.x. But starting with Windows 7 RC, every time the setup program closes, Windows produces an erroneous "This program might not have installed correctly" message, even though the program did install correctly with no problems whatsoever. I never got these spurious messages in Vista or in the Windows 7 beta.

    I sent a bug report to Microsoft, but have not heard back. I thought that this might just be a glitch in the Windows 7 RC, but the problem is still there on a fresh install of one of the very recent RTM-escrow builds that leaked. Microsoft has no documentation whatsoever about this - not even a hint at what might possibly be triggering it. Even more frustrating is that I get this "This program might not have installed correctly" message even if I cancel the install on the very first are-you-sure-you-want-to-proceed screen, before any of the installation code (creating a temp dir, extracting files, copying, registry, etc.) is ever run. Has anyone figured this one out?
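
    The message comes from the Program Compatibility Assistant, which heuristically flags anything that looks like an installer and exits without registering an uninstall entry - consistent with it firing even on a cancelled install. The commonly cited remedy (worth trying, though not guaranteed for every installer) is to declare Windows 7 support in the executable's manifest, which tells PCA to leave the program alone:

        <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
          <application>
            <!-- Windows 7 supportedOS GUID -->
            <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
          </application>
        </compatibility>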

  • Basic C# problem

    - by Juan
    Determine whether all the digits of the sum of a number n and its reverse are odd. For example: 36 + 63 = 99, and 409 + 904 = 1313. Visual Studio builds my code, but there is still something wrong with it (it doesn't return an answer). Can you please help me here?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    long num = Convert.ToInt64(Console.Read());
                    long vol = voltea(num);
                    long sum = num + vol;
                    bool simp = simpares(sum);
                    if (simp == true)
                        Console.Write("Si");
                    else
                        Console.Write("No");
                }

                static private bool simpares(long x)
                {
                    bool s = false;
                    long[] arreglo = new long[1000];
                    while (x > 0)
                    {
                        arreglo[x % 10]++;
                        x /= 10;
                    }
                    for (long i = 0; i <= arreglo.Length; i++)
                    {
                        if (arreglo[i] % 2 != 0)
                            s = true;
                    }
                    return s;
                }

                static private long voltea(long x)
                {
                    long v = 0;
                    while (v > 0)
                    {
                        v = 10 * v + x % 10;
                        x /= 10;
                    }
                    return v;
                }
            }
        }
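
    Three bugs are visible in the listing as posted: voltea loops on while (v > 0), but v starts at 0, so it never iterates and always returns 0; simpares runs its for loop to i <= arreglo.Length, which reads one element past the end and throws IndexOutOfRangeException (hence no answer); and Console.Read() returns the code of a single character, not the whole number. A corrected sketch:

        // in Main: read the whole line instead of a single character code
        long num = Convert.ToInt64(Console.ReadLine());

        // corrected voltea: test x, not v (v starts at 0, so the old loop never ran)
        static private long voltea(long x)
        {
            long v = 0;
            while (x > 0)
            {
                v = 10 * v + x % 10;
                x /= 10;
            }
            return v;
        }

        // and in simpares, keep the index in bounds:
        // for (long i = 0; i < arreglo.Length; i++)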

  • Confused about std::runtime_error vs. std::logic_error

    - by David Gladfelter
    I recently saw that the boost program_options library throws a logic_error if the command-line input was un-parsable. That challenged my assumptions about logic_error vs. runtime_error.

    I assumed that logic errors (logic_error and its derived classes) were problems that resulted from internal failures to adhere to program invariants, often in the form of illegal arguments to internal APIs. In that sense they are largely equivalent to ASSERTs, but meant to be used in released code (unlike ASSERTs, which are not usually compiled into released code). They are useful in situations where it is infeasible to integrate separate software components in debug/test builds, or where the consequences of a failure are such that it is important to give runtime feedback about the invalid invariant condition to the user.

    Similarly, I thought that runtime_errors resulted exclusively from runtime conditions outside of the control of the programmer: I/O errors, invalid user input, etc.

    However, program_options is obviously heavily (primarily?) used as a means of parsing end-user input, so under my mental model it certainly should throw a runtime_error in the case of bad input. Where am I going wrong? Do you agree with the boost model of exception typing?
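
    For readers wanting the distinction in code, a small illustrative example of the convention described above (not taken from boost):

        #include <stdexcept>
        #include <string>

        // logic_error: the caller broke a documented precondition (programmer error)
        char at(const std::string& s, std::size_t i) {
            if (i >= s.size())
                throw std::logic_error("at(): index out of range");
            return s[i];
        }

        // runtime_error: bad data arriving from outside the program (user/environment)
        int parse_port(const std::string& text) {
            std::size_t pos = 0;
            int port = std::stoi(text, &pos); // C++11; itself throws std::invalid_argument
            if (pos != text.size() || port < 0 || port > 65535)
                throw std::runtime_error("not a valid port number: " + text);
            return port;
        }

    The wrinkle the poster ran into is that the standard itself blurs the line: std::stoi signals bad input with std::invalid_argument, which derives from logic_error - the same choice boost made.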

  • MSBuild - How to build a .NET solution file (in an XML task script) from pre-written command line commands

    - by Devtron
    Hello. I have been studying MSBuild, as I have the need to automate my development shop's builds. I was able to easily write a .BAT file that invokes the VS command prompt and passes my MSBuild commands to it. This works rather well and is kinda nifty. Here is the contents of my .BAT build file:

        call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\vcvars64.bat"
        cd C:\Sandbox\Solution
        msbuild MyTopSecretApplication.sln /p:OutputPath=c:\TESTMSBUILDOUTPUT /p:Configuration=Release,Platform=x86
        pause

    This works well, but I now have the need to use the MSBuild task for TeamCity CI. I have tried to write a few MSBuild scripts, but I cannot get them to work the same. What is the equivalent build script to the command I am using in my .BAT file? Any ideas? I have tried using something like this, but with no success (I know this is wrong):

        <?xml version="1.0"?>
        <project name="Hello Build World" default="run" basedir=".">
            <target name="build">
                <mkdir dir="mybin" />
                <echo>Made mybin directory!</echo>
                <csc target="exe" output="c:\TESTMSBUILDOUTPUT">
                    <sources>
                        <include name="MyTopSecretApplication.sln"/>
                    </sources>
                </csc>
                <echo>MyTopSecretApplication.exe was built!</echo>
            </target>
            <target name="clean">
                <delete dir="mybin" failonerror="false"/>
            </target>
            <target name="run" depends="build">
                <exec program="mybin\MyTopSecretApplication.exe"/>
            </target>

    What I simply need is an MSBuild XML build script that compiles a single solution for Release mode to a specified output directory. Any help?
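
    The attempted script is NAnt-style syntax, which MSBuild will not parse. A minimal MSBuild project that reproduces the .BAT file's command (paths and property values copied from it) looks roughly like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Build">
            <!-- same solution and properties the .BAT passed on the command line -->
            <MSBuild Projects="C:\Sandbox\Solution\MyTopSecretApplication.sln"
                     Properties="Configuration=Release;Platform=x86;OutputPath=c:\TESTMSBUILDOUTPUT" />
          </Target>
        </Project>

    Saved as, say, build.proj, it runs with msbuild build.proj, and TeamCity's MSBuild runner can point at it directly.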

  • Top n items in a List ( including duplicates )

    - by Krishnan
    Trying to find an efficient way to obtain the top N items in a very large list, possibly containing duplicates. I first tried sorting & slicing, which works. But this seems unnecessary: you shouldn't need to sort a very large list if you just want the top 20 members. So I wrote a recursive routine which builds the top-n list. This also works, but is very much slower than the non-recursive one!

    Question: why is my second routine (elite2) so much slower than elite, and how do I make it faster? My code is attached below. Thanks.

        import scala.collection.SeqView
        import scala.math.min

        object X {
          def elite(s: SeqView[Int, List[Int]], k: Int): List[Int] = {
            s.sorted.reverse.force.slice(0, min(k, s.size))
          }

          def elite2(s: SeqView[Int, List[Int]], k: Int, s2: List[Int] = Nil): List[Int] = {
            if (k == 0 || s.size == 0) s2.reverse
            else {
              val m = s.max
              val parts = s.force.partition(_ == m)
              val whole = if (parts._1.size > 1) parts._1.tail ::: parts._2 else parts._2
              elite2(whole.view, k - 1, m :: s2)
            }
          }

          def main(args: Array[String]) = {
            val N = 1000000 / 3
            val x = List(N to 1 by -1).flatten.map(x => List(x, x, x)).flatten.view
            println(elite2(x, 20))
            println(elite(x, 20))
          }
        }
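
    On the why: each elite2 pass calls s.max, force, and partition, which are all full O(n) traversals, so selecting k items costs O(n·k) walks over an ever-rebuilt list, while elite pays for a single O(n log n) sort. A one-pass alternative (a sketch, not a drop-in for the SeqView signature) keeps a bounded min-heap of the k best seen so far, costing O(n log k):

        import scala.collection.mutable

        def eliteHeap(xs: TraversableOnce[Int], k: Int): List[Int] = {
          // reversed ordering makes head the SMALLEST of the current top-k
          val heap = new mutable.PriorityQueue[Int]()(Ordering.Int.reverse)
          xs.foreach { x =>
            if (heap.size < k) heap.enqueue(x)
            else if (x > heap.head) { heap.dequeue(); heap.enqueue(x) } // evict the smallest
          }
          heap.toList.sortWith(_ > _) // descending, like elite
        }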

  • jQuery.getJSON: how to avoid requesting the json-file on every refresh? (caching)

    - by Mr. Bombastic
    In this example you can see a generated HTML list. On every refresh the script requests the data file (ajax/test.json) and builds the list again. The generated file "ajax/test.json" is cached statically. But how can I avoid requesting this file on every refresh?

        // source: jquery.com
        $.getJSON('ajax/test.json', function(data) {
            var items = [];
            $.each(data, function(key, val) {
                items.push('<li id="' + key + '">' + val + '</li>');
            });
            $('<ul/>', {
                'class': 'my-new-list',
                html: items.join('')
            }).appendTo('body');
        });

    This doesn't work:

        list_data = $.cookie("list_data");
        if (list_data == undefined || list_data == "") {
            $.getJSON('ajax/test.json', function(data) {
                list_data = data;
            });
        }
        var items = [];
        $.each(data, function(key, val) {
            items.push('<li id="' + key + '">' + val + '</li>');
        });
        $('<ul/>', {
            'class': 'my-new-list',
            html: items.join('')
        }).appendTo('body');

    Thanks in advance!
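
    Part of the problem in the second snippet is ordering: $.getJSON is asynchronous, so the list-building code runs before the response arrives, and data is not even in scope outside the callback. A sketch using localStorage instead of a cookie (the 'list_data' key is arbitrary; requires a browser with localStorage and JSON support):

        function renderList(data) {
            var items = [];
            $.each(data, function(key, val) {
                items.push('<li id="' + key + '">' + val + '</li>');
            });
            $('<ul/>', { 'class': 'my-new-list', html: items.join('') }).appendTo('body');
        }

        var cached = localStorage.getItem('list_data');
        if (cached) {
            renderList(JSON.parse(cached));       // reuse the stored copy, no request
        } else {
            $.getJSON('ajax/test.json', function(data) {
                localStorage.setItem('list_data', JSON.stringify(data)); // fetch once, keep it
                renderList(data);                 // everything using data stays in the callback
            });
        }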

  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is:

        using namespace std;

        vector<File*> gInputFileList;          // Can contain duplicates; File has member sFilename
        map<string, File*> gProcessedFileList; // Using map to avoid linear search costs

        void processFile(File* pFile)
        {
            File* pProcessedFile = gProcessedFileList[pFile->sFilename];
            if (pProcessedFile != NULL)
                return; // Already processed

            foo(pFile); // foo() is the action to do for each file
            gProcessedFileList[pFile->sFilename] = pFile;
        }

        void main()
        {
            size_t n = gInputFileList.size();
            // Using array syntax (iterator syntax also gives identical performance)
            for (size_t i = 0; i < n; i++) {
                processFile(gInputFileList[i]);
            }
        }

    The code works correctly, but... my problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute when used separately. Should I be using something like vector::reserve for the map?
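
    Two suspects worth ruling out, offered as educated guesses rather than a confirmed diagnosis. First, on VS2008 checked iterators (_SECURE_SCL=1) are enabled by default even in Release builds and can slow STL-heavy loops dramatically; defining _SECURE_SCL=0 project-wide for a Release timing run tests that theory. Second, gProcessedFileList[pFile->sFilename] default-inserts a NULL entry on every miss before the check; find() performs the same lookup without the side effect:

        void processFile(File* pFile)
        {
            // find() is a pure lookup; operator[] inserts a NULL entry on every miss
            map<string, File*>::iterator it = gProcessedFileList.find(pFile->sFilename);
            if (it != gProcessedFileList.end())
                return; // already processed

            foo(pFile);
            gProcessedFileList.insert(make_pair(pFile->sFilename, pFile));
        }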

  • Store Business Rules in XML Document, Validate afterwards in Java, how?

    - by JavaPete
    Example XML rules document:

        <user>
            <username>
                <not-null/>
                <capitals value="false"/>
                <max-length value="15"/>
            </username>
            <email>
                <not-null/>
                <isEmail/>
                <max-length value="40"/>
            </email>
        </user>

    How do I implement this? I'm starting from scratch. What I currently have is a User class and a UserController which saves the User object in the DB (through a Service layer and DAO layer), basic Spring MVC. However, I can't use Spring MVC Validation in our model classes; I have to use an XML document so an admin can change the rules. I think I need a pattern which dynamically builds an algorithm based on what is provided by the XML rules document, but I can't seem to think of anything other than a massive amount of if-statements. I also have nothing for the parsing yet, and I'm not sure how I'm going to (de)couple it from the actual implementation of the validation process.
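
    One registry-style sketch that avoids the if-statement blob (every class and method name below is hypothetical): map each rule element's tag name to a small validator object, then walk the parsed XML and look each rule up by name. New rules become new map entries, not new branches:

        import java.util.HashMap;
        import java.util.Map;

        interface Rule {
            // value = the field being checked; param = the rule's "value" attribute (may be null)
            boolean isValid(String value, String param);
        }

        class RuleRegistry {
            private final Map<String, Rule> rules = new HashMap<>();

            RuleRegistry() {
                rules.put("not-null",   (v, p) -> v != null && !v.isEmpty());
                rules.put("max-length", (v, p) -> v == null || v.length() <= Integer.parseInt(p));
                rules.put("capitals",   (v, p) -> Boolean.parseBoolean(p) || v == null || v.equals(v.toLowerCase()));
                rules.put("isEmail",    (v, p) -> v == null || v.matches("[^@\\s]+@[^@\\s]+"));
            }

            boolean check(String ruleName, String value, String param) {
                Rule rule = rules.get(ruleName); // unknown rule names could throw instead
                return rule == null || rule.isValid(value, param);
            }
        }

    The XML walker then reduces to: for each field element, for each child rule element, call check(childName, fieldValue, valueAttribute).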

  • SCons and dependencies for python function generating source

    - by elmo
    I have an input file data, a python function parse, and a template. What I am trying to do is use the parse function to get a dictionary out of data and use that to replace fields in template. Now, to make this a bit more generic (I perform the same action in a few places), I have defined a custom function to do so. Below is the definition of the custom builder; values is a dictionary with { 'name': (data_file, parse_function) } (you don't really need to read through this, I simply put it here for completeness).

        def TOOL_ADD_FILL_TEMPLATE(env):
            def FillTemplate(env, output, template, values):
                out = output[0]
                subs = {}
                for name, (node, process) in values.iteritems():
                    def Process(env, target, source):
                        with open(env.GetBuildPath(target[0]), 'w') as out:
                            out.write(process(source[0]))
                    builder = env.Builder(action=Process)
                    subs[name] = builder(env, env.GetBuildPath(output[0]) + '_' + name + '_processed.cpp', node)[0]

                def Fill(env, target, source):
                    values = dict((name, n.get_contents()) for name, n in subs.iteritems())
                    contents = template[0].get_contents().format(**values)
                    open(env.GetBuildPath(target[0]), 'w').write(contents)

                builder = env.Builder(action=Fill)
                builder(env, output[0], template + subs.values())
                return output

            env.Append(BUILDERS={'FillTemplate': FillTemplate})

    It works fine when it comes to checking whether data or template changed: if they did, it rebuilds the output. It even works if I edit the process function directly. However, if my process function looks like this:

        def process(node):
            return subprocess(node)

    and I edit subprocess, the change goes unnoticed. Is there any way to get correct builds without making the process functions always be invoked?
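
    A likely explanation, going by how SCons fingerprints Python actions: the action's signature is computed from the Process function's own code, so a change inside a function it merely calls, like subprocess, never shows up in the signature. One workaround is to feed the helper's source text into the dependency graph as a Value node, so editing the helper changes a source's content (a sketch; it assumes the helpers live where inspect can read their source, and any further helpers they call can be added the same way):

        import inspect

        # inside the for loop, alongside the existing builder call:
        process_src = env.Value(inspect.getsource(process))  # content changes when the helper is edited
        subs[name] = builder(env,
                             env.GetBuildPath(output[0]) + '_' + name + '_processed.cpp',
                             [node, process_src])[0]

    The Process action keeps reading source[0], so the extra Value node only participates in the up-to-date check.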

  • What to set the scalar type to contain a byte []. Entity in MVC2

    - by Brad8118
    I'm trying out EF 4.0 and using the model-first approach. I'd like to store images in the database, and I'm not sure of the best type for the scalar in the entity. I currently have the image scalar type set up as Binary. From what I have been reading, the best way to store the image in the db is a byte[], so I'm assuming that Binary is the way to go. If there is a better way, I'd switch. In my controller I have:

        // file from client to store in the db
        HttpPostedFileBase file = Request.Files[inputTagName];
        if (file.ContentLength > 0)
        {
            keyToAdd.Image = new byte[file.ContentLength];
            file.InputStream.Write(keyToAdd.Image, 0, file.ContentLength);
        }

    This builds fine, but when I run it I get an exception writing the stream to keyToAdd.Image. The exception is something like: Method does not exist. Any ideas? Note that when using the EF 4.0 model-first approach I only have int16, int32, double, string, decimal, binary, byte, DateTime, Double, Single, and SByte as available types. Thanks
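
    The crash line has the stream call backwards: Stream.Write pushes bytes from the buffer into the stream, and the upload's InputStream isn't writable; Read is what fills the buffer from the uploaded file. A minimal corrected sketch (Binary on the EF side mapping to byte[] is indeed the usual pairing):

        HttpPostedFileBase file = Request.Files[inputTagName];
        if (file.ContentLength > 0)
        {
            keyToAdd.Image = new byte[file.ContentLength];
            // Read copies the uploaded bytes INTO the buffer;
            // Write tried to push the empty buffer into a read-only stream
            file.InputStream.Read(keyToAdd.Image, 0, file.ContentLength);
        }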

  • Ajax request ERROR on IE

    - by tinti
    Hello all! I have a small problem in IE (actually in Google Chrome too). I have this js code:

        function createDoc(url) {
            var xhttp = ajaxRequest();
            var currentLocationBase = window.location.href;
            currentLocationBase = currentLocationBase.substr(0, currentLocationBase.lastIndexOf("/") + 1);
            var u = currentLocationBase + url;
            xhttp.open("GET", u, false);
            xhttp.send(null);
            var xml = xhttp.responseXML;
            return xml;
        }

        /**
         * Builds an AJAX request handler.
         *
         * @return The handler.
         */
        function ajaxRequest() {
            var xhttp = null;
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
            } else if (window.ActiveXObject) { // Internet Explorer 5/6
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            return xhttp;
        }

    In Firefox this code works great, but not in IE and Google Chrome. The error seems to be raised at the line xhttp.open("GET", u, false); Can anyone help me understand what I'm doing wrong? Thanks
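
    Two common causes fit this symptom, though neither is certain from the snippet alone: Chrome refuses XMLHttpRequest for pages opened from file:// (the synchronous open/send fails), and responseXML is only populated when the server sends an XML Content-Type such as text/xml. A fallback that parses responseText whenever responseXML comes back empty:

        var xml = xhttp.responseXML;
        if (!xml || !xml.documentElement) {
            if (window.DOMParser) {
                // standards browsers: parse the raw text ourselves
                xml = new DOMParser().parseFromString(xhttp.responseText, "text/xml");
            } else {
                // older IE: MSXML document
                xml = new ActiveXObject("Microsoft.XMLDOM");
                xml.async = false;
                xml.loadXML(xhttp.responseText);
            }
        }
        return xml;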

  • Using Visual Studio to make non aspx code-behind pages

    - by rizzle
    I want to build my own "code behind"-like pages so that I can have HTML in an HTML file and code in a .cs file, but still have IntelliSense for the tokens in the HTML file. (I know that's what the .NET Page class does, but I want something much lighter.) E.g., in the .html file:

        <%@ Directive classname="HTMLSnippet" %>
        <html>
        <body>
        <div>[%message%]
        </body>
        </html>

    and in a .cs file:

        public class MyClass : HTMLSnippet
        {
            public class MyClass ()
            {
                snippet.message = "message goes here";
            }
        }

    So my question is: how do I make the HTMLSnippet class so that its members are automatically created and, specifically, show up in IntelliSense as I add tokens to the .html file? I know that .NET currently does it by creating the designer.cs file, basically building a class with all the elements from the page as it goes, and that would work fine, but how can I get Visual Studio to generate that before compiling so that it shows up in IntelliSense? Thanks!

    Clarification: I'm not using this as a handler yet. I want to use this to have HTML snippets with tokens be usable in code as an object with properties, almost like a custom control. I think what I have to do is create a VS add-in that waits for me to type tokens into an .html file and then automatically creates a .cs file with members for each token.

  • How to make cakePHP retrieve the data represented by a foreign key?

    - by XL
    Greetings cake experts, I have a question that I think would really help a lot of people getting started with cakePHP. I have a simple database with multiple tables. I can't figure out how to make cakePHP display the values associated with a foreign key in an index view, or create a view where the fields of my choice (the ones that make sense to users, like location name, not location_id) can be updated or viewed on a single page.

    I have created an example at http://lovecats.cakeapp.com that illustrates the question. If you look at the page and click "list cats", you will notice that it shows the location_id field from the locations table. You will also notice that when you click "add cats", you must choose a location_id from the locations table. This is the automagic way that cakePHP builds the app. I want this to be the field location_name. The database is set up so that the table cats has a foreign key called location_id that has a relationship to a table called locations.

    This is my problem: I want these pages to display the location_name instead of the location_id. If you want to log in to the application, you can go to http://cakeapp.com/sqldesigners/sql/lovecats with the password 'password' to look at the db relationships, etc. How do I have a page that shows the fields that I want? And is it possible to create a page that updates fields from all of the tables at once? This is the slice of cake that I have been trying to figure out, and this would REALLY get me over a hump. You can download the app and sql from the above url.
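
    In CakePHP this is what Model::$displayField is for: scaffolded and baked views label a belongsTo association with the related model's display field, which defaults to a column named "name" or "title" - absent here, hence the raw IDs. A sketch (CakePHP 1.x style, assuming a Location model with Cat belongsTo Location):

        // app/models/location.php
        class Location extends AppModel {
            var $name = 'Location';
            // show this column wherever a Location is referenced
            var $displayField = 'location_name';
        }

    With that in place, the baked "list cats" and "add cats" views render location_name in the table and in the select box while still saving location_id underneath.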

  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy:

        server {
            server_name svn.ourdomain.tld;
            location / {
                proxy_pass http://localhost:8080;
            }
        }

    Apache is set up as follows:

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthType Basic
            AuthName "Authentication Required"
            AuthUserFile /var/svn/.auth
            Require valid-user
        </Location>

    ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself; however, whenever we do so, it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried:

        svn co file:///path/to/repo
        svn co svn://localhost/repo
        svn co svn://svn.ourdomain.tld/repo
        svn co svn+ssh://localhost/repo
        svn co svn+ssh://svn.ourdomain.tld/repo
        svn co http://localhost/repo
        svn co http://svn.ourdomain.tld/repo

    We also tried bypassing nginx, and get the same error:

        svn co http://localhost:8080/repo
        svn co http://svn.ourdomain.tld:8080/repo

    Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our Hudson CI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing and then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting apache and subversion (svnserve) has no effect on this, or on the original error.

    Version information:

        OS: 64-bit CentOS 4.2, 2.6.27 kernel
        svn client: 1.4.2 (same for both server and remote clients)
        svn server: 1.4.2
        httpd: 2.2.3

    UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error:

        [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo}
        [remote-client]# svn checkout http://svn.ourdomain.tld/repo
        [remote-client]# svn add file; svn ci -m ''
        [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject
        [remote-client]# svn update    (fails with 301 error)

    I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts, it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?

  • New Computer Build Questions

    - by MJ
    I'm in the process of gathering parts and specs for a new machine. I wear many hats, so the machine needs to do a lot. I need at least 2-monitor support, if not three. I also play many online MMOs (WoW, Aion, Warhammer, etc.), along with some freelance programming projects.

    I already have a case which is very large, so it will fit anything. I have 2 other SATA HDs; they are more for storage and basic programs. I feel that the best improvement could be made with a solid state HD, true or not? I'm more of a software/programming guy, so ANY input at all on improving this system build would be appreciated. I have a few questions with this list. AMD or Intel? I don't know enough about either to choose what would best fit me. Thanks!

    EDIT: Thanks for the input everyone! Here are some answers: I do a lot of programming and gaming, so I need things for both. The newer video card covers the gaming aspect, as well as allowing me to have many monitors (hopefully upgrading to dual 30" or more). I don't need any additional HDs at this time. I have a SATA 160GB and a 120GB from my previous computer, and a NAS system with over 2TB of storage on the home network. I just want a fast HD for OS/programs/games. As for the memory, I have used G.SKILL before in 2 system builds. It's done excellently for me in them. Very stable.

    EDIT 2: Made some additional changes. Lowered the power supply down to 750W, which saves me more $$. Also changed the SSD to 2 WD 640GB HDs. Thinking of doing a CPU upgrade to the 3.4GHz AMD Phenom II X4 965 Black Edition Deneb.

    System Specs - Budget: $1500

        CPU: AMD Phenom II X4 955 Black Edition Deneb 3.2GHz
        MB: GIGABYTE GA-MA790GPT-UD3H AM3 AMD 790GX HDMI ATX
        Memory: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 1066)
        Video: DIAMOND 5870PE51G Radeon HD 5870 (Cypress XT) 1GB 256-bit GD
        Power Supply: XCLIO GREATPOWER 1000W ATX12V SLI Ready CrossFire Ready
        HD: Intel X25-M Mainstream SSDSA2MH080G2C1 2.5" 80GB SATA II MLC

    Changes:

        Power Supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V
        HD: 2x Western Digital Caviar Blue WD6400AAKS 640GB
        CPU: AMD Phenom II X4 965 Black Edition Deneb 3.4GHz
