Search Results

Search found 13727 results on 550 pages for 'target platform'.

Page 464/550

  • Error copying file from app bundle

    - by Michael Chen
    I used the Firefox add-on SQLite Manager and created a database, which saved to my desktop as "DB.sqlite". I copied the file into the supporting files for the project, but when I run the app I immediately get the error:

        Assertion failure in -[AppDelegate copyDatabaseIfNeeded], /Users/Mac/Desktop/Note/Note/AppDelegate.m:32
        2014-08-19 23:38:02.830 Note[28309:60b] Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Failed to create writable database file with message 'The operation couldn’t be completed. (Cocoa error 4.)'.'
        First throw call stack: ...

    Here is the app delegate code where the error takes place:

        -(void)copyDatabaseIfNeeded {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSError *error;
            NSString *dbPath = [self getDBPath];
            BOOL success = [fileManager fileExistsAtPath:dbPath];
            if (!success) {
                NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DB.sqlite"];
                success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
                if (!success)
                    NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
            }
        }

    I am very new to SQLite, so maybe I didn't create the database correctly in the Firefox SQLite Manager, or maybe I didn't "properly" copy the .sqlite file in? (I did check the target membership of the .sqlite file and it correctly has my project selected. Also, the .sqlite file names all match up perfectly.)


  • Changing default compiler in Linux, using SCons

    - by ereOn
    On my Linux platform, I have several versions of gcc. Under /usr/bin I have gcc34, gcc44 and gcc. Here is some output:

        $ gcc --version
        gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48)
        $ gcc44 --version
        gcc44 (GCC) 4.4.0 20090514 (Red Hat 4.4.0-6)

    I need to use the 4.4 version of gcc, but the default seems to be the 4.1 one. Is there a way to replace /usr/bin/gcc and make gcc44 the default compiler without using a symlink to /usr/bin/gcc44? The reason I can't use a symlink is that my code has to be shipped in an RPM package built with mock. mock creates a minimal Linux installation from scratch and installs just the specified dependencies before compiling my code in it; I cannot customize this "minimal installation". Ideally, the perfect solution would be to install an official RPM package that replaces gcc with gcc44 as the default compiler. Is there such a package? Is this even possible/good?

    Additional information: I have to use SCons (a make alternative) and it doesn't let me specify the binary to use for gcc. I will also accept any answer that tells me how to specify the gcc binary in my SConstruct file.


  • Trying to avoid needing two separate solutions for an x86 and an x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It uses Oracle's ODBC drivers, so I have a reference to Oracle.DataAccess.DLL, and this DLL is different depending on whether the system is x64 or x86. Currently I have two separate solutions and I am maintaining the code in both, which is atrocious. What is the proper solution here? I have my platform set to "Any CPU", and my understanding was that VS compiles to an intermediate language, so it should not matter whether I reference the x86 or the x64 version. Yet if I use the x64 DLL I receive the error:

        Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral,
        PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load
        a program with an incorrect format.

    I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
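    One pattern that keeps a single Any CPU solution (a sketch, not a tested Oracle recipe: the x86/x64 subfolder layout and the class name are assumptions) is to ship both copies of Oracle.DataAccess.dll and resolve the right one at runtime, before any type from it is touched:

        using System;
        using System.IO;
        using System.Reflection;

        static class OracleAssemblyLoader
        {
            // Call once at startup, before any Oracle.DataAccess type is used.
            public static void Register()
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    if (!args.Name.StartsWith("Oracle.DataAccess", StringComparison.Ordinal))
                        return null;   // not ours to resolve

                    // Pick the folder that matches the bitness of the running process.
                    string arch = IntPtr.Size == 8 ? "x64" : "x86";
                    string path = Path.Combine(
                        AppDomain.CurrentDomain.BaseDirectory, arch, "Oracle.DataAccess.dll");
                    return File.Exists(path) ? Assembly.LoadFrom(path) : null;
                };
            }
        }

    The compile-time reference can then point at either copy with Copy Local turned off; only the resolver decides which file actually gets loaded into the process.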


  • N-gram split function for string similarity comparison

    - by Michael
    As part of an exercise to better understand F#, which I am currently learning, I wrote a function to split a given string into n-grams.

    1) I would like feedback on my function: can this be written more simply or more efficiently?
    2) My overall goal is to write a function that returns string similarity (on a 0.0 .. 1.0 scale) based on n-gram similarity. Does this approach work well for short string comparisons, and can the method reliably be used to compare large strings (like articles, for example)?
    3) I am aware that n-gram comparisons ignore the context of the two strings. What method would you suggest to accomplish my goal?

        //s:string - target string to split into n-grams
        //n:int - n-gram size to split string into
        let ngram_split (s:string, n:int) =
            let ngram_count = s.Length - (s.Length % n)
            let ngram_list = List.init ngram_count (fun i ->
                if i + n >= s.Length then
                    s.Substring(i, s.Length - i) + String.init ((i + n) - s.Length) (fun i -> "#")
                else
                    s.Substring(i, n))
            let ngram_array_unique = ngram_list |> Seq.ofList |> Seq.distinct |> Array.ofSeq
            //produce tuples of ngrams (ngram string, how many occurrences in original string)
            Seq.init ngram_array_unique.Length (fun i ->
                (ngram_array_unique.[i],
                 ngram_list |> List.filter (fun item -> item = ngram_array_unique.[i]) |> List.length))
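    For question 2, a common way to turn n-gram overlap into a 0.0 .. 1.0 score is the Dice coefficient, 2·|A∩B| / (|A| + |B|), taken over the two n-gram multisets. A rough sketch in C# rather than the asker's F# (the unpadded splitter and all names are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class NGramSimilarity
        {
            // Overlapping character n-grams of s; a minimal version with no '#' padding.
            static List<string> NGrams(string s, int n)
            {
                return Enumerable.Range(0, Math.Max(0, s.Length - n + 1))
                                 .Select(i => s.Substring(i, n))
                                 .ToList();
            }

            // Dice coefficient over the n-gram multisets, in [0.0, 1.0].
            public static double Dice(string a, string b, int n = 2)
            {
                var ga = NGrams(a, n);
                var gb = NGrams(b, n);
                if (ga.Count == 0 || gb.Count == 0)
                    return a == b ? 1.0 : 0.0;

                var counts = gb.GroupBy(g => g).ToDictionary(g => g.Key, g => g.Count());
                int overlap = 0;
                foreach (var g in ga)
                {
                    int c;
                    if (counts.TryGetValue(g, out c) && c > 0)
                    {
                        counts[g] = c - 1;   // consume one matching n-gram from b
                        overlap++;
                    }
                }
                return 2.0 * overlap / (ga.Count + gb.Count);
            }
        }

    Bigrams or trigrams behave well for short strings such as names and titles; for article-length texts, word-level shingles are usually compared instead, since character statistics wash out on long inputs.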


  • how to elegantly duplicate a graph (neural network)

    - by macias
    I have a graph (network) which consists of layers, which contain nodes (neurons). I would like to write a procedure to duplicate the entire graph in the most elegant way possible -- i.e. with minimal or no overhead added to the structure of the node or layer. In other words: the procedure can be complex, but the complexity should not "leak" into the structures; they should not be made complex just because they are copyable. I wrote the code in C#, and so far it looks like this:

    - a neuron has an additional field, copy_of, which is a pointer to the neuron it was copied from; this is my additional overhead
    - a neuron has a parameterless method Clone()
    - a neuron has a method Reconnect(), which exchanges a connection from the "source" neuron (parameter) to the "target" neuron (parameter)
    - a layer has a parameterless method Clone(); it simply calls Clone() for all its neurons
    - the network has a parameterless method Clone(); it calls Clone() for every layer, then iterates over all neurons creating the neuron=copy_of mappings, and then calls Reconnect to exchange all the "wiring"

    I hope my approach is clear. The question is: is there a more elegant method? I particularly don't like keeping an extra pointer in the neuron class just in case it gets copied! I would like to gather the data in one place (the network's Clone) and then dispose of it completely (the Clone method cannot take an argument, though).
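    One way to get rid of the copy_of field entirely is to keep the old-to-new mapping in a local dictionary inside Network.Clone, so the bookkeeping lives and dies with the call. A minimal sketch (member names such as Sources are assumptions, since the real classes aren't shown):

        using System.Collections.Generic;
        using System.Linq;

        class Neuron
        {
            public List<Neuron> Sources = new List<Neuron>();   // incoming connections
            public double[] Weights = new double[0];

            public Neuron CloneShallow()                        // copies data, not wiring
            {
                return new Neuron { Weights = (double[])Weights.Clone() };
            }
        }

        class Layer
        {
            public List<Neuron> Neurons = new List<Neuron>();
        }

        class Network
        {
            public List<Layer> Layers = new List<Layer>();

            public Network Clone()
            {
                var map = new Dictionary<Neuron, Neuron>();     // old -> new, local only
                var copy = new Network();
                foreach (var layer in Layers)
                {
                    var layerCopy = new Layer();
                    foreach (var neuron in layer.Neurons)
                    {
                        var neuronCopy = neuron.CloneShallow();
                        map[neuron] = neuronCopy;
                        layerCopy.Neurons.Add(neuronCopy);
                    }
                    copy.Layers.Add(layerCopy);
                }
                // Rewire: every connection in the copy points at copied neurons.
                foreach (var pair in map)
                    pair.Value.Sources.AddRange(pair.Key.Sources.Select(s => map[s]));
                return copy;
            }
        }

    Neither Neuron nor Layer gains any copy-related state; the complexity stays inside the one method that needs it.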


  • how to clear XFixes regions

    - by ~buratinas
    Hi, I'm writing some low-level code for the X11 platform. To get the best data-copying performance I use the XFixes/XDamage extensions. How can I clear the contents of an XFixes region after one refresh cycle? Or do they clean themselves after I use XFixesSetPictureClipRegion? My code is something like this:

        Display *xdpy;
        XShamPixmap pixmap_;
        XFixesRegion region_;

        damage_event_callback(damage_geometry_t geometry, XDamage damage, ...)
        {
            unsigned curr_region = XFixesCreateRegion(xdpy, 0, 0);
            XDamageSubtract(xdpy, damage, None, curr_region);
            XFixesTranslateRegion(xdpy, curr_region, geometry.left(), geometry.top());
            XFixesUnionRegion(xdpy, region_, region_, curr_region);
        }

        process_damage_events(...)
        {
            XFixesSetPictureClipRegion(xdpy, pixmap_, 0, 0, region_);
            XCopyArea(xdpy, window_->id(), pixmap_, XDefaultGC(xdpy, XDefaultScreen(xdpy)),
                      0, 0, width(), height(), 0, 0);
            /* Should clear region_ here */
            ...
        }

    Currently I clear the region by deleting and recreating it, but I guess that's not the best way to do it.


  • Developing Air (Flex) Applications for Android and Desktop

    - by Roaders
    I am an experienced Flex and AIR developer and I love Android, having owned a G1, a Milestone (Droid), a Nexus One, a Galaxy S and now a Nexus S. Understandably, I am interested in developing Flex applications for Android. I have just started working through the "Flex for Android in 90 minutes" tutorial here: http://coenraets.org/flexandroid90/FlexAndroid90Minutes.pdf

    The very first step says that I have to create a Flex Mobile Project. I was under the impression that the whole point of AIR is that the same application can run on many different platforms; I was intending to create an AIR app with different skins that could be swapped in and out depending on the platform it was running on. This seems to imply that I will have to compile my AIR app once for desktop and once for mobile. That isn't the end of the world, but it's not quite how I expected it to work. I suppose that if I am creating mobile-specific skins then I may as well create a mobile-specific app.

    Is it possible to create one AIR app that will run on both mobile and desktop? Is this a good idea?


  • Get subdomain of XML page from XSL

    - by fudgey
    I am working with a guild hosting site that provides an XML/XSL transformation widget: all I need to do is enter the URL of the XML and the XSL and it does the rest. I have written an XSL transform to display a World of Warcraft armory page. Here is an example XML page (view source to see it) of the group I'm trying to help now. The user enters their own XML page URL (which has an eu subdomain in this case, but that is not recorded within the XML itself), so when I build links to the character URL, I need to construct the entire URL:

        <a target="_blank" href="http://eu.wowarmory.com/character-sheet.xml?{@url}">
            <xsl:value-of select="@name"/>
        </a>

    But I can't just hard-code the subdomain to eu, since there are multiple regions. Here are the possibilities: us = www, europe = eu, korea = kr, china = cn and taiwan = tw. Here is a snippet of the XML which shows the url parameters:

        <character achPoints="4275" classId="3" genderId="0" level="80" name="Virtex" raceId="4" rank="2" url="r=Drek%27Thar&amp;cn=Virtex"/>

    I guess I could just have the user add a small bit of HTML with their region in another part of their page, something like <div id="region">eu</div>, but I'm trying to make this work without any extra coding on their part.

    Edit: OK, my question stated explicitly: how do I get the URL subdomain using XSL?


  • CSS renders input differently in Firefox on the Mac and Firefox on the PC. Can I detect the OS via JavaScript? Or maybe...

    - by adardesign
    I have an input[type="text"] with padding applied to it that behaves differently in Firefox on the PC than in Firefox on the Mac. Are there any hacks that can target Firefox on the PC only? These are the styles as seen in Firebug in Firefox on the PC:

        .searchContainer input {
            border-color: #7C7C7C #C3C3C3 #DDDDDD;
            border-style: solid;
            border-width: 1px;
            color: #555555;
            float: left;
            height: 12px;
            padding: 3px;
        }

    And these are the styles as seen in Firebug in Firefox on the Mac (identical):

        .searchContainer input {
            border-color: #7C7C7C #C3C3C3 #DDDDDD;
            border-style: solid;
            border-width: 1px;
            color: #555555;
            float: left;
            height: 12px;
            padding: 3px;
        }

    No other styles are applied to these inputs. Here is a snapshot of Firefox on the PC: http://tinyurl.com/2wdxmq5 and here is a snapshot of Firefox on the Mac: http://tinyurl.com/2u7f2nl. Any suggestions?


  • How can I create an Image in GDI+ from a Base64-Encoded string in C++?

    - by Schnapple
    I have an application, currently written in C#, which can take a Base64-encoded string and turn it into an Image (a TIFF image in this case), and vice versa. In C# this is actually pretty simple:

        private byte[] ImageToByteArray(Image img)
        {
            MemoryStream ms = new MemoryStream();
            img.Save(ms, System.Drawing.Imaging.ImageFormat.Tiff);
            return ms.ToArray();
        }

        private Image byteArrayToImage(byte[] byteArrayIn)
        {
            MemoryStream ms = new MemoryStream(byteArrayIn);
            BinaryWriter bw = new BinaryWriter(ms);
            bw.Write(byteArrayIn);
            Image returnImage = Image.FromStream(ms, true, false);
            return returnImage;
        }

        // Convert Image into string
        byte[] imagebytes = ImageToByteArray(anImage);
        string Base64EncodedStringImage = Convert.ToBase64String(imagebytes);

        // Convert string into Image
        byte[] imagebytes = Convert.FromBase64String(Base64EncodedStringImage);
        Image anImage = byteArrayToImage(imagebytes);

    (And, now that I'm looking at it, this could be simplified even further.) I now have a business need to do this in C++. I'm using GDI+ to draw the graphics (Windows only so far) and I already have code to decode the string in C++ (to another string). What I'm stumbling on, however, is getting the information into an Image object in GDI+. At this point I figure I need either:

    a) a way of converting that Base64-decoded string into an IStream to feed to the Image object's FromStream function;
    b) a way to convert the Base64-encoded string into an IStream to feed to the Image object's FromStream function (so, different code than I'm currently using); or
    c) some completely different way I'm not thinking of here.

    My C++ skills are very rusty and I'm also spoiled by the managed .NET platform, so if I'm attacking this all wrong I'm open to suggestions.


  • jQuery AJAX POST of a JPEG image to a .NET web service; the image arrives corrupted

    - by sosergio
    I have a PhoneGap jQuery app that opens the camera and takes a picture. I then POST this picture to a .NET web service which I've coded. I can't use PhoneGap's FileTransfer because it isn't supported on Bada OS, which is a requirement. I believe I've successfully loaded the image via the PhoneGap FileSystem API, attached it to an $.ajax POST, and even received it on the .NET side, but when .NET saves the image to the server, the image is corrupted. It seems to me that the two sides of the communication are using different data types. Does anyone have experience with this? Any help will be appreciated. This is my code:

        // PHONEGAP CAMERA ACCESS (summed up)
        navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail,
            { quality: 50, destinationType: Camera.DestinationType.FILE_URI });
        window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError);
        fileEntry.file(gotFileSuccess, gotFileError);
        new FileReader().readAsDataURL(file);

        // UPLOAD FILE
        function onDataReadSuccess(evt) {
            var image_data = evt.target.result;
            var filename = unique_id();
            var filext = "jpg";
            $.ajax({
                type: 'POST',
                url: SERVICE_BASE_URL + "/fotos/" + filename + "?ext=" + filext,
                cache: false,
                timeout: 100000,
                processData: false,
                data: image_data,
                contentType: 'image/jpeg',
                success: function(data) {
                    console.log("Data Uploaded with success. Message: " + data);
                    $.mobile.hidePageLoadingMsg();
                    $.mobile.changePage("ok.html");
                }
            });
        }

    On my .NET web service, this is the method that gets invoked:

        public string FotoSave(string filename, string extension, Stream fileContent)
        {
            string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename;
            FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write);
            int Length = 256;
            Byte[] buffer = new Byte[Length];
            int bytesRead = readStream.Read(buffer, 0, Length);
            // write the required bytes
            while (bytesRead > 0)
            {
                writeStream.Write(buffer, 0, bytesRead);
                bytesRead = readStream.Read(buffer, 0, Length);
            }
            readStream.Close();
            writeStream.Close();
        }
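    One plausible mismatch (an assumption from the code shown, not a confirmed diagnosis): FileReader.readAsDataURL produces a base64 data URL string ("data:image/jpeg;base64,..."), while the service writes the request body verbatim as if it were binary JPEG, which would corrupt the file in exactly this way. A sketch of decoding it on the .NET side; SaveBase64Jpeg is a hypothetical helper:

        using System;
        using System.IO;

        static class UploadHelper
        {
            // Sketch: treat the uploaded body as base64 text and decode it to bytes.
            // Assumes the client sent the FileReader.readAsDataURL() result unchanged.
            public static void SaveBase64Jpeg(Stream fileContent, string filePath)
            {
                string body;
                using (var reader = new StreamReader(fileContent))
                    body = reader.ReadToEnd();

                // Strip the "data:image/jpeg;base64," prefix if present.
                int comma = body.IndexOf(',');
                string base64 = comma >= 0 ? body.Substring(comma + 1) : body;

                File.WriteAllBytes(filePath, Convert.FromBase64String(base64));
            }
        }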


  • How important is the programming language when you choose a new job?

    - by Luhmann
    We are currently hiring at the company where I work, and our codebase is in VB.NET. We are worried that we miss out on a lot of brilliant programmers who would never consider working with VB.NET. My own background is Java and C#, and I was somewhat sceptical as to whether it would work out with VB; to be honest, I didn't care much for VB. After a month or so I was completely fluent in VB, and a few months later I discovered to my surprise that I actually like it. I still code my free-time projects in C# and Boo, though.

    So my question is, firstly: how important is the language to you when you choose a new programming job? Let's say it's a great company, the salary is good, and it is generally an attractive workplace. Would you say no to the perfect job if the language wasn't your preferred dialect? VB versus C# is one thing, but what about Java versus C#, etc.? Secondly, if the best developers won't join your company because of your language or platform, would you consider changing to get the right people?

    (This is not a language-bashing thread, so please no religious language wars.) NB: This is Community Wiki.


  • What is the best way to handle connections to MySQL from C#?

    - by srk
    I am working on a C# application which connects to a MySQL server. There are about 20 functions which connect to the database, and the application will be deployed on over 200 machines. I use the code below, which is identical in all the functions, to connect to the database. The problem is that I can see connections that were not closed and are still alive once the application is deployed on those machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Global declaration of the connection in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor =
            new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                    System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                    System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }
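    For comparison, a sketch of the usual per-call pattern: a short-lived connection in a using block (ADO.NET pools the physical connections, so opening and closing per query is cheap) plus parameters instead of string concatenation. ConfigurationManager stands in for the deprecated ConfigurationSettings, and the stray comma before FROM is dropped; adjust names as needed:

        // Sketch: one pooled connection per call; 'using' guarantees Close/Dispose
        // even when an exception is thrown, so no connection is left alive.
        public static DataSet CheckLogin_Instructor(string userName, string password)
        {
            var dsValue = new DataSet();
            string connStr = ConfigurationManager.AppSettings["Con_Admin"];
            const string query =
                "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password` " +
                "FROM accounts " +
                "WHERE accounts.str_nric = @user AND accounts.str_password = @pass";

            using (var conn = new MySqlConnection(connStr))
            using (var cmd = new MySqlCommand(query, conn))
            {
                cmd.Parameters.AddWithValue("@user", userName);
                cmd.Parameters.AddWithValue("@pass", password);
                using (var da = new MySqlDataAdapter(cmd))
                {
                    da.Fill(dsValue);   // Fill opens and closes the connection itself
                }
            }
            return dsValue;
        }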


  • Elegantly determine if more than one boolean is "true"

    - by Ola Tuvesson
    I have a set of five boolean values. If more than one of these is true I want to execute a particular function. What is the most elegant way you can think of to check this condition in a single if() statement? The target language is C#, but I'm interested in solutions in other languages as well (as long as we're not talking about specific built-in functions). One interesting option is to store the booleans in a byte, do a right shift and compare with the original byte. Something like:

        if (myByte & (myByte >> 1))

    But this would require converting the separate booleans to a byte (via a BitArray?) and that seems a bit (pun intended) clumsy...

    [edit] Sorry, that should have been:

        if (myByte & (myByte - 1))

    [/edit]

    Note: this is of course very close to the classical "population count", "sideways addition" or "Hamming weight" programming problem, but not quite the same. I don't need to know how many of the bits are set, only whether it is more than one. My hope is that there is a much simpler way to accomplish this.
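    For reference, a sketch of the plain counting version in C#, which keeps the single if and skips the byte packing entirely (DoParticularFunction is a hypothetical stand-in):

        using System.Linq;

        class Demo
        {
            static void DoParticularFunction() { /* hypothetical target function */ }

            static void Check(bool a, bool b, bool c, bool d, bool e)
            {
                // Count the true values; fire only when more than one is set.
                if (new[] { a, b, c, d, e }.Count(x => x) > 1)
                    DoParticularFunction();
            }
        }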


  • How to force a WebPart to appear on all pages of a portal in ASP.NET?

    - by Mehdi
    Hi, I'm working on a portal/CMS project and (unfortunately) built its foundation on the WebParts platform. I need to provide an option for the admin to choose whether a web part should be displayed on all pages or not. I found a nice article by Damon Armstrong that describes a way to store all the personalization data of a group of pages in one record, so every change the admin makes to a web part affects the whole group. But it doesn't seem to be a solution for me, for these reasons:

    1) The above solution works on a group of pages: we can select which pages display all the web parts, but we expect the reverse -- to select which web part is displayed on all pages.
    2) After some data entry and adding web parts to pages, we'll face an issue with the massive size of the personalization record, which has to be serialized and deserialized to display the contents of each page.

    Maybe it could be solved by writing another custom personalization provider, or by some hacking on the WebParts system, but I don't know how. Any ideas about the problem? Thanks


  • Boost::Asio - Removing the null character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio raises an "End of file" error. After some further research, I found by packet sniffing that my TCP packets to the Messenger server get an extra "null" character (ASCII code 00) at the end of the message. This means my VER command gets an extra character at the end, which I don't think the Messenger server likes, and it therefore shuts down the connection when I try to read the CVR response.

    This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    And this is how Adium (a chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or whether I've misunderstood something and am using Asio the wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            // sendBuf is a 20-byte array: 19 characters plus the terminating '\0',
            // and boost::asio::buffer(sendBuf) takes the whole array's size.
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. I hope I've been clear enough.


  • [Cocoa] Placing an NSTimer in a separate thread

    - by ndg
    I'm trying to set up an NSTimer in a separate thread so that it continues to fire while users interact with the UI of my application. This seems to work, but Leaks reports a number of issues, and I believe I've narrowed them down to my timer code. Currently what happens is that updateTimer tries to access an NSArrayController (timersController) which is bound to an NSTableView in my application's interface. From there, I grab the first selected row and alter its timeSpent column. From reading around, I believe what I should be trying to do is execute the updateTimer function on the main thread rather than in my timer's secondary thread. I'm posting here in the hope that someone with more experience can tell me whether that's the only thing I'm doing wrong. Having read Apple's documentation on threading, I've found it an overwhelmingly large subject area.

        NSThread *timerThread = [[[NSThread alloc] initWithTarget:self
                                                         selector:@selector(startTimerThread)
                                                           object:nil] autorelease];
        [timerThread start];

        -(void)startTimerThread
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
            activeTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                                                            target:self
                                                          selector:@selector(updateTimer:)
                                                          userInfo:nil
                                                           repeats:YES] retain];
            [runLoop run];
            [pool release];
        }

        -(void)updateTimer:(NSTimer *)timer
        {
            NSArray *selectedTimers = [timersController selectedObjects];
            id selectedTimer = [selectedTimers objectAtIndex:0];
            NSNumber *currentTimeSpent = [selectedTimer timeSpent];
            [selectedTimer setValue:[NSNumber numberWithInt:[currentTimeSpent intValue] + 1]
                             forKey:@"timeSpent"];
        }

        -(void)stopTimer
        {
            [activeTimer invalidate];
            [activeTimer release];
        }


  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: they dump all of their typedefs into the boost namespace. This leaves me with three choices when using this facility:

    1) use "using namespace boost"
    2) use "using boost::[u]<type><width>_t"
    3) explicitly refer to the target type with the boost:: prefix; e.g., boost::uint32_t foo = 0;

    Option 1 kind of defeats the point of namespaces. Even if it is used within a local scope (e.g., within a function), things like function arguments still have to be prefixed as in option 3. Option 2 is better, but there are a bunch of these types, so it can get noisy. Option 3 adds an extreme level of noise; the boost:: prefix is often equal in length to the type in question.

    My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that uses option 2 and be done with it?

    Also, wrapping the header like this didn't work on VC++ 10 (problems with standard library headers):

        namespace Foo {
            #include <boost/cstdint.hpp>
            using namespace boost;
        }
        using namespace Foo;

    Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.


  • What is the best practice for including third party jar files in a Java program?

    - by ZoFreX
    I have a program that needs several third-party libraries, and at the moment it is packaged like so:

        zerobot.jar (my file)
        libs/pircbot.jar
        libs/mysql-connector-java-5.1.10-bin.jar
        libs/c3p0-0.9.1.2.jar

    As far as I know, the "best" way to handle third-party libs is to put them on the classpath in the manifest of my jar file, which works cross-platform, won't slow down launch (which bundling them might) and doesn't run into legal issues (which repackaging might). The problem is for users who supply the third-party libraries themselves (an example use case: upgrading one of them to fix a bug). Two of the libraries have the version number in the file name, which adds hassle.

    My current solution is that my program has a bootstrapping process which makes a new classloader and instantiates the program proper using it. This custom classloader adds all the .jar files in libs/ to its classpath. My current way works fine, but I now have two custom classloaders in my application, and a recent change to the code has caused issues that are difficult to debug, so if there is a better way I'd like to remove this complexity. It also seems like over-engineering for what I'm sure is a very common situation.

    So my question is: how should I be doing this?


  • How can one convince a team to use a new technology (LINQ, MVC, etc.)?

    - by Atomiton
    Obviously, it's easier to do with some developers, but I'm sure many of us are on teams that prefer the status quo. You know the type: you see some benefit in a piece of new technology, and they prefer the tried and true methods. Try, for example, selling a DBA/C# programmer on the advantages of using LINQ (not necessarily LINQ to SQL, just LINQ in general). Or, when a project requirement is to be cross-platform, introducing the idea of using the relatively new Silverlight, or of creating it in Java (as an option to look into), instead of thinking about how one can run Windows on a Mac through a VM. I know most people don't like to be out of their comfort zone, so it takes a bit of convincing, and not ALL new technology makes business sense... but how have you convinced your team to look at a new technology?

    What technologies have you successfully introduced to your workplace? What technologies do you think are hardest to introduce? (I'm thinking paradigm-shifting ones, like MVC from WebForms... or new languages.) What strategies do you employ to make these new technologies appealing?

    (This is not a language-bashing thread, so please no religious language wars.) NB: This is Community Wiki.


  • Developing on a Windows machine that interacts with a Linux system

    - by Jamie
    Sorry for the bad title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.), so obviously I can't do all my development on the Windows machine, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it.

    On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster "master" machine and therefore has quite a special configuration.

    Thanks guys!

    EDIT: I've also thought about having web services on the Linux machine and just calling them from code, thus separating development from the platform dependency. What do you think about that too? Thanks.


  • Flash compiler error 1061: Call to a possibly undefined method run... but run exists!

    - by Zane Geiger
    So I've been working on making a game in Processing, but I think Flash would be a better way to get more people playing it, so I've decided to learn Flash. The problem is that I keep getting really stupid errors on incredibly simple things. For instance, I want to make a 'Block' object to use in a platform game. So I make a new .as file, name it Block.as, and define the Block class within it like so:

        package {
            public class Block {
                public function Block() {
                    // constructor code
                }
                public function run() {
                }
            }
        }

    I don't want to add the code yet; I just want to ensure that this works. So in my main timeline code, I try to create an instance of the Block object and execute its run method:

        var block1:Block = new Block();
        block1.run();

    Every time, it gives me this inane error:

        Scene 1, Layer 'Layer 1', Frame 1, Line 2
        1061: Call to a possibly undefined method run through a reference with static type Block.

    What undefined method!? It's defined RIGHT THERE in Block.as. The class file is even in the same folder and everything. I'm getting REALLY annoyed at how poorly Flash handles such a ridiculously simple project. Does anyone know why Flash hates me?


  • How can I get a JUnit test (driven from an Ant script) to dump the stack trace of the exception that causes the failure?

    - by Matt Wang
    We run JUnit tests from an Ant script, as follows. When a test fails, I expect it to output the stack trace of the exception that causes the failure, but it doesn't. Is there any trick to get it dumped?

        <target description="Run JUnit tests" name="run-junit" depends="build-junit">
            <copy file="./AegisLicense.txt" tofile="test/junit/classes/AegisLicense.txt" overwrite="true"/>
            <junit printsummary="yes" haltonfailure="no" fork="yes" forkmode="once"
                   failureproperty="run-aegis-junit-failed" showoutput="yes" filtertrace="off">
                <classpath refid="Aegisoft.testsupport.classpath"/>
                <classpath>
                    <pathelement location="test/junit/classes"/>
                </classpath>
                <batchtest>
                    <fileset dir="test/junit/src">
                        <include name="**"/>
                    </fileset>
                </batchtest>
            </junit>
            <fail


  • WPF: Command routing for keyboard shortcuts

    - by Sprotty
    Basically, I want to create a keyboard shortcut which is valid within the scope of a window, not just enabled when focus is within the control that binds it.

    In more detail: I have a window which has three controls:

    - a toolbar
    - a textbox
    - a custom control

    The toolbar has a button bound to the command CustomCommands.CmdA and linked to Ctrl+T. My custom control can process CmdA. When I run the app and click on my custom control, CmdA is enabled and works fine, and Ctrl+T causes the command to fire. However, when I select the textbox, my custom command CmdA becomes disabled. I can rectify this by setting the CommandTarget of CmdA's button; now when I select the textbox, CmdA is still enabled, but the keyboard shortcut Ctrl+T does nothing.

    Is there an easy way to change the scope of keyboard shortcuts? Or do I need to catch the keypress somewhere lower down and work out which command it relates to and route it myself (and if so, is there a framework within which to do this)?

    Many thanks, Simon
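    One window-scoped arrangement (a sketch; customControl is an assumed field naming the custom control instance) is to register the gesture in the Window's InputBindings and set the binding's CommandTarget, so Ctrl+T routes to the custom control no matter which child has focus:

        using System.Windows.Input;

        public partial class MainWindow
        {
            public MainWindow()
            {
                InitializeComponent();

                // Bind Ctrl+T at window scope and force where the command is raised.
                var binding = new KeyBinding(
                    CustomCommands.CmdA,                       // the routed command from the post
                    new KeyGesture(Key.T, ModifierKeys.Control))
                {
                    CommandTarget = customControl              // route here even when the textbox has focus
                };
                InputBindings.Add(binding);
            }
        }

    This mirrors what setting CommandTarget on the toolbar button did for the button, applied to the key gesture itself.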


  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:

    1) The root Entity has global identity and is ultimately responsible for checking invariants.
    2) Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3) Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but those objects can only use them transiently (within a single method or block).
    4) Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5) Objects within the Aggregate can hold references to other Aggregate roots.
    6) A delete operation must remove everything within the Aggregate boundary all at once.
    7) When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example: once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how it would be enforced within a C#/.NET/NHibernate environment.)
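    In a C#/.NET codebase, rule 3 is usually approximated at the API level rather than truly enforced: the root hands out read-only views or copies, never the mutable internal entity. A sketch with illustrative types (the Order/OrderLine names are not from Evans):

        using System;
        using System.Collections.Generic;

        public class OrderLine                    // internal entity, immutable from outside
        {
            public string Sku { get; private set; }
            public int Quantity { get; private set; }
            internal OrderLine(string sku, int quantity) { Sku = sku; Quantity = quantity; }
        }

        public class Order                        // aggregate root
        {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            // Callers can read but never add, remove, or replace lines.
            public IEnumerable<OrderLine> Lines { get { return lines.AsReadOnly(); } }

            // All mutation goes through the root, which checks the aggregate invariants.
            public void AddLine(string sku, int quantity)
            {
                if (quantity <= 0)
                    throw new ArgumentOutOfRangeException("quantity");
                lines.Add(new OrderLine(sku, quantity));
            }
        }

    The transient-use part of rule 3 can't be expressed in the type system; the convention holds because the reference handed out offers nothing worth keeping.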

