Search Results

Search found 8638 results on 346 pages for 'f vs c'.

Page 319 of 346

  • Is it possible to wrap an asynchronous event and its callback in a function that returns a boolean?

    - by Rob Flaherty
    I'm trying to write a simple test that creates an image element, checks the image attributes, and then returns true/false. The problem is that using the onload event makes the test asynchronous. On its own this isn't a problem (using a callback as I've done in the code below is easy), but what I can't figure out is how to encapsulate this into a single function that returns a boolean. I've tried various combinations of closures, recursion, and self-executing functions but have had no luck. So my question: am I being dense and overlooking something simple, or is this in fact not possible, because, no matter what, I'm still trying to wrap an asynchronous function in synchronous expectations? Here's the code:

        var supportsImage = function(callback) {
            var img = new Image();
            img.onload = function() {
                // Check attributes and pass true or false to callback
                callback(true);
            };
            img.src = 'data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=';
        };

        supportsImage(function(status){
            console.log(status);
        });

    To be clear, what I want is to be able to wrap this in something such that it can be used like:

        if (supportsImage) {
            // Do some crazy stuff
        }

    Thanks! (Btw, I know there are a ton of SO questions regarding confusion about synchronous vs. asynchronous. Apologies if this can be reduced to something previously answered.)

    Read the article

  • Why is this unordered list formatting differently in IE7?

    - by Joel
    I'm better about getting things to look good in IE8, FF, and Safari, but IE7 still throws curve balls at me... Please check out this page and scroll down below the nav bar: http://rattletree.com/instruments.php It should become obvious when viewing in FF vs IE7. For some reason the formatting of the list is pushing the list items down on the page... any tips?

        <ul class="instrument">
            <li class="imagebox"><img src="/images/stuff.jpg" width="247" height="228" alt="Matepe" /></li>
            <li class="textbox"><h3>Matepe</h3><p>This text should be to the right of the image but drops below the image in IE7</p></li>
        </ul>

    css:

        ul.instrument {
            text-align:left;
            display:inline-block;
        }
        ul.instrument li {
            list-style-type: none;
            display:inline-block;
        }
        li.imagebox {
            display:inline;
            margin:20px 0;
            padding:0px;
            vertical-align:top;
        }
        li.imagebox img{
            border: solid black 1px;
        }
        li.textbox {
            display:inline;
        }
        li.textbox p{
            margin:10px;
            width:340px;
            display:inline-block;
        }

    Read the article

  • Short file names versus long file names in Windows

    - by normski
    I have some code which gets the short name from a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following conversion to short and then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if(pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if(newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if(pathLengthNeeded != 0)
                {   // convert it back to a long path if possible. For goodness sake can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if(newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }

    Read the article

  • Compile C++ in Visual Studio

    - by Kasun
    Hi all. I use this method to compile a C++ file in VS, but even when I provide the correct file it returns false. Can anyone help me? This is a class called CL:

        class CL
        {
            private const string clexe = @"cl.exe";
            private const string exe = "Test.exe", file = "test.cpp";
            private string args;

            public CL(String[] args)
            {
                this.args = String.Join(" ", args);
                this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file;
            }

            public Boolean Compile(String content, ref string errors)
            {
                if (File.Exists(exe)) File.Delete(exe);
                if (File.Exists(file)) File.Delete(file);
                File.WriteAllText(file, content);

                Process proc = new Process();
                proc.StartInfo.UseShellExecute = false;
                proc.StartInfo.RedirectStandardOutput = true;
                proc.StartInfo.RedirectStandardError = true;
                proc.StartInfo.FileName = clexe;
                proc.StartInfo.Arguments = this.args;
                proc.StartInfo.CreateNoWindow = true;
                proc.Start();
                //errors += proc.StandardError.ReadToEnd();
                errors += proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();

                bool success = File.Exists(exe);
                return success;
            }
        }

    This is my button click event:

        private void button1_Click(object sender, EventArgs e)
        {
            string content = "#include <stdio.h>\nmain(){\nprintf(\"Hello world\");\n}\n";
            string errors = "";
            CL k = new CL(new string[] { });
            if (k.Compile(content, ref errors))
                Console.WriteLine("Success!");
            else
                MessageBox.Show("Errors are : ", errors);
        }
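
    As an aside (not from the original post): when both standard output and standard error are redirected and one of them is read synchronously with ReadToEnd, the child process can deadlock, and cl.exe writes its diagnostics to stderr, so they are easy to miss. A minimal, hedged sketch of reading both streams asynchronously so the compiler output is actually captured; it assumes cl.exe is resolvable from the process environment (e.g. a VS command prompt):

        using System.Diagnostics;
        using System.Text;

        static string RunCompiler(string arguments)
        {
            var proc = new Process();
            proc.StartInfo.FileName = "cl.exe";          // assumed to be on PATH
            proc.StartInfo.Arguments = arguments;
            proc.StartInfo.UseShellExecute = false;
            proc.StartInfo.RedirectStandardOutput = true;
            proc.StartInfo.RedirectStandardError = true;
            proc.StartInfo.CreateNoWindow = true;

            var output = new StringBuilder();
            // Collect stdout and stderr as they arrive instead of blocking on ReadToEnd.
            proc.OutputDataReceived += (s, ev) => { if (ev.Data != null) output.AppendLine(ev.Data); };
            proc.ErrorDataReceived  += (s, ev) => { if (ev.Data != null) output.AppendLine(ev.Data); };

            proc.Start();
            proc.BeginOutputReadLine();   // drain both streams asynchronously to avoid deadlock
            proc.BeginErrorReadLine();
            proc.WaitForExit();

            return output.ToString();
        }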

    Read the article

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread

    By commenting out code, I have found out that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what may be the problem (and the solution)? Thanks!

    Edit: I tried changing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0

        #include <boost/asio.hpp>

        int main(int argc, char* argv[])
        {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking on compilation, however this minimal example still gives me a segmentation fault.

    Read the article

  • Entity framework with Linq to Entities performance

    - by mare
    If I have a static method like this:

        public static string GetTicClassificationTitle(string classI, string classII, string classIII)
        {
            using (TicDatabaseEntities ticdb = new TicDatabaseEntities())
            {
                var result = from classes in ticdb.Classifications
                             where classes.ClassI == classI
                             where classes.ClassII == classII
                             where classes.ClassIII == classIII
                             select classes.Description;

                return result.FirstOrDefault();
            }
        }

    and use this method in various places, in foreach loops or just plainly calling it numerous times, does it create and open a new connection every time? If so, how can I tackle this? Should I cache the results somewhere (in this case, I would cache the entire Classifications table in a memory cache) and then run queries against this cached object? Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? Because right now it is not. Also, I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it will throw an exception (with FirstOrDefault() there is no exception; it returns null). Thank you for clarification.
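
    A minimal sketch of the caching idea described above, offered as an aside: the entity type name Classification and the assumption that the whole table fits comfortably in memory are both assumptions, not from the original post. The context is still created once, only for the initial load:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical in-memory cache for the Classifications table.
        public static class ClassificationCache
        {
            private static readonly object _sync = new object();
            private static List<Classification> _all;

            private static List<Classification> All
            {
                get
                {
                    lock (_sync)
                    {
                        if (_all == null)
                        {
                            // One short-lived context, used only for the initial load.
                            using (var ticdb = new TicDatabaseEntities())
                            {
                                _all = ticdb.Classifications.ToList();
                            }
                        }
                        return _all;
                    }
                }
            }

            public static string GetTitle(string classI, string classII, string classIII)
            {
                return All
                    .Where(c => c.ClassI == classI && c.ClassII == classII && c.ClassIII == classIII)
                    .Select(c => c.Description)
                    .FirstOrDefault();
            }
        }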

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin. My sdf file is located inside of an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf was copied to debug\bin, and this is where all future reads/writes operate. At some point when this is deployed the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf. It appears that writing to the sdf file does not immediately update the database. I am using Linq-to-SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    3. (Update) Unit tests. I have an MS test project, and the sdf file is not being copied to the correct output directory. I have the settings Build Action: Content and Copy to Output Directory: Copy Always. The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT  t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
        INTO    [#Gadget]
        FROM    task t

        SELECT  TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
        FROM    [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT  t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
            FROM    task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables, so it should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on?

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking).

    Going through articles on TCP, I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, so the maximum is 65535 bytes. But implementations often use the RFC window-scale extension, which allows the sliding window size to be scaled by bit-shifting to yield a maximum of 1 gigabyte; this is implementation-defined. This could have resulted in very different window sizes on the receiver and sender ends, as the server uses the Qt facilities without hardcoding any window size limit.

    The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes these and puts the results into the send buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. The messages would thus accumulate at the sender's end until the sender's buffer is full too, after which the TCP writes on the sender would block. This blocking does not happen, as the sender's buffer is much larger, so it manifests as an increase in memory consumption on the sender's end.

    To prevent this from happening, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As I see from the behaviour, this blocks the thread writing TCP data until the data has actually been taken up by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible.

    Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understanding is incorrect.

    Read the article

  • "Ghost" values in PHP/Smarty.

    - by Kyle Sevenoaks
    I've been working on a site for a while, changing the layout and skin of a webshop checkout process. I've noticed that if you go all the way through the process until the last page, then click the link to go back to the view products page, the delivery method price displays underneath the navigation buttons, until you refresh and it goes away again. I've downloaded both sources from the browser (Chrome, but this bug applies to all browsers) and used a file difference tool to display the differences, the result being only:

        < error.html vs > normal.html
        34c34
        < <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272181" rel="Stylesheet" type="text/css"/>
        ---
        > <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272348" rel="Stylesheet" type="text/css"/>

    Which is just the way it zips up the CSS stylesheets (afaik). Has anyone ever encountered such a problem, or anything similar? (The original post included "Normal" and "Error" screenshots here.) I can't even hazard a guess as to what is causing this, at all. I've searched all over Google and come up with nothing. What could be causing this? The site in question is Euroworker.no. HTML @ Pastebin. Smarty snippet:

        {if !$CANONICAL}
            {canonical}{self}{/canonical}
        {/if}
        <link rel="canonical" href="{$CANONICAL}" />
        <!-- Css includes -->
        {includeCss file="frontend/Frontend.css"}
        {includeCss file="backend/stat.css"}
        {if {isRTL}}
            {includeCss file="frontend/FrontendRTL.css"}
        {/if}
        {compiledCss glue=true nameMethod=hash}
        <!--[if lt IE 8]>
            <link href="stylesheet/frontend/FrontendIE.css" rel="Stylesheet" type="text/css"/>
            {if $ieCss}
                <link href="{$ieCss}" rel="Stylesheet" type="text/css"/>
            {/if}
        <![endif]-->

    Thanks.

    Read the article

  • TFS2010 API - Which server event fires when checkin notes are changed?

    - by user3708981
    I've written a TFS plugin that implements the ISubscriber interface and creates an external ticket based off of the contents of a check-in note. What I would like is that when I go back through older TFS check-ins in VS and edit a check-in note, the plugin processes that event and creates an external ticket retroactively. Which event / SubscribedType do I need to subscribe to in order for ProcessEvent to fire? My stubbed-out code:

        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Common;
        using Microsoft.TeamFoundation.VersionControl.Client;
        // From C:\Program Files\Microsoft Team Foundation Server 2010\Tools\
        using Microsoft.TeamFoundation.Framework.Server;
        using Microsoft.TeamFoundation.VersionControl.Server;
        using Changeset = Microsoft.TeamFoundation.VersionControl.Server.Changeset;

        public class EmbeddedWorkItemEventHandler : ISubscriber
        {
            const string EVENT_NAME = "TicketEvent";
            const string APP_LOG = "Application";

            public Type[] SubscribedTypes()
            {
                return new Type[1] { typeof(CheckinNotification) }; // What else do I need here?
            }

            public string Name
            {
                get { return EVENT_NAME; }
            }

            public SubscriberPriority Priority
            {
                get { return SubscriberPriority.Normal; }
            }

            public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType,
                object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties)
            {
                // Create the event source, if it doesn't exist
                if (!System.Diagnostics.EventLog.SourceExists(EVENT_NAME))
                {
                    System.Diagnostics.EventLog.CreateEventSource(EVENT_NAME, APP_LOG);
                }

                statusCode = 0;
                properties = null;
                statusMessage = String.Empty;
                string ErrorLine = "";
                try
                {
                    // Here we'll validate the Ticket name
                    if (notificationType == NotificationType.DecisionPoint && notificationEventArgs is CheckinNotification)
                    {
                        // Check-in blocking logic here.
                    }
                    else if (notificationType == NotificationType.Notification && notificationEventArgs is CheckinNotification)
                    {
                        // Tickets on check-in here.
                    }
                }
                catch
                {
                    // Error checking
                }
                return EventNotificationStatus.ActionPermitted;
            }
        }

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result, especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU / SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case...

    Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which depends heavily on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input.

    These are not options:

    - decimals (too slow)
    - fixed-point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
    - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?

    Read the article

  • How to access web.config connection string in C#?

    - by salvationishere
    I have 32-bit XP running VS 2008 and I am trying to decrypt my connection string from my web.config file in my C# ASPX file. Even though no errors are returned, my current connection string doesn't display the contents of my selected AdventureWorks stored procedure. I entered:

        C:\Program Files\Microsoft Visual Studio 9.0\VC>Aspnet_regiis.exe -pe "connectionStrings" -app "/AddFileToSQL2"

    Then it said "Succeeded". And my web.config section looks like:

        <connectionStrings>
            <add name="Master"
                 connectionString="server=MSSQLSERVER;database=Master; Integrated Security=SSPI"
                 providerName="System.Data.SqlClient" />
            <add name="AdventureWorksConnectionString"
                 connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
            <add name="AdventureWorksConnectionString2"
                 connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Persist Security Info=true; "
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    And my C# code-behind looks like:

        string connString = ConfigurationManager.ConnectionStrings["AdventureWorksConnectionString2"].ConnectionString;

    Is there something wrong with the connection string in the web.config or the C# code-behind file?
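
    As a point of reference (not from the original question), a minimal sketch of reading that connection string and executing a stored procedure with it; the procedure name uspGetEmployees is a placeholder, not something taken from the post:

        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;

        static void RunStoredProcedure()
        {
            // Read the connection string defined in web.config.
            string connString = ConfigurationManager
                .ConnectionStrings["AdventureWorksConnectionString2"].ConnectionString;

            using (var conn = new SqlConnection(connString))
            using (var cmd = new SqlCommand("dbo.uspGetEmployees", conn)) // placeholder procedure name
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Bind or process each row here.
                    }
                }
            }
        }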

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed for the task to be performed. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790",
                  "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
            getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function, using a for-loop:

        foo2 <- function(loci) {
            for (i in loci) {
                getBM("description", "tair_locus", loci[i], athaliana)
            }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best-performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

    Read the article

  • Why doesn't line-height work on FF/3.5.8(Mac)?

    - by Znarkus
    I can't get line-height on a text input to work on Firefox 3.5.8 (Mac). It works flawlessly on: IE6, IE7, IE8, FF3.6/PC, FF3.6/Mac, and Safari. Test code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
            <title>asd</title>
            <link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/2.8.0r4/build/reset/reset-min.css" />
        </head>
        <body>
            <input type="text" value="Hello" style="line-height:50px;height:50px;font-size:16px;" />
            <input type="text" value="Hello" style="padding:17px 0;font-size:16px;" />
        </body>
        </html>

    Is there an alternate solution, or any idea how to fix this?

    Edit: Updated the test code to compare the line-height vs. padding technique. Padding works on all of the above browsers except IE8. Whaat? I can't test on FF/3.5.8 anymore; could someone please report the result from this browser on any platform? I'm now thinking this is a Firefox 3.5.8 issue, platform-independent.

    Read the article

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within each thread). The problem persists if the threads render the upper and lower parts of the screen respectively. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? Again, from what I've understood, this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image: (spam prevention stops me from posting images directly.) P.S. I use exactly the same implementation in both cases; the ONLY difference is a single thread vs. two threads created for the rendering.

    Read the article

  • INotifyPropertyChanged Setter Style

    - by Ivovic
    In order to reflect changes in your data to the UI you have to implement INotifyPropertyChanged, okay. If I look at examples, articles, tutorials, etc., most of the time the setters look something like this:

        public string MyProperty
        {
            //get [...]
            set
            {
                if (_correspondingField == value)
                {
                    return;
                }
                _correspondingField = value;
                OnPropertyChanged("MyProperty");
            }
        }

    No problem so far; only raise the event if you have to, cool. But you could rewrite this code as:

        public string MyProperty
        {
            //get [...]
            set
            {
                if (_correspondingField != value)
                {
                    _correspondingField = value;
                    OnPropertyChanged("MyProperty");
                }
            }
        }

    It should do the same (?), you only have one place of return, it is less code, it is less boring code, and it is more to the point ("only act if necessary" vs. "if not necessary do nothing, otherwise act"). So if the second version has its pros compared to the first one, why do I see this style so rarely? I don't consider myself smarter than the people who write frameworks, publish articles, etc., therefore the second version has to have drawbacks. Or is it wrong? Or do I think too much? Thanks in advance.
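
    For context, a minimal sketch of the OnPropertyChanged helper that both snippets above assume, on a hypothetical base class (the class name ObservableObject is an assumption, not from the original post):

        using System.ComponentModel;

        // Hypothetical base class supplying the OnPropertyChanged helper used in the setters above.
        public abstract class ObservableObject : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }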

    Read the article

  • In the meantime, to be or not to be ... productive!

    - by Jan Kuboschek
    I just moved back to Europe from the US after living there for 7 years. Apart from major adjustment issues, I'm currently looking for a job over here. I'm mainly interested in (IT) Consulting and, since these jobs typically require programming knowledge, such as Java, I'm trying to think of something productive to write (perhaps to demo my skills) while I'm waiting for my interviews (starting in two weeks. Folks here are a bit slower than in the US apparently...). I graduated from college about a year and a half ago and have a 4 year degree in international management/economics and about 3/4 of a 2 year degree in computer science finished. I've written my fair share of web software over the years, but nothing concrete that I could show, especially not in Java. Now, I've never had the problem of not having any idea what to write. Basic games I could write, but I'm not sure how well that'll come over when I walk into my interview and say "hey, I was bored. Take a look at my multiplayer space invaders game! Wanna try beating me??". Any thoughts? I browsed SourceForge the other day to find a nice little project to contribute to, but decided that I don't want to commit to someone else's project at this time. Any ideas, perhaps from someone who has been, or currently is, in a similar position would be much appreciated. Oh, and lastly: Instead of developing a program or two to demonstrate my skills, I could spend my time brushing up on UML and Perl. Any suggestions regarding that? Writing a demo vs. learning something new?

    Read the article

  • Managed WMI Event class is not an event class???

    - by galets
    I am using the directions from here: http://msdn.microsoft.com/en-us/library/ms257351(VS.80).aspx to create a managed event class. Here's the code that I wrote:

        [ManagementEntity]
        [InstrumentationClass(InstrumentationType.Event)]
        public class MyEvent
        {
            [ManagementKey]
            public string ID { get; set; }

            [ManagementEnumerator]
            static public IEnumerable<MyEvent> EnumerateInstances()
            {
                var e = new MyEvent() { ID = "9A3C1B7E-8F3E-4C54-8030-B0169DE922C6" };
                return new MyEvent[] { e };
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var thisAssembly = typeof(Program).Assembly;
                var wmi_installer = new AssemblyInstaller(thisAssembly, null);
                wmi_installer.Install(null);
                wmi_installer.Commit(null);
                InstrumentationManager.RegisterAssembly(thisAssembly);

                Console.Write("Press Enter...");
                Console.ReadLine();

                var e = new MyEvent() { ID = "A6144A9E-0667-415B-9903-220652AB7334" };
                Instrumentation.Fire(e);

                Console.Write("Press Enter...");
                Console.ReadLine();

                wmi_installer.Uninstall(null);
            }
        }

    I can run the program, and it installs properly. Using wbemtest.exe I can browse to the event and "show mof":

        [dynamic: ToInstance, provider("WmiTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null")]
        class MyEvent
        {
            [read, key] string ID;
        };

    Notice that the class does not inherit from __ExtrinsicEvent, which is weird... I can also run select * from MyEvent and get the result. Instrumentation.Fire() also returns no error. However, when I try to subscribe to the event using the "Notification Query" option, I get:

        0x80041059
        Number: 0x80041059
        Facility: WMI
        Description: Class is not an event class.

    What am I doing wrong, and is there a correct way to create a managed WMI event?

    Read the article

  • Windows Workflow Foundation 4.0 Designer Rehosting with Custom Activities

    - by Robert
    I have several WF 4.0 workflows that I have created for an application my company is developing. Some of these workflows are simple, and some are very complex (i.e. many steps, several different types of activities, custom activities). For many of these workflows, I have created several custom code activities to support some internal process types. The workflows work great and we have had very few problems when it comes to maintaining them within VS 2010. We now want to move that responsibility off to our business users, so I have created a WPF application to rehost the WF designer (according to the MS samples). My problem is that when I open one of the workflows that contains custom code activities, those activities are represented as red boxes with the error message "Activity could not be loaded because of errors in XAML." I have done research and have found several posts that mention that this is usually a problem with namespacing and referencing. The rehosted designer is in a namespace similar to this:

        Company.Application.Workflow.Designer

    And the custom code activities are contained within a separate custom workflow library, which I have included as a reference in the designer project. The library's namespace is similar to this:

        Company.Application.Workflow.Data.Activities

    As I have mentioned, the library is set as a reference in the designer's project, and I see it being copied to the output when I build the project. I have also included the reference in the XAML of the main designer application. What am I missing?

    Read the article

  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour data points that are outside of the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset:

        ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html
        ## Disease severity as a function of temperature

        # Response variable, disease severity
        diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4)

        # Predictor variable, (Centigrade)
        temperature<-c(2,1,5,5,20,20,23,10,30,25)

        ## For convenience, the data may be formatted into a dataframe
        severity <- as.data.frame(cbind(diseasesev,temperature))

        ## Fit a linear model for the data and summarize the output from function lm()
        severity.lm <- lm(diseasesev~temperature,data=severity)

        jpeg('~/Desktop/test1.jpg')

        # Take a look at the data
        plot(
            diseasesev~temperature,
            data=severity,
            xlab="Temperature",
            ylab="% Disease Severity",
            pch=16,
            pty="s",
            xlim=c(0,30),
            ylim=c(0,30)
        )
        title(main="Graph of % Disease Severity vs Temperature")
        par(new=TRUE) # don't start a new plot

        ## Get datapoints predicted by best fit line and confidence bands
        ## at every 0.01 interval
        xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01))
        pred4plot <- predict(
            lm(diseasesev~temperature),
            xRange,
            level=0.95,
            interval="confidence"
        )

        ## Plot lines derived from best fit line and confidence band datapoints
        matplot(
            xRange,
            pred4plot,
            lty=c(1,2,2), # vector of line types and widths
            type="l",     # type of plot for each column of y
            xlim=c(0,30),
            ylim=c(0,30),
            xlab="",
            ylab=""
        )

    Read the article

  • Should .net comments start with a capital letter and end with a period?

    - by Hamish Grubijan
    Depending on the feedback I get, I might raise this "standard" with my colleagues. This might become a custom StyleCop rule; is there one written already? StyleCop already dictates this for summary, param, and return documentation tags. Do you think it makes sense to demand the same from comments? On a related note: if a comment is already long, then should it be written as a proper sentence? For example (perhaps I tried too hard to illustrate a bad comment):

        //if exception quit

    vs.

        // If an exception occurred, then quit.

    I figured that most of the time, if one bothers to write a comment, then it might as well be informative. Consider these two samples:

        //if exception quit
        if (exc != null)
        {
            Application.Exit(-1);
        }

    and

        // If an exception occurred, then quit.
        if (exc != null)
        {
            Application.Exit(-1);
        }

    Arguably, one does not need a comment at all, but since one is provided, I would think that the second one is better. Please back up your opinion. Do you have a good reference for the art of commenting, particularly as it relates to .NET? Thanks.

    Read the article

  • Compiling my Boost/NTL program with c++ on Linux.

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program that use the NTL library and Boost::Asio to do client/server communication for an integer factorization application, in C++. Both sides consist of several headers and cpp files. Both projects compile fine individually on Windows in Visual Studio. All I did was add the include paths of NTL and Boost to both projects:

        Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0

    Furthermore, for both projects, I added the two library paths in VS:

        Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    And added under Additional dependencies: ntl.lib. As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by the university, I try to compile with the following statement:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp -o mpqs_helper -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static

    Upon doing this, I get a huuuge error, which I posted here. Any idea how to fix this, please?

    Read the article

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time, hence the word "static". This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, or converting one pointer type to another. The general idea I get is that this is unportable and should be avoided. Where I am a little confused is one usage which I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. What cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. Though from what I have been reading, it appears static_cast is better, as the cast can happen at compile time? Though it says to use reinterpret_cast to convert from one pointer type to another?

    Read the article
