Search Results

Search found 18249 results on 730 pages for 'real world haskell'.

Page 601/730

  • Legal issues in Europe: check patents?

    - by Bugz R us
    We live in Europe and are releasing commercial software in multiple countries. Besides the licensing issues (GPL/LGPL/...), we have a question about patents. I know that if you're in the US, before you release software you have to check that you aren't infringing on any patents. I also know most of these patents are usually irrational and form a heavy burden on developers/software engineers. Now, as far as I know, EU rules are a lot more rational, but there has been lobbying to apply the same rules in the EU as well. http://www.nosoftwarepatents.com http://www.stopsoftwarepatents.eu

    So what's the deal, actually? For example, there's mention of a patent on a shopping cart: http://v3.espacenet.com/publicationDetails/biblio?CC=EP&NR=807891&KC=&FT=E Is that true? Is a "shopping cart" patented???

    From http://stackoverflow.com/questions/1396191/what-should-every-developer-know-about-legal-matters (point 4): "Software patent lawsuits are crap shoots. You should not, of course, knowingly violate a software patent. However, there is a small but real chance some company will sue you for violating their patent. This may happen even if you develop your software independently, you never heard of the patent, and the patent covers a technique that is intuitively obvious and almost completely unrelated to your software. There is not a lot you can do to avoid this, given the current USPTO policies, other than buy insurance. The good news is that patent trolls generally sue large companies with lots of money."

    Read the article

  • Webcam video stream processing.

    - by vikramtheone
    Hi guys, I'm working on an image processing project. My final goal is to detect features in a real-time video and then track those features. I will be working with an embedded processor platform, Freescale's i.MX515; it is a 32-bit media processor running Ubuntu 9.04. Right now I'm working on the algorithms to locate the features, so I'm using still images. When I'm satisfied with the results I will have to start using a video stream, and I don't want to use a video file as the source stream, because then I would have to worry about video decoders as well. Instead I would like to plug a USB webcam into the embedded platform (it has USB ports on it), take the frames directly as they are captured, and send them to my application. I will take care to buy a webcam that is supported on Linux (has a device driver).

    But my question is: will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames to a particular memory location whose pointer I can simply pass to my application? (Confused!!!) I hope I could convey my doubts; can anyone guide me on the steps I have to take to achieve all of this easily? Do you foresee any impossibility here? Help!!! Regards, Vikram
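    A minimal sketch of the usual capture path on Linux, assuming the webcam exposes a V4L2 device node such as /dev/video0 (the resolution and pixel format below are example values only):

        /* Hedged sketch: open a V4L2 webcam, query it and request a capture format.
           Error handling is trimmed; frame delivery would use the mmap buffer ioctls. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/videodev2.h>

        int main(void)
        {
            int fd = open("/dev/video0", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            struct v4l2_capability cap;
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
                printf("driver: %s, card: %s\n", cap.driver, cap.card);

            struct v4l2_format fmt;
            memset(&fmt, 0, sizeof fmt);
            fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            fmt.fmt.pix.width       = 640;   /* example values */
            fmt.fmt.pix.height      = 480;
            fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
            fmt.fmt.pix.field       = V4L2_FIELD_ANY;
            if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                perror("VIDIOC_S_FMT");

            /* The driver hands frames back via VIDIOC_REQBUFS / VIDIOC_QBUF /
               VIDIOC_DQBUF with memory-mapped buffers, i.e. the application gets a
               pointer to each captured frame without an extra copy. */

            close(fd);
            return 0;
        }

    Whether the DMA target can be an arbitrary application-chosen address depends on the driver; the portable route is to let the driver allocate the buffers and mmap() them into the application.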

    Read the article

  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test:

    Approach A: create a class that extends TestCase, and start test methods with the word test. When running the class as a JUnit Test (in Eclipse), all methods starting with the word test are automatically run.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and add a @Test annotation to the method. Note that you do NOT have to start the method name with the word test.

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two does not seem to be a good idea (see e.g. this Stack Overflow question). Now, my question(s):

    What is the preferred approach, and when would you use one instead of the other? Approach B allows testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class). But how do you test for exceptions when using approach A? When using approach A, you can group a number of test classes in a test suite:

        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(DummyTestA.class);
        suite.addTestSuite(DummyTestAbis.class);

    But this can't be used with approach B (since addTestSuite() expects each test class to subclass TestCase). What is the proper way to group tests for approach B?
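    A sketch of how both open questions are usually handled (class names other than the ones above are invented for illustration): in approach A, exceptions are checked with a try/fail/catch block, and in JUnit 4 annotation-style tests are grouped with the Suite runner instead of TestSuite.

        // Approach A: expecting an exception without @Test(expected=...).
        // This method would live inside DummyTestA above.
        public void testDivisionByZero() {
            try {
                int ignored = 1 / 0;
                fail("Expected an ArithmeticException");
            } catch (ArithmeticException expected) {
                // test passes
            }
        }

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // Approach B: grouping annotation-style tests without extending TestCase.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({ DummyTestB.class, DummyTestBbis.class })
        public class AllAnnotationTests {
            // intentionally empty; the annotations carry all the information
        }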

    Read the article

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms based .NET app. Typically the service receives 50-100 inbound calls/minute to add or query jobs that are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job moves through these states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflows, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!
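    One pattern that often comes up for this kind of fan-out is to decouple state changes from the duplex callbacks with a bounded in-memory queue, so a burst of jobs cannot pile notifications up faster than they can be delivered. A rough sketch (all names invented, not tied to any particular framework):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // Carries one job state change; the pump below drains these in order.
        public class JobStateChange
        {
            public Guid JobId;
            public string NewState;
        }

        public class NotificationPump
        {
            // Bounded so producers block (or coalesce updates) instead of exhausting memory.
            private readonly BlockingCollection<JobStateChange> _queue =
                new BlockingCollection<JobStateChange>(10000);

            public void Publish(JobStateChange change)
            {
                _queue.Add(change);
            }

            public Task Start(Action<JobStateChange> sendToClients)
            {
                return Task.Run(() =>
                {
                    foreach (var change in _queue.GetConsumingEnumerable())
                        sendToClients(change);   // e.g. fan out over the duplex callbacks
                });
            }
        }

    Dedicated messaging stacks (e.g. MSMQ bindings in WCF, or a pub/sub broker) solve the same problem with delivery guarantees, which is usually the next step once in-process queues aren't enough.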

    Read the article

  • Android ADB has moved and Eclipse is looking in the old place

    - by Peter Nelson
    I did an SDK update last night and it moved adb.exe. In its place it left a file called "adb_has_moved.txt" saying:

        The adb tool has moved to platform-tools/
        If you don't see this directory in your SDK, launch the SDK and AVD Manager
        (execute the android tool) and install "Android SDK Platform-tools"
        Please also update your PATH environment variable to include the
        platform-tools/ directory, so you can execute adb from any location.

    So I did all that, including the PATH, and now I can start adb.exe from any DOS prompt. But I still can't start it from Eclipse (Galileo 3.5.2). When I try, it says "Location of the Android SDK has not been set up in the preferences", which is not true: the SDK IS set up in Preferences. The real problem is at the top of the Preferences window, where it says "Could not find C:\SDK\android-sdk-windows\tools\adb.exe!" No kidding - the update moved it to C:\SDK\android-sdk-windows\platform-tools. Because it's specifying a specific (wrong) path, Eclipse is bypassing the PATH variable. So how do I get Eclipse to look in the right place?

    Read the article

  • What is a good way to assign order #s to ordered rows in a table in Sybase

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer order column:

        col1 varchar(30),
        col2 varchar(30),
        order int NULL

    I also have a way to order the rows by the "colN" columns, e.g.

        SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2

    What's the best way to assign - in SQL - numeric order values 1-N to the order column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle ROWNUM) which assigns order values so that

        SELECT * FROM T ORDER BY order

    returns 100% the same order of rows as the query above. The query does NOT necessarily need to update the table T in place; I'm OK with creating a copy of the table T2 if that'll make the query simpler.

    NOTE 1: A solution must be a real query or a set of queries, not involving a loop or a cursor.
    NOTE 2: Assume that the data is uniquely orderable according to the ORDER BY above - no need to worry about the situation where 2 rows could be assigned the same order at random.
    NOTE 3: I would prefer a generic solution, but if you wish a specific example of an ordering expression, let's say:

        SELECT * FROM T
        ORDER BY CASE WHEN col1 = "" THEN "AAAAAA" ELSE col1 END, ISNULL(col2, "ZZZ")
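    One set-based way this kind of numbering is often done (a sketch only, assuming the ordering expression ranks rows uniquely as in NOTE 2) is to give each row a rank by counting how many rows sort before it:

        -- Hedged sketch in Sybase-style T-SQL, using the NOTE 3 ordering expression;
        -- "order" is a reserved word, so the column may need quoting or renaming.
        UPDATE T
        SET "order" = (SELECT COUNT(*) + 1
                       FROM T t2
                       WHERE CASE WHEN t2.col1 = '' THEN 'AAAAAA' ELSE t2.col1 END
                               < CASE WHEN T.col1 = '' THEN 'AAAAAA' ELSE T.col1 END
                          OR (CASE WHEN t2.col1 = '' THEN 'AAAAAA' ELSE t2.col1 END
                                = CASE WHEN T.col1 = '' THEN 'AAAAAA' ELSE T.col1 END
                              AND ISNULL(t2.col2, 'ZZZ') < ISNULL(T.col2, 'ZZZ')))

    Each row's order value is then 1 plus the number of rows that sort before it, which reproduces the ORDER BY without a cursor.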

    Read the article

  • Linux: modpost does not build anything

    - by waffleman
    I am having problems getting any kernel modules to build on my machine. Whenever I build a module, modpost always says there are zero modules:

        MODPOST 0 modules

    To troubleshoot the problem, I wrote a test module (hello.c):

        #include <linux/module.h>    /* Needed by all modules */
        #include <linux/kernel.h>    /* Needed for KERN_INFO */
        #include <linux/init.h>      /* Needed for the macros */

        static int __init hello_start(void)
        {
            printk(KERN_INFO "Loading hello module...\n");
            printk(KERN_INFO "Hello world\n");
            return 0;
        }

        static void __exit hello_end(void)
        {
            printk(KERN_INFO "Goodbye Mr.\n");
        }

        module_init(hello_start);
        module_exit(hello_end);

    Here is the Makefile for the module:

        obj-m = hello.o
        KVERSION = $(shell uname -r)

        all:
            make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) modules

        clean:
            make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) clean

    When I build it on my machine, I get the following output:

        make -C /lib/modules/2.6.32-27-generic/build M=/home/waffleman/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.32-27-generic'
          CC [M]  /home/waffleman/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 0 modules
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-27-generic'

    When I make the module on another machine, it is successful:

        make -C /lib/modules/2.6.24-27-generic/build M=/home/somedude/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.24-27-generic'
          CC [M]  /home/somedude/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 1 modules
          CC      /home/somedude/tmp/mod-test/hello.mod.o
          LD [M]  /home/somedude/tmp/mod-test/hello.ko
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.24-27-generic'

    I looked for any relevant documentation about modpost, but found little. Does anyone know how modpost decides what to build? Is there an environment variable that I am possibly missing?

    BTW, here is what I am running:

        uname -a
        Linux waffleman-desktop 2.6.32-27-generic #49-Ubuntu SMP Wed Dec 1 23:52:12 UTC 2010 i686 GNU/Linux

    Read the article

  • Swimlane Diagram Softwares with Expand/Collapse Features

    - by louis xie
    I've been searching real hard for software that can fulfill my needs, but to no avail. I have a swimlane diagram which is extremely huge, and almost impossible to model using Visio or any traditional swimlane software. I need to model both the operational process and the interactions within an application and between different applications. Therefore, without wasting additional effort modelling these separately, I am looking for a solution in which I can combine both views. That is, possibly one in which I can expand/collapse/group/ungroup processes and subprocesses. Take a typical credit card process, for instance; a hypothetical description of the swimlane could be as such:

    1. Customer submits application form to the bank.
    2. Bank Officer A receives the application form and validates that it was correctly filled in.
    3. Bank Officer A submits the application form to Bank Officer B for processing.
    4. Bank Officer B checks the credit quality of the customer through Application X.
    5. Application X submits a query to Application Y to retrieve the credit report.
    6. Application X retrieves the credit report and submits it to Application Z for computation of credit scores.
    7. Bank Officer B validates that the customer is creditworthy, and submits the application to Bank Officer C for processing.

    The above is an over-simplified and purely hypothetical credit card request process. What I'm trying to drive at is that each of the above steps has sub-processes, and I want to be able to switch between a "detailed" view and an "aggregated" view - and, if possible, add in the time dependency of the different tasks as well. I haven't been able to find any software that can do this.

    Read the article

  • How to simulate OutOfMemory exception

    - by Gacek
    I need to refactor my project in order to make it immune to OutOfMemory exceptions. There are huge collections used in my project, and by changing one parameter I can make my program more accurate or use less memory... OK, that's the background. What I would like to do is run the routines in a loop:

    1. Run the subroutines with the default parameter.
    2. Catch the OutOfMemory exception, change the parameter and try to run it again.
    3. Repeat step 2 until the parameters allow the subroutines to run without throwing the exception (usually only one change will be needed).

    Now, I would like to test it. I know that I can throw the OutOfMemory exception on my own, but I would like to simulate a real situation. So the main question is: is there a way of setting some kind of memory limit for my program, after reaching which the OutOfMemory exception will be thrown automatically? For example, I would like to set a limit of, let's say, 400 MB of memory for my whole program, to simulate the situation where that is the amount of memory available in the system. Can it be done?
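    For reference, the retry loop described above might look roughly like this (a sketch only; RunSubroutines and the parameter values are invented placeholders):

        int[] candidateSizes = { 1000000, 500000, 100000 };  // most accurate first
        foreach (int size in candidateSizes)
        {
            try
            {
                RunSubroutines(size);   // placeholder for the real work
                break;                  // success - stop falling back
            }
            catch (OutOfMemoryException)
            {
                // fall through and retry with a smaller, less memory-hungry parameter
            }
        }

    One common way to simulate pressure without changing the machine is to pin a large "ballast" allocation (e.g. a list of byte arrays) before running the loop, so only the desired amount of headroom remains.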

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented dynamic DataSource routing for Spring+Hibernate according to this article. I have several databases with the same structure, and I need to select which DB will run each specific query. Everything works fine on localhost, but I am worried about how this will hold up in a real web site environment. The article uses a static context holder to determine which DataSource to use:

        public class CustomerContextHolder {
            private static final ThreadLocal<CustomerType> contextHolder =
                    new ThreadLocal<CustomerType>();

            public static void setCustomerType(CustomerType customerType) {
                Assert.notNull(customerType, "customerType cannot be null");
                contextHolder.set(customerType);
            }

            public static CustomerType getCustomerType() {
                return (CustomerType) contextHolder.get();
            }

            public static void clearCustomerType() {
                contextHolder.remove();
            }
        }

    It is wrapped inside a ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel?

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        // <another user will switch customer type here to CustomerType.SILVER in another request>
        List<Item> goldItems = catalog.getItems();

    Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How can I make sure that nobody else will switch the DataSource until I run the required query for the current user? Thanks.
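    A sketch of the pattern that usually addresses the "leaking between requests" worry: set the value at the start of each request and always clear it in a finally block, for example from a servlet Filter (the filter class and resolveType() below are invented names, not part of the article):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        public class CustomerTypeFilter implements Filter {
            public void init(FilterConfig config) { }
            public void destroy() { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                try {
                    // Each request is served by one thread, so this value is private to it.
                    CustomerContextHolder.setCustomerType(resolveType((HttpServletRequest) req));
                    chain.doFilter(req, res);   // the routing DataSource reads the same thread-local
                } finally {
                    // Clear it so the pooled thread does not carry the value into the next request.
                    CustomerContextHolder.clearCustomerType();
                }
            }

            private CustomerType resolveType(HttpServletRequest req) {
                return CustomerType.GOLD;   // placeholder decision logic
            }
        }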

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile.

    So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach - I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge:

        CFDataRef CopyImagePixels(CGImageRef inImage) {
            return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
        }

        CGImageRef img = originalImage.CGImage;
        CFDataRef dataref = CopyImagePixels(img);
        UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
        int length = CFDataGetLength(dataref);

        for (int index = 0; index < length; index += 4) {
            for (int i = 0; i < 3; i++) {
                if (data[index + i] + value > 255) {
                    data[index + i] = 255;
                } else {
                    data[index + i] += value;
                }
            }
        }

        size_t width = CGImageGetWidth(img);
        size_t height = CGImageGetHeight(img);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
        size_t bytesPerRow = CGImageGetBytesPerRow(img);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(img);
        NSLog(@"bitmapinfo: %d", bitmapInfo);

        CFDataRef newData = CFDataCreate(NULL, data, length);
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
        CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                          bytesPerRow, colorspace, bitmapInfo, provider,
                                          NULL, true, kCGRenderingIntentDefault);
        [iv setImage:[UIImage imageWithCGImage:newImg]];
        CGImageRelease(newImg);
        CGDataProviderRelease(provider);

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a Ternary Search Tree for a while as the data structure behind an auto-complete drop-down combo box. That means that when the user types "fo", the drop-down combo box will display

        foo
        food
        football

    The problem is that my current use of the Ternary Search Tree is case-sensitive. My implementation is as follows; it has been used in the real world for around a year, hence I consider it quite reliable: My Ternary Search Tree code

    However, I am looking for a case-insensitive Ternary Search Tree, which means that when I type "fo", the drop-down combo box will show me

        foO
        Food
        fooTBall

    Here are some key interface methods for the TST, where I hope the new case-insensitive TST may have a similar interface too:

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows; TSTSearchEngine is using TernarySearchTree as the core backbone: Example usage of Ternary Search Tree

        // There is a stock named microsoft and MICROChip inside the stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROCHIP. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");
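    One common way to get case-insensitive behaviour without touching the tree itself is to normalise the key on the way in and out. A sketch of a thin wrapper (the wrapper name is invented; the underlying tree stands in for the implementation linked above):

        import java.util.Locale;

        public class CaseInsensitiveTST<E> {
            private final TernarySearchTree<E> tree = new TernarySearchTree<E>();

            public void put(String key, E value) {
                tree.put(normalize(key), value);
            }

            public E get(String key) {
                return tree.get(normalize(key));
            }

            private String normalize(String key) {
                return key.toLowerCase(Locale.ENGLISH);
            }
        }

    The stored values (e.g. the Stock objects) keep their original spelling, so the drop-down can still display "MICROChip" even though it was found via the lowercased key; prefix searches such as searchAll("micro") need the same normalisation applied to the query string.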

    Read the article

  • Questions on method to use: Facebook Business Page with Flash or Facebook Application?

    - by Jay
    Hi there, I'm new to Facebook's Graph API / FBML / etc., so if at any point in my post I make a mistake or a wrong assumption, please point it out. One of the projects that I am working on needs to get data/info from an existing web site, in addition to the friends list and such that FB can provide. This is for a platform that allows people from all over the world to help out in small-scale projects that will benefit various communities. These projects are usually to help people in unfortunate or less-than-ideal situations/living conditions so that their environment improves. You can read about it in detail at http://www.getitdone.org.

    The initial idea was to develop a Business Page (this is the same as a Fan Page, right?) for each project, with a Static FBML tab to do the displaying. However, iframes are not allowed in Pages (as far as I know, iframes are only allowed in an FB Application), so it is no longer possible to get data from the web site to be displayed in the FB Page. So one of the options is to embed a Flash in the Page's tab. I am fairly certain that the Flash can retrieve data on the user's friends and connections in an Application's context (because of all of those darn FB games that I'm addicted to :p). However, I would like to confirm whether it can do the same if it's embedded in a Business Page's tab. Could someone please confirm this?

    The other option we arrived at (if the earlier option fails) is to have an Application built instead. However, this means that we would need to create an Application for each project - not an ideal situation. Is there any other option that we have missed that could help us achieve the desired result with as little hassle as possible? Any help that you can provide on this matter is most welcome. Thank you.

    Read the article

  • Dynamic Control loading at wrong time?

    - by Telos
    This one is a little... odd. Basically I have a form I'm building using ASP.NET Dynamic Data, which is going to use several custom field templates. I've just added another field to the FormView, with its own custom template, and the form is loading that control twice for no apparent reason. Worse yet, the first time it loads the template, the Row is not ready yet and I get the error message:

        Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.

    I'm accessing the Row variable in a LinqDataSource OnSelected event in order to get the child object... Now for the weird part: if I reorder the fields a little, the one causing the problem no longer gets loaded twice. Any thoughts?

    EDIT: I've noticed that Page_Load gets called on the first load (when Row throws an exception if you try to use it) but does NOT get called the second time around, if that helps any... Right now I'm managing it by just catching and ignoring the exception, but I'm still a little worried that things will break if I don't find the real cause.

    EDIT 2: I've traced the problem to using FindControl recursively to find other controls on the page. Apparently FindControl can cause the page lifecycle events (at least up to Page_Load) to fire... and this occurs before the page "should" be loading, so its Dynamic Data "stuff" isn't ready yet.

    Read the article

  • How should I launch a Portable Python Tkinter application on Windows without ugliness?

    - by Andrew
    I've written a simple GUI program in Python using Tkinter. Let's call this program 'gui.py'. My users run 'gui.py' on Windows machines from a USB key using Portable Python; installing anything on the host machine is undesirable. I'd like my users to run 'gui.py' by double-clicking an icon at the root of the USB key. My users don't care what Python is, and they don't want to use a command prompt if they don't have to. I don't want them to have to care what drive letter the USB key is assigned. I'd like this to work on XP, Vista, and 7.

    My first ugly solution was to create a shortcut in the root directory of the USB key and set the "Target" property of the shortcut to something like "(root)\App\pythonw.exe (root)\App\gui.py", but I couldn't figure out how to do a relative path in a Windows shortcut, and using an absolute path like "E:" seems fragile. My next solution was to create a .bat script in the root directory of the USB key, something like this:

        @echo off
        set basepath=%~dp0
        "%basepath%App\pythonw.exe" "%basepath%\App\gui.py"

    This doesn't seem to care what drive letter the USB key is assigned, but it does leave a DOS window open while my program runs. Functional, but ugly. Next I tried a .bat script like this:

        @echo off
        set basepath=%~dp0
        start "" "%basepath%App\pythonw.exe" "%basepath%\App\gui.py"

    (See here for an explanation of the funny quoting.) Now the DOS window briefly flashes on screen before my GUI opens. Less ugly! Still ugly. How do real men deal with this problem? What's the least ugly way to start a Python Tkinter GUI on a Windows machine from a USB stick?

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the web server, my current solution is to use their inode numbers, as those are already generated for me, ready to use and guaranteed to be unique:

        function short_name($file) {
            $ino = @fileinode($file);
            $s = base_convert($ino, 10, 36);
            return $s;
        }

    This seems to work. The question is: what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode.

    Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile".

        function short_name($file) {
            $ino = @fileinode($file);
            // arbitrarily selected pre-existing file,
            // as all newer files will have higher inodes
            $ino = $ino - @fileinode($referencefile);
            $s = base_convert($ino, 10, 36);
            return $s;
        }
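    For a feel of how the offset trick affects length: base 36 packs roughly 5.17 bits per character, so every extra character multiplies the usable range by 36. A throwaway check (the offsets are made-up numbers):

        <?php
        // Purely illustrative: how long the base-36 short name is for a given
        // inode offset from the reference file.
        foreach (array(100, 50000, 2000000) as $offset) {
            echo $offset, ' -> ', base_convert($offset, 10, 36), "\n";
        }
        // Prints: 100 -> 2s, 50000 -> 12kw, 2000000 -> 16v7k

    The flip side is that uniqueness now depends on inodes never being reused: a deleted file's inode number can be handed out to a new file later, which is where a collision could sneak in.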

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a failover cluster with our current R805 that is currently in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple of thousand dollars on another iSCSI SAN server just for testing.

    I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around) and with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on an external spare computer, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something we can use to test how to set up the clustering properly outside of our production environment. Any advice is appreciated.

    FYI - I have read Jose Barreto's blog post on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • Type result with Ternary operator in C#

    - by Vaccano
    I am trying to use the ternary operator, but I am getting hung up on the type it thinks the result should be. Below is an example that I have contrived to show the issue I am having:

        class Program
        {
            public static void OutputDateTime(DateTime? datetime)
            {
                Console.WriteLine(datetime);
            }

            public static bool IsDateTimeHappy(DateTime datetime)
            {
                if (DateTime.Compare(datetime, DateTime.Parse("1/1")) == 0)
                    return true;
                return false;
            }

            static void Main(string[] args)
            {
                DateTime myDateTime = DateTime.Now;
                // This line has the compile issue:
                OutputDateTime(IsDateTimeHappy(myDateTime) ? null : myDateTime);
                Console.ReadLine();
            }
        }

    On the line indicated above, I get the following compile error:

        Type of conditional expression cannot be determined because there is no
        implicit conversion between '<null>' and 'System.DateTime'

    I am confused because the parameter is a nullable type (DateTime?). Why does it need to convert at all? If it is null then use that; if it is a DateTime then use that. I was under the impression that

        condition ? first_expression : second_expression;

    was the same as:

        if (condition)
            first_expression;
        else
            second_expression;

    Clearly this is not the case. What is the reasoning behind this? (NOTE: I know that if I make "myDateTime" a nullable DateTime then it will work. But why does it need that? As I stated earlier, this is a contrived example. In my real example "myDateTime" is a data-mapped value that cannot be made nullable.)
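    For what it's worth, the conditional expression has to have a single type of its own, determined only from its two arms (the parameter it is being passed to is not considered), so the usual workaround is to give one arm the nullable type explicitly - a quick sketch against the code above:

        // The cast makes one arm DateTime?, so the other arm converts to DateTime? too:
        OutputDateTime(IsDateTimeHappy(myDateTime) ? (DateTime?)null : myDateTime);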

    Read the article

  • Implementing Excel 2003 COM Add-in UDF in Asyc Programming model using C#(VS 2005)

    - by Venu
    Hi: I am trying to implement a UDF using an Excel COM Add-in (2003) with Visual Studio 2005 in C#. I would like to implement the UDF using an async programming model. The UDF is a slow operation, as its results are fetched from a server. As an illustration (not a real-world example), the following UDF works fine without any issue:

        public double mul(double number1, double number2)
        {
            return number1 * number2;
        }

    How can I implement the same functionality in an async way? For example, I would like the UDF to return immediately, and later, when the results are available from the server, I would like to update the desired cells.

        // This method returns immediately.
        public object mul(double number1, double number2)
        {
            return "calculating";
        }

        // This method of a worker thread will update the results.
        public void OnResultsAvailable(object result)
        {
            // Question: how should I update the cells that triggered the calculations above?
        }

    Constraints: I cannot use Excel RTD, as I have to work with an existing codebase written using the Excel C# COM Add-in. Thanks for the help. -Venu

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario: I have a range of Windows and Linux machines. Some machines are EBS-backed, whereas others are S3-backed. I need to be able to persist a machine (put it to sleep), preferably keeping all the settings that were active when the machine was running. I need to be able to quickly wake a machine from sleep (ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon).

    Here's the stuff that confuses me: AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones. I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU). S3 backing seems to take a long time either to write a machine to disk or to recover (turn a machine on). Also, I can't immediately tell which machines are EBS-backed and which are S3-backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS- or S3-backed. Advice?
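    On the "which is which" point, the distinction is exposed as the instance's root device type ("ebs" vs. "instance-store"); a quick way to list it with the current AWS CLI looks roughly like this (profile/region flags omitted):

        # Lists every instance with its root device type; "ebs" instances can be
        # stopped and started, "instance-store" (S3-backed) ones can only be terminated.
        aws ec2 describe-instances \
            --query "Reservations[].Instances[].{Id:InstanceId,Root:RootDeviceType,State:State.Name}" \
            --output table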

    Read the article

  • Linq is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, which means joining with the relationship_memberships table. I have the following method in my business object:

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var results = from r in _context.Repository<DBMappings.relationships>()
                          join m in _context.Repository<DBMappings.relationship_memberships>()
                              on r.rel_id equals m.rel_id
                          where m.obj_id == objId
                          select r;

            return results.ToList<DBMappings.relationships>();
        }

    _context is my generic repository, using code based on the code outlined here. The problem is I have 3 records in the relationships table and 3 records in the memberships table, each membership tied to a different relationship. Two membership records have an obj_id value of 2 and the other has 3. I am trying to retrieve a list of all relationships related to object #2. When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues:

    1) The resulting list contains 6 records, all of type DBMappings.relationships. Upon further inspection, there are 2 for each real relationship record, and both are exact copies of each other.
    2) All relationships are returned, even if m.obj_id == 3, even though the objId variable is correctly passed in as 2.

    Can anyone see what's going on? I've spent 2 days looking at this code and I am unable to understand what is wrong. I have joins in other LINQ queries that seem to be working great, and my unit tests show that they are still working, so I must be doing something wrong with this. It seems like I need an extra pair of eyes on this one :)
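    When a join misbehaves like this, one way to narrow it down is to express the same intent without the join: filter the memberships first, then pull only the matching relationships. A sketch against the same (assumed) repository API:

        // First collect the rel_ids that objId belongs to...
        var relIds = _context.Repository<DBMappings.relationship_memberships>()
                             .Where(m => m.obj_id == objId)
                             .Select(m => m.rel_id)
                             .ToList();

        // ...then fetch just those relationships.
        var results = _context.Repository<DBMappings.relationships>()
                              .Where(r => relIds.Contains(r.rel_id))
                              .ToList();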

    Read the article

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping the data straight and efficient is getting difficult using just Matlab.

    Is there a technique for subsetting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m); is there a way to select only the points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in Matlab and I'm unaware of any GIS plugins for Matlab.

    I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help, or if calculating every distance to a random point is what a spatial database is going to do anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30 m range in that. I would do the same for both arrays, X and Y, and then have a third cross-link table that would select the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
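    For the sizes mentioned, a brute-force vectorized range query in Matlab is often fast enough to try first; a minimal sketch, assuming trees is an N-by-2 matrix of [x y] coordinates and pos is the robot's current [x y]:

        % squared distance from the robot to every tree (no loop needed)
        d2 = (trees(:,1) - pos(1)).^2 + (trees(:,2) - pos(2)).^2;

        % logical index of everything within 30 m, and the corresponding subset
        inRange = d2 <= 30^2;
        nearbyTrees = trees(inRange, :);

    If this ever becomes the bottleneck as the map grows, the usual next step is a spatial index such as a k-d tree, or a fixed grid of cells keyed by floor(x/30) and floor(y/30) so only neighbouring cells need to be checked.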

    Read the article

  • Use different configurations with Simple Injection

    - by Ruben.Canton
    I'm using the library "Simple Injector" (http://simpleinjector.codeplex.com) and it looks cool and nice. But after building a configuration and using it, I now want to know how to change from one configuration to another.

    Scenario: Let's imagine I've set up a configuration in Global.asax, and I have the public, global Container there. Now I want to write some tests and I want them to use mock classes, so I want to change the configuration. I can, of course, build another configuration and assign it to the global Container created by default, so that every time I run a test the alternative configuration will be set. But in doing that, even though I'm in a development context, the Container is changed for everyone, even for normal requests. I know I'm testing in this context and that shouldn't matter, but I have the feeling that this is not the way to do it... and I wonder how to change from one configuration to another in the correct way.

    Note: The Simple Injector documentation says you can ask questions on Stack Overflow, so that's why I'm here. =P PS: I'm new to this IoC and DI world, so try to be easy with me when explaining it :)
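    One arrangement that avoids touching the global Container at all is to give every test its own container with the test doubles registered there; a sketch (the service and fake class names are invented for illustration, using an MSTest-style test method):

        [TestMethod]
        public void PlacingAnOrder_StoresItInTheRepository()
        {
            // Arrange: a private container just for this test.
            var container = new SimpleInjector.Container();
            container.Register<IOrderRepository, FakeOrderRepository>();
            container.Register<OrderService>();

            var service = container.GetInstance<OrderService>();

            // Act
            service.Place(new Order());

            // Assert against the fake, e.g. that it now holds one order.
        }

    The web application keeps its own composition in Global.asax and the tests never see it; only a test that really needs the production wiring would build that configuration explicitly.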

    Read the article

  • Node-webkit works on Mac, crashes and can't load module on Windows

    - by user756201
    I've created a full node-webkit app that works fine on the Mac OS X version of node-webkit. Everything works, it loads a key external Node.js module (marked), and the world is good. However, when I try to run the app on the Windows version of node-webkit as described in the wiki, the app crashes immediately (in fact, it crashes immediately with all the options I try: dragging a folder onto nw.exe, dragging an app.nw compressed folder, and running both from the command line). The only thing that gets me closer is opening nw.exe and then pointing the node-webkit location bar to the index file. Then I get this error:

        Uncaught node.js Error
        Error: Cannot find module 'marked'

    I tried commenting out the code that requires marked:

        var marked = require('marked');

    That returns the app to crashing immediately. I assumed it was because of context issues between Node.js and the node-webkit browser, but those seem not to be at fault, since I tried this suggestion to make sure it finds the correct file for the marked module... and went right back to immediate crashing. I'm out of ideas, because the crashes don't seem to leave me any way of knowing what the error was.

    Read the article
