Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • ReflectionTypeLoadException when I try to run Enable-Migrations with Entity Framework 5.0

    - by Eric Anastas
    I'm trying to use Entity Framework for the first time on one of my projects, using the code-first workflow to automatically create my database. Initially, setting up the database worked fine. Now I'm trying to migrate changes in my classes into the database. The tutorial I'm reading says I need to run "Enable-Migrations" in the Package Manager Console, yet when I do this I get the following error:

        PM> Enable-Migrations
        System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
           at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
           at System.Reflection.RuntimeModule.GetTypes()
           at System.Reflection.Assembly.GetTypes()
           at System.Data.Entity.Migrations.Design.ToolingFacade.BaseRunner.FindType[TBase](String typeName, Func`2 filter, Func`2 noType, Func`3 multipleTypes, Func`3 noTypeWithName, Func`3 multipleTypesWithName)
           at System.Data.Entity.Migrations.Design.ToolingFacade.GetContextTypeRunner.RunCore()
           at System.Data.Entity.Migrations.Design.ToolingFacade.BaseRunner.Run()
        Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

    What am I doing wrong, and how do I retrieve the LoaderExceptions property? Also, NuGet says I have EF 5.0, but the Version property of the EntityFramework item in my project references says 4.4.0.0; I'm not sure if this is related.
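    A minimal sketch of one way to surface LoaderExceptions: repeat the Assembly.GetTypes() call the migrations tooling makes and catch the exception yourself. The assembly name "MyProject" is a placeholder for the project containing the DbContext.

        using System;
        using System.Reflection;

        static class LoaderExceptionProbe
        {
            static void Main()
            {
                try
                {
                    // The same call that fails inside the tooling.
                    Assembly.Load("MyProject").GetTypes();
                }
                catch (ReflectionTypeLoadException ex)
                {
                    // Each entry usually names the type or assembly that could not load.
                    foreach (Exception loaderEx in ex.LoaderExceptions)
                        Console.WriteLine(loaderEx.Message);
                }
            }
        }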


  • timeout for a function that waits indefinitely (like listen())

    - by Fantastic Fourier
    I'm not quite sure if it's possible to do what I'm about to ask, so I thought I'd ask. I have a multi-threaded program where threads share a memory block to communicate necessary information. One piece of that information signals termination: threads constantly check this value, and when it changes they know it's time for pthread_exit(). One of the threads contains the listen() function, and it seems to wait indefinitely. This is problematic if nobody wants to make a connection and the thread needs to exit: it can't check whether it should terminate, since it's stuck waiting and can't move beyond that point.

        while (1) {
            listen();
            ...
            if (value == 1)
                pthread_exit(NULL);
        }

    My logic is something like that, if it helps illustrate my point. What I thought would solve the problem is to let the wait last for a limited duration and, if nothing happens, move on to the next statement. Unfortunately, neither of listen()'s two arguments involves a time limit. I'm not even sure I'm going about multi-threaded programming the right way; I'm not very experienced. So is this a good approach? Perhaps there is a better way to go about it? Thanks for any insightful comments.
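    One detail that may clarify the sketch above: listen() only marks the socket as passive and returns immediately; the call that actually blocks is accept(). A common pattern is to wait with select() and a timeout before accepting, so the loop periodically regains control and can check the termination flag. A minimal sketch, assuming sockfd is already bound and listening:

        #include <sys/select.h>
        #include <sys/socket.h>

        int wait_for_client(int sockfd, volatile int *terminate)
        {
            while (!*terminate) {
                fd_set readfds;
                struct timeval tv = { 1, 0 };   /* recheck the flag every second */

                FD_ZERO(&readfds);
                FD_SET(sockfd, &readfds);
                if (select(sockfd + 1, &readfds, NULL, NULL, &tv) > 0)
                    return accept(sockfd, NULL, NULL);  /* a connection is pending */
            }
            return -1;  /* told to exit; the caller can run pthread_exit() */
        }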


  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a CMS plus other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution). Would I create one huge central application that contains the whole database and communicates through a web service with the "standalone - integrated" modules? Or would I create them separately, with the only job of the "central" application being to sync the information (e.g. the CMS and another solution can share the same tables, such as clients or employees)? Or do you have another idea? (I know I'm a little vague, but I can't give a lot of details because of my work contract.) If someone has all the packages, the central application should integrate all the modules in one place! And if someone has more than one module, it should combine them on the website. What I thought best is that the central location contains only the users and their rights (e.g. cms - all rights, ...), and the information gets synced with every change (module cms adds a new client - store it locally and send the data to the central location; central location - send to modules = clients table updated everywhere). This way, if someone only "bought" a module, it can easily sync through the complete architecture. I hope I made myself clear!


  • SQL Server 2008 R2

    - by kevchadders
    I heard on the grapevine that Microsoft will be releasing SQL Server 2008 R2 within a year. Though I initially thought this was a patch for the just-released 2008 version, I realised that it's actually a completely different version that you would have to pay for. (Am I correct that if you had SQL Server 2008, you would have to pay again to upgrade to 2008 R2?) If you're already running SQL Server 2008, would you say it's still worth the upgrade? Or does it depend on the size of your company and your current setup? From what I've read so far, I get the impression that this version is most useful for very high-end hardware setups where you want very good scalability. With regard to programming, are there any enhancements in it which you're aware of that will significantly help .NET product/web development? I found a couple of links on it, but I was wondering if anyone had more information to share, as I couldn't find anything on SO about it. Thanks.

    New SQL Server R2 Microsoft link on it: Microsoft SQL 2008 R2

    EDIT: More information on the Express edition. One very interesting thing about SQL Server 2008 R2 concerns the Express edition. Previous versions of SQL Server Express had a database size limit of 4GB. With SQL Server Express 2008 R2, this has been increased to 10GB! This makes the free Express edition a much more viable option for small and medium-sized applications that are relatively light on database requirements. Bear in mind that this limit is per database, so if you coded your application cleverly enough to use a separate database for historical/archived data, you could squeeze even more out of it. For more information, see here: http://blogs.msdn.com/sqlexpress/archive/2010/04/21/database-size-limit-increased-to-10gb-in-sql-server-2008-r2-express.aspx


  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE FROM tempTable)
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as each user's organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to handle millions and millions of rows should cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
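    For reference, a set-based sketch of the same update written as a join rather than a correlated subquery; in SQL Server this shape often performs much better on bulk updates. Table and column names are taken from the question.

        -- Updates every matching row in one pass instead of evaluating the
        -- subquery once per row.
        UPDATE t
        SET    t.user_id = u.user_id
        FROM   tempTable t
        INNER JOIN myUsers u ON u.external_id = t.external_id;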


  • Managing StringBuilder Resources

    - by Jim Fell
    My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5MB. Obviously, I do not want to copy such a large buffer to a larger buffer space every time it fills. By that point there is so much data in the buffer anyway that removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    EDIT: Additional information: in this application, new data is received very rapidly (on the order of milliseconds) through a serial connection. I don't want to populate the multiline textbox with this new information so frequently, because that kills the performance of the application, so I'm saving it to a StringBuilder. Every so often, the application copies the contents of the StringBuilder to the textbox and wipes out the StringBuilder contents.


  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that:

    1. The data can be created and edited on demand.
    2. The data is always backed up and is simple to restore if the master copy becomes unusable.
    3. All sensitive personal information residing in this database is guaranteed to be available only to authorized users.

    This database won't be online for the first 6 months; it will go online only after a website is built on top of it. I am not a DBA, and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?


  • Reducing dimensionality for SVM in a sensor network

    - by iinception
    I am looking for some suggestions on a problem that I am currently facing. I have a set of sensors, say S1-S100, which are triggered when some event E1-E20 is performed. Assume that normally E1 triggers S1-S20, E2 triggers S15-S30, E3 triggers S20-S50, etc., and that E1-E20 are completely independent events. Occasionally an event E might trigger some other unrelated sensor. I am using an ensemble of 20 SVMs to analyze each event separately. My features are the sensor frequencies F1-F100, the number of times each sensor is triggered, and a few other related features. I am looking for a technique that can reduce the dimensionality of the sensor features (F1-F100), or some technique that encompasses all of the sensors and reduces the dimension as well (I have been looking at information-theory concepts for the last few days). I don't think averaging or taking the maximum is a good idea, as I risk losing information (it did not give me good results). Can somebody please suggest what I am missing here? A paper or some starting idea... Thanks in advance.
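    One concrete starting point for the dimensionality-reduction part is principal component analysis, which projects the correlated sensor frequencies onto a few uncorrelated components instead of averaging them away. A minimal sketch with scikit-learn, where the matrix X is assumed to hold one row per event instance and one column per sensor frequency F1-F100:

        from sklearn.decomposition import PCA

        pca = PCA(n_components=0.95)       # keep enough components for 95% of the variance
        X_reduced = pca.fit_transform(X)   # X: shape (n_samples, 100)
        # X_reduced feeds the per-event SVMs in place of the raw F1-F100 features.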


  • How to efficiently implement a strategy pattern with Spring?

    - by Anth0
    I have a web application developed in J2EE 1.5 with the Spring framework. The application contains "dashboards", which are simple pages where a bunch of information is grouped together and where the user can modify some statuses. Managers want me to add database logging to three of these dashboards. Each dashboard has different information, but the log should always record the date and the user's login. What I'd like to do is implement the strategy pattern, something like this:

        interface DashboardLog {
            void createLog(String login, Date now);
        }

        // Implementation for one dashboard
        class PrintDashboardLog implements DashboardLog {
            Integer docId;
            String status;

            public void createLog(String login, Date now) {
                // Some code
            }
        }

        class DashboardsManager {
            DashboardLog logger;
            String login;
            Date now;

            void createLog() {
                logger.createLog(login, now);
            }
        }

        class UpdateDocAction {
            DashboardsManager dbManager;

            void updateSomeField() {
                // Some action
                // Now it's time to log
                dbManager.setLogger(new PrintDashboardLog(docId, status));
                dbManager.createLog();
            }
        }

    Is it "correct" (good practice, performance, ...) to do it this way? Is there a better way? Note: I did not write basic stuff like constructors and getters/setters.


  • Pre-Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, meaning I work on the underlying code libraries first; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic, I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like so:

    - Implement binary file reader
    - Implement binary file writer
    - Create object to encapsulate data for expression to the caller

    Now, any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that expands out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively: by the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?


  • Guidance required: first time working with a real high-end database (size = 50GB)

    - by claws
    I've been given a project to design a database. This is going to be my first large-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing, and I'm going to use the MySQL DBMS. I've never worked with a database bigger than 200MB before, so speed and performance were never a concern, though I did follow things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects of the design. I was browsing the archives and found:

    http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
    http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?


  • Indexing/performance strategies for vast amounts of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process for OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R", and "N" (VARCHAR(1)). The table has somewhere around ~75M rows, and the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question.

    Now the question itself: the indexing of the data is done via a procedure. Inside this procedure there are some loops that do the following:

        [...] SELECT * FROM table WHERE the_key = 'W'; [...]

    The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at an acceptable speed; only the SELECTs take very long. Do I need to:

    1. Create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    2. Tune some of the server parameters? (They are already tuned, and the results they deliver seem good. If needed, I can post them.)
    3. Live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?
    4. Or is there an alternative to the above points (except rewriting it or not using it)?
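    For what it's worth, the "special" kind of index with this shape in PostgreSQL (which OpenStreetMap imports typically use) is a partial index; a sketch with placeholder names follows. One caveat: when a single value covers more than half the table, as "W" does here, the planner may still prefer a sequential scan, so this only pays off when the query also filters on other columns.

        -- Only rows with the_key = 'W' are stored, so the index stays smaller and
        -- queries filtering on the_key = 'W' AND some_other_column can use it.
        CREATE INDEX idx_w_rows ON the_table (some_other_column)
        WHERE the_key = 'W';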


  • XML Doc to JSP to TIFF

    - by SPD
    We have around 100 Word templates. Every time a user gets a business request, he or she goes into a shared folder, selects the desired template, enters the information, and saves it as a TIFF; these TIFFs are later processed by a batch program. I am trying to automate this process, so I defined an XML format that holds the template information:

        <Template id="1">
          <Section id="1">
            <fieldName id="1">Date</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
          <Section id="2">
            <fieldName id="2">Claim#</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
        </Template>

    Based on the template values, I generate the JSP on the fly. Now I would like to generate a TIFF file out of it in a specified format. I am not sure how to handle this requirement.

    (Edited the original question.)
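    A minimal sketch of the TIFF-generation half, assuming the field values have already been pulled from the XML: render them onto a BufferedImage and let ImageIO write the TIFF. Built-in TIFF support assumes Java 9 or later; on older JREs an ImageIO plugin (e.g. TwelveMonkeys or JAI) is needed.

        import javax.imageio.ImageIO;
        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;

        public class TiffSketch {
            public static void main(String[] args) throws Exception {
                BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = img.createGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, 800, 600);
                g.setColor(Color.BLACK);
                // Placeholder field values; in practice these come from the XML above.
                g.drawString("Date: 2010-05-01", 50, 50);
                g.drawString("Claim#: 12345", 50, 80);
                g.dispose();
                ImageIO.write(img, "TIFF", new File("out.tiff"));
            }
        }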


  • Check a list of packages to install with apt-get

    - by Joel
    I am writing a post-install script for Ubuntu in Perl (the same script as seen here). One of the steps is to install a list of packages. The problem is that if apt-get install fails in any of many different ways for any one of the packages, the script dies badly. I would like to prevent that from happening. This happens because of the different ways that apt-get install fails for packages it doesn't like. For example, when I try to install a nonsense word (i.e. I typed in the wrong package name):

        $ sudo apt-get install oblihbyvl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package oblihbyvl

    but if instead the package name has been obsoleted (installing handbrake from a PPA):

        $ sudo apt-get install handbrake
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package handbrake is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'handbrake' has no installation candidate

        $ apt-cache search handbrake
        handbrake-cli - versatile DVD ripper and video transcoder - command line
        handbrake-gtk - versatile DVD ripper and video transcoder - GTK GUI

    I have tried parsing the results of apt-cache and apt-get -s install to try to catch all possibilities before doing the install, but I seem to keep finding new ways for failures to reach the actual install system command. My question is: is there some facility, either in Perl (e.g. a module, though I would like to avoid installing modules if possible, as this is supposed to be the first thing run after a new install of Ubuntu) or in apt-* or dpkg, that would let me be sure the packages are all available to be installed before installing, and if not, fail gracefully in some way that lets the user decide what to do?
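    A sketch of one pre-check in plain Perl, under the assumption that apt-cache policy output is stable enough to parse (@packages is the script's package list): a package apt-get can actually install reports a real candidate version, while both failure modes above show either no Candidate line at all or "Candidate: (none)".

        # Returns true only if apt reports an installation candidate for $pkg.
        sub has_candidate {
            my ($pkg) = @_;
            my $out = `apt-cache policy \Q$pkg\E 2>/dev/null`;
            return $out =~ /^\s*Candidate:\s*(?!\(none\))\S/m;
        }

        # Usage: filter the package list before calling apt-get install.
        my @installable = grep { has_candidate($_) } @packages;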


  • Best practices question: how to save a collection of images and a Java object in a single file?

    - by Richard
    I am making a Java program that has a collection of flash-card-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it with a few string/native-data-type parameters. However, I also want each of these objects to have an image (typical formats: JPG, PNG, etc.). I would like to be able to store all of this information, including the images and the tree data, to disk in a single file, so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed. I have not approached a problem like this before, so I am not sure what the best practices are. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native-data-type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible, since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image with each user object, and then gzip together all of the images, with the hash strings as the names, and the XML-encoded tree in the same compressed file. That seemed really contrived, though. Does anyone know a good approach for this type of issue? Thanks!
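    A minimal sketch of that archive idea using a zip file rather than gzip (gzip compresses a single stream, while zip holds named entries, which matches the hash-string naming scheme described above). The FilterOutputStream wrapper is there because XMLEncoder.close() would otherwise close the whole zip stream.

        import java.beans.XMLEncoder;
        import java.io.*;
        import java.util.Map;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class DeckWriter {
            // tree: the XMLEncoder-friendly tree data; images: hash string -> raw image bytes.
            public static void save(File out, Object tree, Map<String, byte[]> images)
                    throws IOException {
                try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(out))) {
                    zip.putNextEntry(new ZipEntry("tree.xml"));
                    XMLEncoder enc = new XMLEncoder(new FilterOutputStream(zip) {
                        @Override public void close() { /* keep the zip stream open */ }
                    });
                    enc.writeObject(tree);
                    enc.close();
                    zip.closeEntry();
                    for (Map.Entry<String, byte[]> e : images.entrySet()) {
                        zip.putNextEntry(new ZipEntry("images/" + e.getKey() + ".png"));
                        zip.write(e.getValue());
                        zip.closeEntry();
                    }
                }
            }
        }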


  • Performing centralized authorization for multiple applications

    - by Vaibhav
    Here's a question that I have been wrestling with for a while. We have a number of applications that we have created, and they have grown organically over a period of time. All of these applications have permissions code built into them that controls access to various parts of the application, depending on whether the currently logged-in user has the necessary permissions or not. Alongside these applications is a utility application which allows an administrator to map users to permissions for all applications. The way it works is that every application has code which reads this external database belonging to the utility application to check whether the currently logged-in user has the necessary permission. Now, the question is this: should the user-permissions mapping information reside in, and be owned by, the applications themselves, or is it okay for this information to reside in an external entity/DB (in this case, the utility application's database)? Part of me thinks that application permissions are very specific to the application context itself, so they shouldn't be separated from the application. But I am not sure. Any comments?


  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. There is no error reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine. Creating CLSID_FilterGraph works fine. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. What DLL is this interface related to, and is there any function I need to call beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.
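    For context, a sketch of the creation sequence in question (DirectShow headers assumed; error handling trimmed). Logging the HRESULT on the failing machine would show whether the failure is, for example, REGDB_E_CLASSNOTREG, which typically means the DirectShow capture components are missing or unregistered on that machine.

        #include <dshow.h>

        ICaptureGraphBuilder2 *pBuilder = NULL;
        HRESULT hr = CoInitialize(NULL);
        if (SUCCEEDED(hr)) {
            // This is the call that fails on the one machine; log hr here.
            hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                  CLSCTX_INPROC_SERVER,
                                  IID_ICaptureGraphBuilder2,
                                  (void **)&pBuilder);
        }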


  • optimistic and pessimistic locks

    - by billmce
    Working on my first PHP/CodeIgniter project, and I've scoured the net for information on locking access to editing data without finding very much. I expect it to be a fairly regular occurrence for two users to attempt to edit the same form simultaneously. My experience (in the stateful world of BBx, filePro, and other RAD apps) is that the data being edited is locked using a pessimistic lock: one user has access to the edit form at a time, and the second user basically has to wait for the first to finish. I understand this can be done using Ajax sending XMLHttpRequests to maintain a 'lock' database. The PHP world, lacking state, seems to prefer optimistic locking. If I understand it correctly, it works like this: both users get access to the data, and each records a 'before changes' version of it. Before saving their changes, the data is retrieved once again and compared with the 'before changes' version. If the two versions are identical, the user's changes are written. If they are different, the user is shown what has changed since he/she started editing and given some mechanism to resolve the differences, or the user is shown a 'Sorry, try again' message. I'm interested in any experience people here have had with implementing both pessimistic and optimistic locking. If there are any libraries, tools, or 'how-to's available, I'd appreciate a link. Thanks
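    A common way to compress the optimistic check into a single statement, sketched below with hypothetical table and column names: keep a version column that is bumped on every save, and treat zero affected rows as "someone else saved first", which avoids the separate re-read-and-compare step.

        -- The edit form carries the version it loaded; the save only succeeds
        -- if nobody has bumped it in the meantime.
        UPDATE records
        SET    data = :data, version = version + 1
        WHERE  id = :id AND version = :loaded_version;
        -- 0 affected rows => conflict: reload, show the differences, retry.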


  • Reading lines of data from text files using Java

    - by razshan
    I have a text file with x number of lines; each line holds an integer. When the user clicks a button, an action is performed via an ActionListener that should list all the values as they appear in the text file. However, right now I have linenum set to 10, meaning I have told the code to work with only 10 lines of the text file. So if my text file has only 3 rows/lines of data, it will list those lines and, for the other 7, spit out "null". I recall there is a way to use an ellipsis to let the program know that you don't know the exact value, with it calculated at the end based on the given information - where my given information will be the number of lines with numbers (data). Below is part of the code:

        private class thehandler implements ActionListener {
            public void actionPerformed(ActionEvent event) {
                BufferedReader inputFile = null;
                try {
                    FileReader freader = new FileReader("Data.txt");
                    inputFile = new BufferedReader(freader);
                    String MAP = "";
                    int linenum = 10;
                    while (linenum > 0) {
                        linenum = linenum - 1;
                        // Read the next line until the specific line is found.
                        MAP = inputFile.readLine();
                        System.out.println(MAP);
                    }
                } catch (Exception y) {
                    y.printStackTrace();
                }
            }
        }
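    For reference, the usual idiom for reading an unknown number of lines is to loop until readLine() returns null (end of file) instead of hard-coding a count - a minimal sketch of the loop above rewritten that way:

        String line;
        while ((line = inputFile.readLine()) != null) {
            System.out.println(line);
        }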


  • PHP class to C# class?

    - by LordSauron
    I work for a company that makes applications in C#. Recently we got a customer asking us to look into rebuilding an application written in PHP. This application receives GPS data from car-mounted boxes and processes it into workable information. The manufacturer of the GPS device provides a PHP class that parses the received information and extracts coordinates. We were looking into rewriting the PHP class as a C# class so we can use and adapt it. And here it comes: on the manufacturer's website there is a single line of text that got my skin crawling: "The encoding format and contents of the transmitted data are subject to constant changes. This is caused by implementations of additional features by new module firmware versions which makes it virtually impossible to document it and for you to properly decode it yourself." So I am now looking for an option to use the "constantly changing" PHP class and access it from C#: something like a shell that only exposes the functions I need. Except I have no idea how to do this. Can anyone help me find a solution for this?


  • Smart way to execute a function in Flash from imported HTML (or XML) plain text

    - by DomingoSL
    OK, here is the thing. I have a very simple Flash program written in AS3; inside it is a dynamic, HTML-capable text field. I also have a database where I will put some information. Each element in the database can be linked to others. For example, the database will contain tourist spots, and they can be related to other spots in the same database:

        Ramdom place 1
        This interesting spot is near <link to="ramdo2">ramdom place2</link>. And so on...

    The information in the database is retrieved by the Flash application, which shows the text in the dynamic text field. I need somehow to tell my Flash application that this has to be a link to another part of the database, so that when the user clicks the link, a function in Flash fetches the new data. The database is a simple MySQL server, and the data is not in there yet; it is waiting for your suggestions on how to format it and develop the solution. I'm a PHP developer too, so I can make a gateway in PHP that reads MySQL and then returns some format to Flash, XML for example.
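    A minimal AS3 sketch of the click-handling side, assuming the PHP gateway rewrites the stored <link to="..."> markup into the anchor form Flash understands: an href beginning with "event:" makes the TextField dispatch TextEvent.LINK instead of navigating, and the handler receives the ID after the colon. infoField and loadPlaceFromDatabase are placeholders.

        import flash.events.TextEvent;

        infoField.htmlText = 'This interesting spot is near ' +
                             '<a href="event:ramdo2">ramdom place2</a>. And so on...';

        infoField.addEventListener(TextEvent.LINK, function(e:TextEvent):void {
            // e.text is the part after "event:", i.e. "ramdo2".
            loadPlaceFromDatabase(e.text);
        });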


  • How to configure an index.htm file in IIS?

    - by salvationishere
    I am running IIS 6.0 on Windows XP, using VS 2008 and SQL Server 2008 (full install). I developed two web apps, both of which I can run from IIS by setting them as the default website. However, I then tried adding an index.htm file. Real simple: all it has is two hyperlinks to these web apps. But now only the first web app works. The first web app is pure VS; the second modifies an AdventureWorks database table. When I click the hyperlink for the second web app, it gives me the error below. This error doesn't make sense to me, because I have the two web apps configured as two virtual directories beneath C:\inetpub\, and the index.htm file is also beneath C:\inetpub. The default website is set to home directory C:\inetpub\ with Document index.htm on top. Also, why does the first web app work and not the second?

        Server Error in '/AddFileToSQL' Application.
        The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.


  • Entity Framework does not map 2 columns from a SqlQuery calling a stored procedure

    - by user1783530
    I'm using Code First and am trying to call a stored procedure and have it map to one of my classes. I created a stored procedure, BOMComponentChild, that returns the details of a component together with information about its hierarchy in PartsPath and MyPath. I have a class for the output of this stored procedure. My issue is that everything except the two columns PartsPath and MyPath is mapped correctly; those two properties end up as Nothing. I searched around, and from my understanding the mapping bypasses any Entity Framework name mapping and matches column name to property name. The names are the same, and I'm not sure why it is only these two columns. The last part of the stored procedure is:

        SELECT t.ParentID
              ,t.ComponentID
              ,c.PartNumber
              ,t.PartsPath
              ,t.MyPath
              ,t.Layer
              ,c.[Description]
              ,loc.LocationID
              ,loc.LocationName
              ,CASE WHEN sup.SupplierID IS NULL THEN 1 ELSE sup.SupplierID END AS SupplierID
              ,CASE WHEN sup.SupplierName IS NULL THEN 'Scinomix' ELSE sup.SupplierName END AS SupplierName
              ,c.Active
              ,c.QA
              ,c.IsAssembly
              ,c.IsPurchasable
              ,c.IsMachined
              ,t.QtyRequired
              ,t.TotalQty
        FROM BuildProducts t
        INNER JOIN [dbo].[BOMComponent] c ON c.ComponentID = t.ComponentID
        LEFT JOIN [dbo].[BOMSupplier] bsup ON bsup.ComponentID = t.ComponentID AND bsup.IsDefault = 1
        LEFT JOIN [dbo].[LookupSupplier] sup ON sup.SupplierID = bsup.SupplierID
        LEFT JOIN [dbo].[LookupLocation] loc ON loc.LocationID = c.LocationID
        WHERE (@IsAssembly IS NULL OR IsAssembly = @IsAssembly)
        ORDER BY t.MyPath

    and the class it maps to is:

        Public Class BOMComponentChild
            Public Property ParentID As Nullable(Of Integer)
            Public Property ComponentID As Integer
            Public Property PartNumber As String
            Public Property MyPath As String
            Public Property PartsPath As String
            Public Property Layer As Integer
            Public Property Description As String
            Public Property LocationID As Integer
            Public Property LocationName As String
            Public Property SupplierID As Integer
            Public Property SupplierName As String
            Public Property Active As Boolean
            Public Property QA As Boolean
            Public Property IsAssembly As Boolean
            Public Property IsPurchasable As Boolean
            Public Property IsMachined As Boolean
            Public Property QtyRequired As Integer
            Public Property TotalQty As Integer
            Public Property Children As IDictionary(Of String, BOMComponentChild) = New Dictionary(Of String, BOMComponentChild)
        End Class

    I am trying to call it like this:

        Me.Database.SqlQuery(Of BOMComponentChild)("EXEC [BOMComponentChild] @ComponentID, @PathPrefix, @IsAssembly", params).ToList()

    When I run the stored procedure in Management Studio, the columns are correct and not null. I just can't figure out why these won't map, as they hold the important information in the stored procedure. The types of PartsPath and MyPath are varchar(50).


  • Creating a network adapter - how hard is it?

    - by Vilx-
    I'm interested in building a little (commercial) device on top of Arduino, and I want it to be able to interface with a network: standard Ethernet, Cat5, RJ-45, etc. I know that there is an Ethernet Shield, but it costs even more than the Arduino itself, and it's pretty big. Naturally, I want my device to be as small and as cheap as possible, so I'm thinking about recreating an Ethernet module myself. The problem is, I haven't got any experience with Ethernet, nor do I have a good idea where to start looking; thus I can't even say whether my ideas are feasible. Ultimately I would like the device to have three ports: one for the incoming signal and two outgoing, so the device is essentially a little switch into which it is itself plugged as well. The switching capabilities need not be very fast, as the volume of data will be low; 10Mbit is more than enough, and it can be even slower. If that is not possible, a single port for controlling the device itself will also do. Another possibility I'm considering is power-line communications: sending information through power lines. That's another area I have no experience with. What hardware should I be looking at, and where can I find information about the necessary software? So, can anyone tell me whether these ideas are feasible, and if yes, where should I start looking?

