Search Results

Search found 21160 results on 847 pages for 'vs 2010'.


  • Entity Framework vs. NHibernate for Performance, Learning Curve and Overall Features

    - by hadi
    I know this has been asked several times and I have read all the posts, but they are all quite old, and considering the advancements in versions and releases since then, I am hoping there might be fresh views. We are building a new application on ASP.NET MVC and need to finalize on an ORM tool. We have never used an ORM before and have pretty much boiled it down to two: NHibernate and Entity Framework. I really need some advice from someone who has used both of these tools and can recommend one based on experience. There are three points that I am focusing on to finalize: performance, learning curve, and overall capability. Your advice will be highly appreciated.

    Read the article

  • Setting the ModelView matrix using rotate, translate, etc. vs. setting a manual matrix

    - by guymic
    When setting the ModelView matrix you normally go through several transformations starting from the identity matrix, for example:

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(270.0f, 0.0f, 0.0f, 1.0f);
        glTranslatef(-rect.size.height / 2, -rect.size.width / 2, 0.0f);

    Instead of doing those operations one after the other (assume there are more than two), wouldn't it be more efficient to simply pre-calculate the resulting matrix and set the ModelView matrix to this manual matrix?
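    For illustration, a minimal sketch of what the precomputed alternative could look like for the exact pair of calls above. This is not from the original post: the combined matrix M = R(270 degrees about Z) * T(tx, ty, 0) is written out by hand, assuming the fixed-function pipeline, that rect is in scope as in the question, and column-major order as glLoadMatrixf() expects.

        /* Translation requested in the original calls. */
        const GLfloat tx = -rect.size.height / 2;
        const GLfloat ty = -rect.size.width / 2;

        /* cos 270 = 0 and sin 270 = -1, so R maps (x, y) to (y, -x) and the
         * combined translation column is R * (tx, ty, 0) = (ty, -tx, 0). */
        const GLfloat m[16] = {
            0.0f, -1.0f, 0.0f, 0.0f,   /* column 0 */
            1.0f,  0.0f, 0.0f, 0.0f,   /* column 1 */
            0.0f,  0.0f, 1.0f, 0.0f,   /* column 2 */
            ty,   -tx,   0.0f, 1.0f    /* column 3 */
        };
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(m);

    Whether this wins anything in practice depends on how often the matrix changes; for a handful of transforms per frame, the driver-side multiplication is rarely the bottleneck.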

    Read the article

  • Entity-Attribute-Value database vs. strict relational model: e-commerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said, the question: what database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products, which can be changed at run time?

    In a good e-commerce database, you will store classes of options (like TV resolution, then have a resolution for each TV; the next product may not be a TV and so won't have "TV resolution"). How do you store them, search efficiently, and allow your users to set up product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each TV product type at run time.

    There is a nice common feature among good e-commerce apps, where they show a set of products and then have "drill-down" side menus where you can see "TV Resolution" as a header along with the top five most common TV resolutions for the found set. You click one and it only shows TVs of that resolution, allowing you to drill down further by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time.

    Further discussion: long story short, are there any links out on the Internet, or model descriptions, that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper into EAV/CR. I love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context; read his response below). The problem is:

      - entities add and remove attributes weekly (search keywords dictate future attributes);
      - new entities arrive weekly (products are assembled from parts);
      - old entities go away weekly (archived, less popular, seasonal).

    The customer wants to add attributes to the products for two reasons:

      - department / keyword search / comparison charts between like products;
      - consumer product configuration before checkout.

    The attributes must have significance, not just serve a keyword search. If customers want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, and then check all the cakes that look interesting, knowing they all have whipped cream frosting. This is not specific to cakes; it is just an example.

    Read the article

  • Authenticating with Netflix: Netflix OAuth vs. SignPost OAuth: Which is correct?

    - by Stefan Kendall
    With Signpost 1.2:

        String authUrl = provider.retrieveRequestToken(consumer, callbackUrl);

    Netflix API response:

        <status>
          <status_code>400</status_code>
          <message>oauth_consumer_key is missing</message>
        </status>

    I see how to craft the URL manually via the Netflix documentation, but this seems to contradict other services which use OAuth authentication. Who's incorrect here? Is there a way to get Signpost to work with Netflix, aside from contributing to the Signpost source? :P
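    For comparison, a minimal sketch of the usual Signpost 1.2 flow. This is an assumption about the cause rather than a confirmed fix: the 400 response suggests the oauth_consumer_key parameter is never being sent, which is what happens when the consumer is created without its key/secret pair. The Netflix endpoint URLs below are assumptions based on Netflix's documentation and should be double-checked.

        import oauth.signpost.OAuthConsumer;
        import oauth.signpost.OAuthProvider;
        import oauth.signpost.basic.DefaultOAuthConsumer;
        import oauth.signpost.basic.DefaultOAuthProvider;

        public class NetflixAuth {
            public static String authUrl(String key, String secret, String callbackUrl)
                    throws Exception {
                // The consumer must carry the key/secret pair; otherwise no
                // oauth_consumer_key parameter is attached to the request.
                OAuthConsumer consumer = new DefaultOAuthConsumer(key, secret);
                // Endpoint URLs are assumptions from Netflix's docs.
                OAuthProvider provider = new DefaultOAuthProvider(
                        "http://api.netflix.com/oauth/request_token",
                        "http://api.netflix.com/oauth/access_token",
                        "https://api-user.netflix.com/oauth/login");
                return provider.retrieveRequestToken(consumer, callbackUrl);
            }
        }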

    Read the article

  • Showing Different fields in EditorForModel vs. DisplayForModel modes in MVC2

    - by CodeGrue
    I have a viewmodel that has an int StateId field and a string StateName field, like so:

        public class DepartmentViewModel : BaseViewModel, IModelWithId
        {
            // only show in edit mode
            public int StateId { get; set; }

            // only show in display mode
            public string StateName { get; set; }
        }

    I have a read-only view that uses DisplayForModel and an update view that uses EditorForModel. I want the DisplayForModel view to show the StateName property, and the EditorForModel view to use the StateId property (I am actually rendering a dropdownlist based on it). I have not been able to figure out how to decorate my viewmodel properties to create this behavior.

    Read the article

  • Storing user info in Session using an Object vs. normal variables

    - by justinl
    I'm in the process of implementing a user authentication system for my website. I'm using an open source library that maintains user information by creating a User object and storing that object inside my PHP SESSION variable. Is this the best way to store and access that information? I find it a bit of a hassle to access the user variables, because I have to fetch the object before I can read them:

        $userObj = $_SESSION['userObject'];
        $userObj->userId;

    instead of just accessing the user id the way I would usually store it:

        $_SESSION['userId'];

    Is there an advantage to storing a bunch of user data as an object instead of storing it as individual SESSION variables?

    PS: The library also seems to store a handful of variables inside the user object (id, username, date joined, email, last user db query), but I really don't care to have all that information stored in my session. I only really want to keep the user id and username.
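    If only the id and username matter, one minimal approach is to copy just those fields into plain session keys right after login. A sketch, not part of the library's API; the property names are assumed from the post:

        <?php
        session_start();

        // Object placed in the session by the auth library.
        $userObj = $_SESSION['userObject'];

        // Copy only the fields we actually need into flat session keys.
        $_SESSION['userId']   = $userObj->userId;     // assumed property name
        $_SESSION['username'] = $userObj->username;   // assumed property name

        // Later requests can then read the scalars directly:
        echo $_SESSION['userId'];
        ?>

    The trade-off is that the object keeps all the user fields together and in sync, while flat keys are shorter to type but can drift from the underlying user record.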

    Read the article

  • Web reference vs. service reference: only one works? Serialization?

    - by phenevo
    Hi, I've got two applications. One uses a web reference to my web service, and the second uses a service reference to the same web service. This is the method I'm invoking:

        [WebMethod]
        public Car[] GetCars(string carCode)
        {
            Car[] cars = ModelToContract.ToCars(MyFacade.GetCars(carCode));
            return cars;
        }

    Car has two properties:

        string Code { get; set; }
        CarType Type { get; set; }

        public enum CarType
        {
            Van = 0,
            Pickup = 1
        }

    I'm debugging this web method, and... at the end the web service returns a good collection of cars, containing one car: code = "bmw", Type = CarType.Van. But... the application with the web reference receives the same collection, while in the application with the service reference the Code field is null...

    Invoking via the service reference:

        MyService myService = new MyService();
        Car[] cars = myService.GetCars(carCode);

    Invoking via the web reference:

        MyService.MyServiceSoapClient client = new MyService.MyServiceSoapClient();
        Car[] cars = client.GetCars(carCode);

    Read the article

  • jQuery.fn.extend({bla: function(){}}) vs. jQuery.fn.bla

    - by tixrus
    OK, I think I get http://stackoverflow.com/questions/1991126/difference-jquery-extend-and-jquery-fn-extend in that the general extend can extend any object, and that fn.extend is for plugin functions that can be invoked straight off the jQuery object with some internal jQuery voodoo. So it appears one would invoke them differently. If you use the general extend to extend object obj by adding function y, then the method attaches to that object, obj.y(); but if you use fn.extend then the functions attach straight to the jQuery object, $.y(). Have I got that correct, and if not, what do I have wrong in my understanding?

    Now my question: the book I am reading advocates the syntax

        jQuery.fn.extend({a: function(){}, b: function(){}});

    but in the docs it says

        jQuery.fn.a = function(){};

    and I guess if you wanted b as well it would be

        jQuery.fn.b = function(){};

    Are these functionally and performance-wise equivalent, and if not, what is the difference? Thank you very much. I am digging jQuery!
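    A minimal sketch of the two forms side by side (the plugin bodies are hypothetical, just to give them shape):

        // One call, several plugin methods at once:
        jQuery.fn.extend({
            a: function () { return this.css('color', 'red'); },
            b: function () { return this.hide(); }
        });

        // ...or each method assigned directly:
        jQuery.fn.a = function () { return this.css('color', 'red'); };
        jQuery.fn.b = function () { return this.hide(); };

        // Either way they are invoked on a wrapped set:
        jQuery('div').a();

    Since fn.extend simply copies the supplied properties onto jQuery.fn, the two forms end up with the same methods on the same object; any performance difference is a one-time copy at definition time, not per call.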

    Read the article

  • Could somebody give me a high-level technical overview of WSGI details behind the scenes vs other we

    - by orokusaki
    Firstly:

      - I understand what WSGI is and how to use it
      - I understand what "other" methods (Apache mod_python, fcgi, et al.) are, and how to use them
      - I understand their practical differences

    What I don't understand is how each of the various "other" methods works behind the scenes, compared to something like uWSGI. Does your server (Nginx, etc.) route the request to your WSGI application, and does uWSGI create a new Python interpreter for each request routed to it? How different is it from the other, more traditional / monkey-patched methods (aside from the different, easier Python interface that WSGI offers)? What light-bulb moment am I missing?
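    For concreteness, this is the whole interface a WSGI container such as uWSGI calls into; the container keeps worker processes (and their interpreters) alive between requests and invokes the callable once per request, rather than booting Python each time. A minimal application:

        # Minimal WSGI application; any WSGI container can host it.
        def application(environ, start_response):
            body = b"Hello, WSGI"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]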

    Read the article

  • MySQL Binary Storage using BLOB vs. OS File System: large files, large quantities, large problems

    - by Quantico773
    Hi guys, the versions I am running (basically the latest of everything):

      - PHP: 5.3.1
      - MySQL: 5.1.41
      - Apache: 2.2.14
      - OS: CentOS (latest)

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search for and locate the correct document(s) effectively; for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

        TABLE: FileUploads
        - FileID
        - CustomerID (the customer the file belongs to; they all have this)
        - JobID/SaleID (the id of the associated job/sale, if any)
        - FileSize
        - FileType
        - UploadedDateTime
        - UploadedBy
        - FilePath (the directory path the file is stored in)
        - FileName (current name of the uploaded file; a combination of CustomerID and JobID/SaleID if applicable)
        - FileDescription
        - OriginalFileName (original name of the source file when uploaded, including extension)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is:

        SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;

    and this outputs all the file details I require; with the FilePath and FileName I can provide the link for download:

        http... server / FilePath / FileName

    There are a number of problems with this method:

      1. Storing files in this "database unconscious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa.
      2. Files are strewn all over the place: different servers, computers, etc.
      3. The file name is the ONLY thing matching the binary to the database and to the customer profile and customer records.

    There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research, I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely.

    The article at dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it alleviates pressure on the server's memory: instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing.

    So if you agree that this method is a viable, stable and robust long-term option for storing moderately large files (1kb to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "File Management" PHP script that gives an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already software out there that does this, please let me know.

    I guess there are many questions I could ask, and all the information is up there ^^ so please discuss all aspects of this and we can pass ideas back and forth and teach each other.

    Cheers, Quantico773
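    To make the chunking idea concrete, here is a minimal sketch of the download side. This is not the article's actual script; the FileChunks table and its column names are assumed for illustration:

        <?php
        // Assumed schema: FileChunks(FileID, ChunkOrder, ChunkData BLOB),
        // with each ChunkData at most 64kb. Validate/escape input properly
        // in real code; the int cast here is the bare minimum.
        $fileId = (int)$_GET['id'];
        $db = new mysqli('localhost', 'user', 'pass', 'documents');

        $meta = $db->query("SELECT OriginalFileName, FileSize
                            FROM FileUploads WHERE FileID = $fileId")->fetch_assoc();

        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . $meta['FileSize']);
        header('Content-Disposition: attachment; filename="' . $meta['OriginalFileName'] . '"');

        // Stream one 64kb chunk at a time so the whole file never sits in RAM.
        $chunks = $db->query("SELECT ChunkData FROM FileChunks
                              WHERE FileID = $fileId ORDER BY ChunkOrder");
        while ($row = $chunks->fetch_assoc()) {
            echo $row['ChunkData'];
            flush();
        }
        ?>

    The upload side is symmetric: read the incoming file 64kb at a time and insert one FileChunks row per read.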

    Read the article

  • VS solution and Mercurial repository layout for a C# project with plugins and external libraries

    - by Joviee
    I'm developing a project in .NET (using C#, to be more specific), with Visual Studio as the IDE and Mercurial for version control.

    I'll be using some third-party libraries:

      - ThirdParty.Foo.dll
      - ThirdParty.Bar.dll
      - ThirdParty.Baz.dll

    And some in-house libraries:

      - Company.A
      - Company.B
      - Company.C
      - Company.D (references third-party libraries)
      - Company.E (references Company.A)

    The project itself will have the following components:

      - Project.Core
      - Project.DataModel (references in-house/third-party libraries)
      - Project.GUI (references Core, DataModel, and in-house/third-party libraries)
      - Project.PluginOne (references Core, DataModel, and in-house/third-party libraries)
      - Project.PluginTwo (references Core, DataModel, and in-house/third-party libraries)
      - ...and an arbitrary number of further plugins

    I'm quite new to Mercurial, so I don't really know the best way to structure my repositories for a project like this, with a lot of interconnected components. The in-house libraries are fairly distinct, so I would say that each of them should have its own repository. However, some of them use functionality provided by others. How should these dependencies be managed? The project plug-ins should be distinct from each other, so I'd imagine that each would have its own repository. How should the dependencies on the in-house/third-party libraries and the rest of the project (Project.DataModel and Project.Core) be managed, with regard to the solution layout and the repository layout?

    So basically, for a project like this, what is the best way of structuring:

      (a) my Visual Studio solutions
      (b) my source control repository/repositories
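    One Mercurial-specific building block worth knowing about here is subrepositories: a main repository can pin each library and plugin repository at a specific revision via an .hgsub file. A sketch, with hypothetical paths and URLs:

        lib/Company.A = https://hg.example.com/Company.A
        lib/Company.D = https://hg.example.com/Company.D
        plugins/Project.PluginOne = https://hg.example.com/Project.PluginOne

    Whether subrepos or a binary-artifact approach (building each library to a DLL and publishing the binaries for consumers to reference) fits better depends on how often the in-house libraries change in step with the project.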

    Read the article

  • Using PHP Gettext Extension vs PHP Arrays in Multilingual Websites?

    - by janoChen
    So far, the only two good things that I've seen about using gettext instead of arrays are that I don't have to create the "greeting" "sub-array" (or whatever it's called), and that I don't have to create a folder for the default language. Are there other pros and cons of using gettext vs. PHP arrays for multilingual websites?

    Using gettext:

    spanish/messages.po:

        #: test.php:3
        msgid "Hello World!"
        msgstr "Hola Mundo"

    index.php:

        <?php echo _("Hello World!"); ?>

    index.php?lang=spanish:

        <?php echo _("Hello World!"); ?>

    turns into "Hola Mundo".

    Using PHP arrays:

    lang.en.php:

        <?php
        $lang = array(
            "greeting" => "Hello World",
        );
        ?>

    lang.es.php:

        <?php
        $lang = array(
            "greeting" => "Hola Mundo",
        );
        ?>

    index.php:

        <?php echo $lang['greeting']; ?>

    "greeting" turns into "Hello World".

    index.php?lang=spanish:

        <?php echo $lang['greeting']; ?>

    "greeting" turns into "Hola Mundo".

    (I first started with gettext, but it wasn't supported on my free shared hosting, Zymic. I didn't want to use Zend_Translate; I found it too complicated for my simple task, so I finally ended up using PHP define(), but later on someone told me I should use arrays.)
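    For context, the glue that usually sits behind index.php?lang=... when using the gettext extension looks something like the sketch below. The paths are assumptions, and note that the extension actually wants a compiled .mo catalogue under a locale/<lang>/LC_MESSAGES/ layout rather than a bare spanish/messages.po:

        <?php
        // Validate $lang against a whitelist in real code.
        $lang = isset($_GET['lang']) ? $_GET['lang'] : 'en_US';

        putenv("LC_ALL=$lang");
        setlocale(LC_ALL, $lang);

        // Looks for locale/<lang>/LC_MESSAGES/messages.mo
        bindtextdomain('messages', __DIR__ . '/locale');
        textdomain('messages');

        echo _("Hello World!");
        ?>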

    Read the article

  • Android Studio gradle-###-bin.zip vs. gradle-###-all.zip

    - by Jeff Brateman
    One developer on my team has some setting in Android Studio that rewrites the distributionUrl entry in gradle/wrapper/gradle-wrapper.properties to use gradle-###-all.zip, while my Android Studio changes it back to gradle-###-bin.zip. Basically, my diff always looks like:

        -distributionUrl=https\://services.gradle.org/distributions/gradle-1.12-all.zip
        +distributionUrl=https\://services.gradle.org/distributions/gradle-1.12-bin.zip

    This is annoying. What setting is it, and how do I change it?

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than that of another known algorithm, but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form:

        There are algorithms A and B that have O(N**2) and O(N) time complexity
        respectively, but B has such a big constant that it has no advantage over
        A for inputs smaller than the number of atoms in the Universe.

    Example highlights from the answers:

      - the Simplex algorithm (worst-case exponential time) vs. known polynomial-time algorithms for convex optimization problems;
      - a naive median-of-medians algorithm (worst-case O(N**2)) vs. a known O(N) algorithm;
      - backtracking regex engines (worst-case exponential) vs. O(N) Thompson-NFA-based engines.

    All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst case and the average case?

    Related: The Rise of "Worse is Better". (For the purpose of this question, the "Worse is Better" phrase is used in a narrower sense than in the article, namely algorithmic time complexity.)

    Python's design philosophy:

        The ABC group strived for perfection. For example, they used tree-based
        data structure algorithms that were proven to be optimal for asymptotically
        large collections (but were not so great for small collections).

    This example would be the answer if there were no computers capable of storing these large collections (in other words, "large" is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it is the fastest known (2008), but it is inferior to worse algorithms. Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

    Read the article

  • Ruby Regexp: + vs. *. Special behaviour?

    - by seb
    Using Ruby regexps I get the following results:

        >> 'foobar'[/o+/]
        => "oo"
        >> 'foobar'[/o*/]
        => ""

    But:

        >> 'foobar'[/fo+/]
        => "foo"
        >> 'foobar'[/fo*/]
        => "foo"

    The documentation says:

        *: zero or more repetitions of the preceding
        +: one or more repetitions of the preceding

    So I expected 'foobar'[/o*/] to return the same result as 'foobar'[/o+/]. Does anybody have an explanation for that?
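    A quick irb check of where each match starts may help frame answers, since regex engines return the leftmost possible match and /o*/ can succeed with zero repetitions:

        >> 'foobar' =~ /o*/     # index where the leftmost match begins
        => 0
        >> $~[0]                # the match itself: an empty string at offset 0
        => ""
        >> 'foobar' =~ /o+/     # + needs at least one 'o', so it starts later
        => 1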

    Read the article

  • Eclipse vs. Visual Studio: What are the features in Eclipse that are not present in Visual Studio an

    - by CodeToGlory
    I keep hearing that Eclipse is better than, or way ahead of, Visual Studio, but when I installed Eclipse I felt its interface was very clunky and hard to use. So I want to know what is so great about Eclipse, and whether there are others who agree with me. I also could not find a similar question that talks about the specific features of Eclipse and how they compare to Visual Studio.

    Read the article

  • SOAP UI Pro vs Fitnesse, has anybody used SOAP UI Pro?

    - by Miral
    We are using Fitnesse for subsystem testing, i.e. WCF & RESTful services. Now, as writing Fitnesse tests requires a lot of effort, we are thinking of using SOAP UI Pro, which provides this sort of facility. We are not 100% sure how useful it would be. Can anyone give a suggestion on using SOAP UI versus Fitnesse, or share pros & cons regarding either of them?

    Read the article

  • ISQL Perform instruction: after editadd editupdate of table vs. after add update of table

    - by Frank Computer
    INFORMIX-SQL 7.3 Perform screens: according to the documentation, in an "after editadd editupdate of table" control block, the instructions are executed before the row is added to or updated in the table, whereas in an "after add update of table" control block, the instructions are executed after the row has been added or updated. That supposedly means any instructions which alter the values of field-tags linked to table.columns would not be committed to the table, while field-tags linked to displayonly fields would still change. However, when using "after add update of table" I placed instructions which alter the values of field-tags linked to table.columns, and their displayed and committed values also changed! I would have thought that an "after add update of table" block would only alter displayonly fields.

    Read the article

  • “Could not create the file”, “Access is denied” and “Unrecoverable build error” in building setup project in VS 2008

    - by Mister Cook
    When building a setup project I get this message:

        Error 27: Could not create the file
        'C:\Users\MyName\AppData\Local\Temp\VSI1E1A.tmp'. 'Access is denied.'

    I have tried the following (from http://support.microsoft.com/kb/329214/EN-US):

        regsvr32 "C:\Program Files (x86)\Common Files\Microsoft Shared\MSI Tools\mergemod.dll"

    The DLL registers, but this does not fix my problem. I have also tried a clean build, deleting the temp folder, running VS 2008 as administrator, and restarting my PC, but the error occurs every time. I have no anti-virus software running, and I am on Windows 7 64-bit. This operation worked fine until recently. I have read of many other users seeing this, but found no solution. The only half-solution I found was to edit the setup properties and switch "Package files" to "As loose uncompressed files". This works but is not ideal, as I need a full installer.

    Read the article

  • C file read leaves garbage characters

    - by KJ
    Hi. I'm trying to read the contents of a file into my program, but I keep occasionally getting garbage characters at the end of the buffers. I haven't been using C a lot (rather, I've been using C++), but I assume it has something to do with streams. I don't really know what to do, though. I'm using MinGW. Here is the code (this gives me garbage at the end of the second read):

        #include <stdio.h>
        #include <stdlib.h>

        char* filetobuf(char *file)
        {
            FILE *fptr;
            long length;
            char *buf;

            fptr = fopen(file, "r"); /* Open file for reading */
            if (!fptr) /* Return NULL on failure */
                return NULL;
            fseek(fptr, 0, SEEK_END); /* Seek to the end of the file */
            length = ftell(fptr); /* Find out how many bytes into the file we are */
            buf = (char*)malloc(length + 1); /* Allocate a buffer for the entire length
                                                of the file and a null terminator */
            fseek(fptr, 0, SEEK_SET); /* Go back to the beginning of the file */
            fread(buf, length, 1, fptr); /* Read the contents of the file into the buffer */
            fclose(fptr); /* Close the file */
            buf[length] = 0; /* Null terminator */

            return buf; /* Return the buffer */
        }

        int main()
        {
            char* vs;
            char* fs;

            vs = filetobuf("testshader.vs");
            fs = filetobuf("testshader.fs");

            printf("%s\n\n\n%s", vs, fs);

            free(vs);
            free(fs);

            return 0;
        }

    The filetobuf function is from this example: http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_%28C_/_SDL%29. It seems right to me, though. So anyway, what's up with that?
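    One plausible culprit to investigate (an assumption, not a confirmed diagnosis): on Windows/MinGW, opening in text mode ("r") lets CRLF translation make fread() return fewer bytes than the length ftell() reported, so the bytes between the real end of the data and buf[length] are left uninitialized, and the terminator sits past the garbage. A sketch of a more defensive variant:

        #include <stdio.h>
        #include <stdlib.h>

        /* Open in binary mode so no newline translation can shrink the read,
         * and terminate the buffer after the bytes actually read. */
        char *filetobuf_bin(const char *file)
        {
            FILE *fptr = fopen(file, "rb");
            if (!fptr)
                return NULL;

            fseek(fptr, 0, SEEK_END);
            long length = ftell(fptr);
            fseek(fptr, 0, SEEK_SET);

            char *buf = malloc(length + 1);
            if (!buf) {
                fclose(fptr);
                return NULL;
            }

            size_t got = fread(buf, 1, length, fptr); /* bytes actually read */
            fclose(fptr);

            buf[got] = '\0'; /* terminate at the real end of the data */
            return buf;
        }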

    Read the article
