Search Results

Search found 22173 results on 887 pages for 'concerned client'.

Page 547 of 887

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which will be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for various reasons, but I'm worried about a few things. I have a good few years of experience working with SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it.

    First, accessing data should be easy. So I thought I should use LINQ to MySQL, but somewhere I read that it is not directly supported; instead, the MySQL .NET connector adds support for Entity Framework. I don't know the pros and cons of that. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server directly. I'm also concerned about performance: someone told me that if we use Entity Framework, it fetches a lot of data and then filters it. Is that right? So the questions basically are:

    1. Is LINQ to MySQL possible? If yes, where can I get more details on it?
    2. What are the pros and cons of using Entity Framework with MySQL?
    3. Will it be easy to access data using Entity Framework with MySQL?
    4. Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
    5. Does it fetch a huge amount of data from the database and then apply filters to it?

    If that sounds like too many questions, then just let me know what you, as an experienced person in this area, would do in this situation (with your reasoning); that should answer my question.
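    On the filtering concern: whether Entity Framework filters in the database or in memory depends on where the query is composed. A repository that exposes IQueryable<T> lets the logic layer apply filters while EF still translates them to SQL. A minimal C# sketch using EF's DbContext API (the entity, context, and repository names are illustrative, not from the question):

        using System.Linq;
        using System.Data.Entity;

        public class Project
        {
            public int ID { get; set; }
            public bool IsActive { get; set; }
        }

        public class AppContext : DbContext
        {
            public DbSet<Project> Projects { get; set; }
        }

        public class ProjectRepository
        {
            private readonly AppContext _ctx;
            public ProjectRepository(AppContext ctx) { _ctx = ctx; }

            // Returning IQueryable defers execution: the caller's Where/OrderBy
            // are composed into the final SQL statement, not applied in memory.
            public IQueryable<Project> All() { return _ctx.Projects; }
        }

        // Logic layer: only active projects are fetched from the database.
        // var active = repository.All().Where(p => p.IsActive).ToList();

    Returning IEnumerable<T> instead would pull rows first and filter them in memory, which is exactly the behavior the question worries about.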

    Read the article

  • Rails Binary Stream support

    - by Craig Walker
    I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned about the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails?

    BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, so I should be able to switch from one to the other. If this is not the case (i.e., there's some problem with using Rails with a particular DB), then please do mention it.

    UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload, effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.

    Read the article

  • Java: Clearing up the confusion on what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset; it is caused by "an underlying TCP/IP error."

    What I want to know is what a SocketException: Connection reset really means, besides "underlying TCP/IP error," and what really causes it. I doubt it has anything to do with the connection being closed, since closing a connection isn't an exception-worthy event, and reading from a closed connection is, but that isn't an "underlying TCP/IP error."

    My hypothesis is this: a connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), whereas a SocketTimeoutException is generated only when no data is generated to be read (since it is thrown during a read after a certain duration, and read is waiting for data but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc.

    My question is what source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either in EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval) {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address) {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile-time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation in order to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ... and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ... but I'm completely at a loss, and I don't even know if such a thing is possible.

    Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, but undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is in global scope. Please correct me if I'm wrong about this.

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships:

    • has many Contributions
    • has many Payments

    In my result set, I need the following aggregate values:

    • Number of unique contributors (DonorID on the Contribution table)
    • Total contributed (SUM of Amount on the Contribution table)
    • Total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions and the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment
             WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution
             WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution
             WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;

        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least four years' commitment, during which I won't have a good monetary income. I am also concerned whether the PhD will help my future career development; my career goal is software engineering only.

    Some of the PhD info: the PhD is CS related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing. More specifically, the research topic is Deep Web search.

    Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email describing an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience, and their coding skills are really, really bad. By bad I mean they do not know common patterns, and they do not know the pitfalls of the programming languages or the idioms for doing things right.

    I guess a commitment of at least four years is worthy of serious consideration. I am 27 at this moment. If I take the offer, that implies I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice on whether it is good or not to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer. I do not intend to go into academia.

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large, unusual value. If I have the code:

        double result = 1.0d % 0.1d;

    it will give a result of 0.09999999999999995, when I would expect a value of 0. Note that this problem doesn't exist using the division operator:

        double result = 1.0d / 0.1d;

    gives a result of 10.0, meaning that the remainder should be 0.

    Let me be clear: I'm not surprised that an error exists; I'm surprised that the error is so darn large compared to the numbers at play. 0.0999 ~= 0.1, and 0.1 is on the same order of magnitude as 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to a double.Epsilon, or say "it's equal if the difference is < 0.00001". I've read up on this topic on StackOverflow, in the following posts: one, two, three, amongst others. Can anyone explain why this error is so large? And are there any suggestions for avoiding the problem in the future? (I know I could use decimal instead, but I'm concerned about the performance of that.)
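    The size of the remainder follows from how 0.1 is stored: the closest double to 0.1 is slightly larger than one tenth, so it fits into 1.0 only nine times, leaving almost a full divisor as the remainder. A short C# sketch of this reasoning (the printed values are what IEEE 754 doubles produce):

        using System;

        class ModuloDemo
        {
            static void Main()
            {
                // The double nearest to 0.1 is slightly *larger* than 1/10:
                Console.WriteLine(0.1.ToString("G17"));          // 0.10000000000000001

                // 10 * 0.1d would exceed 1.0, so the quotient is 9 and the
                // remainder is 1.0 - 9 * 0.1d: almost a whole divisor left over.
                Console.WriteLine((1.0 % 0.1).ToString("R"));    // 0.09999999999999995

                // decimal represents 0.1 exactly, at a performance cost.
                Console.WriteLine(1.0m % 0.1m);                  // 0.0
            }
        }

    Division doesn't show the problem because 1.0 / 0.1d lands close enough to 10 that rounding to the nearest double hides the error, while % reports it in full.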

    Read the article

  • What options exist to get text/ list items to appear in columns?

    - by alex
    I have a bunch of HTML markup coming from an external source, and it is mainly h3 elements and ul elements. I want to have the content flow in 3 columns. I know there is something coming in CSS3, but what options do I have to get this content to flow nicely into the 3 columns for the time being? I'm not that concerned about IE6 (as long as it degrades gracefully). Am I stuck using jQuery to parse the markup and chop it up into 3 divs which float? Thank you.

    Update: As per request, here is some of the HTML I am working with:

        <h3>Tourism Industry</h3>
        <ul>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
        </ul>

        <h3>Small Business</h3>
        <ul>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
        </ul>

    And a whole lot more following the same format.

    Read the article

  • codegen:nullValue vs msprop:nullValue

    - by Ken
    OK, I have datasets that I created way back in the 1.1 framework, in which we used codegen:nullValue within the XSD to handle null values. However, if I open one of these datasets with VS 2005 (i.e. the 2.0 framework) and add a column, it removes the codegen setting from the entire XSD but adds in msprop:nullValue. However, unlike in previous years, I noticed this time that the property code was NOT overridden: it still returns the null value specified in codegen, as it did in the past. Meaning the msprop setting appears to be creating the proper code behind the scenes (see example below).

    Anyone know of any other differences? Should I be concerned about deploying a new XSD WITHOUT the codegen code but instead with the msprop XML?

    Example. The original creates:

        Public Property ParentID() As Integer
            Get
                If Me.IsParentIDNull Then
                    Return -1
                Else
                    Return CType(Me(Me.tableCompany.ParentIDColumn), Integer)
                End If
            End Get
            Set
                Me(Me.tableCompany.ParentIDColumn) = value
            End Set
        End Property

    The new version creates:

        Public Property ParentID() As Integer
            Get
                If Me.IsParentIDNull Then
                    Return -1
                Else
                    Return CType(Me(Me.tableCompany.ParentIDColumn), Integer)
                End If
            End Get
            Set
                Me(Me.tableCompany.ParentIDColumn) = value
            End Set
        End Property

    BUT is there anything else that might be occurring that I am NOT seeing, thus MAKING me re-enter all the codegen settings? THANKS!

    Read the article

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document.

    The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like so:

    • Implement Binary File Reader
    • Implement Binary File Writer
    • Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated.

    So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?

    Read the article

  • Prototypal inheritance should save memory, right?

    - by Techpriester
    Hi Folks, I've been wondering: using prototypes in JavaScript should be more memory efficient than attaching every member of an object directly to it, for the following reasons:

    • The prototype is just one single object.
    • The instances hold only references to their prototype.

    Versus: every instance holds a copy of all the members and methods that are defined by the constructor.

    I started a little experiment with this:

        var TestObjectFat = function() {
            this.number = 42;
            this.text = randomString(1000);
        }

        var TestObjectThin = function() {
            this.number = 42;
        }
        TestObjectThin.prototype.text = randomString(1000);

    randomString(x) just produces a, well, random String of length x. I then instantiated the objects in large quantities like this:

        var arr = new Array();
        for (var i = 0; i < 1000; i++) {
            arr.push(new TestObjectFat()); // or new TestObjectThin()
        }

    ... and checked the memory usage of the browser process (Google Chrome). I know, that's not very exact... However, in both cases the memory usage went up significantly as expected (about 30MB for TestObjectFat), but the prototype variant used not much less memory (about 26MB for TestObjectThin). I also checked: the TestObjectThin instances contain the same string in their "text" property, so they are really using the property of the prototype.

    Now, I'm not so sure what to think about this. The prototyping doesn't seem to be the big memory saver at all. I know that prototyping is a great idea for many other reasons, but I'm specifically concerned with memory usage here. Any explanations why the prototype variant uses almost the same amount of memory? Am I missing something?

    Read the article

  • What are the fastest-performing options for a read-only, unordered collection of unique strings?

    - by Dan Tao
    Disclaimer: I realize the totally obvious answer to this question is HashSet<string>. It is absurdly fast, it is unordered, and its values are unique.

    But I'm just wondering, because HashSet<T> is a mutable class, so it has Add, Remove, etc.; and so I am not sure if the underlying data structure that makes these operations possible makes certain performance sacrifices when it comes to read operations -- in particular, I'm concerned with Contains. Basically, I'm wondering what are the absolute fastest-performing data structures in existence that can supply a Contains method for objects of type string, within or outside of the .NET framework itself.

    I'm interested in all kinds of answers, regardless of their limitations. For example I can imagine that some structure might be restricted to strings of a certain length, or may be optimized depending on the problem domain (e.g., range of possible input values), etc. If it exists, I want to hear about it.

    One last thing: I'm not restricting this to read-only data structures. Obviously any read-write data structure could be embedded inside a read-only wrapper. The only reason I even mentioned the word "read-only" is that I don't have any requirement for a data structure to allow adding, removing, etc. If it has those functions, though, I won't complain.
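    One knob worth noting, sketched briefly in C# below: HashSet<string>'s Contains cost is dominated by hashing and equality checks, and both depend on the comparer the set is constructed with. StringComparer.Ordinal does a straight character comparison with no culture rules, which is typically the cheapest choice when only exact matches matter. (A sketch, not a benchmark.)

        using System;
        using System.Collections.Generic;

        class ContainsDemo
        {
            static void Main()
            {
                var words = new[] { "alpha", "beta", "gamma" };

                // Ordinal comparison skips culture-aware collation rules.
                var set = new HashSet<string>(words, StringComparer.Ordinal);

                Console.WriteLine(set.Contains("beta"));  // True
                Console.WriteLine(set.Contains("delta")); // False
            }
        }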

    Read the article

  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC2, which came with VS2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing to MVC as I'm not that far along at this point.

    I have some questions about the routing capabilities. Namely, are they required to use MVC, or can I more or less ignore routing? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)? The reason I don't want to use routing is that I've already defined a custom URL "rewrite" mechanism of my own (which fires on session_start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's aspx-overhead-free approach seems like a better fit for how I've already started to build the application (I am not using viewstate at all, for example).

    I guess my big concern is whether the routing can be ignored, or if I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how would I do that? As a new routing routine, or sticking with session_start (if that's even possible)?

    Lastly, I don't want to use anything even remotely 'intelligent/readable' for the URL: for a site like StackOverflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it would seem to me that the friendlier MVC routing URLs (which indirectly show method names) could pose a security risk on a private, non-public website app like the one I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely. Is this possible?
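    For reference, routing can't be removed outright, since MVC locates controller actions through the route table, but it can be reduced to the single stock route and then ignored. A sketch of the standard MVC 2 registration (this is the project-template default, not custom code):

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // One generic catch-all pattern handles every controller/action;
                // no per-page route definitions are needed.
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }

    Whether this coexists cleanly with a session_start-based rewriter is a separate question: routes are matched on each request before the action runs, so the rewrite would need to produce URLs this pattern can match.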

    Read the article

  • Testing ActionFilterAttributes with MSpec

    - by Tomas Lycken
    I'm currently trying to grasp MSpec, mainly to learn new ways of (T/B)DD, to be able to make an educated decision on which technology to use. Previously, I've mostly (read: only) used the built-in MSTest framework with Moq, so BDD is quite new to me.

    I'm writing an ASP.NET MVC app, and I want to implement PRG. Last time I did this, I used action filters to export and import ModelState via TempData, so that I could return a RedirectResult and the validation errors would still be there when the user got the view. I tested that scenario by verifying two things:

    a) that the ExportModelStateAttribute I had written was applied (among the tests for my controller), and
    b) that the attribute worked (among the tests for action filter attributes).

    However, in BDD I've understood I should be even more concerned with behavior, and even less with implementation. This means I should probably just verify that the model state is in TempData when the action has finished executing, not necessarily that it's done via an attribute. To further complicate things, attributes are not run when calling the action directly in a test, so I can't just call the action and see if the job's been done. How should I spec/test this in MSpec?
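    One behavior-focused way to spec this is to exercise the filter directly and assert only on the observable outcome (ModelState lands in TempData), without the spec caring how the filter does it. A rough MSpec sketch in C#, assuming the ExportModelStateAttribute from the question copies ModelState into TempData in OnActionExecuted and stores it under a "ModelState" key; the fake controller and key name are illustrative and should be adjusted to the real implementation:

        using System.Web.Mvc;
        using Machine.Specifications;

        public class FakeController : Controller { }

        [Subject(typeof(ExportModelStateAttribute))]
        public class when_an_action_redirects_with_model_state_errors
        {
            static ExportModelStateAttribute filter;
            static FakeController controller;
            static ActionExecutedContext filterContext;

            Establish context = () =>
            {
                filter = new ExportModelStateAttribute();
                controller = new FakeController();
                controller.ViewData.ModelState.AddModelError("Name", "Required");

                // MVC 2's filter contexts expose settable properties for testing.
                filterContext = new ActionExecutedContext
                {
                    Controller = controller,
                    Result = new RedirectResult("/form")
                };
            };

            Because of = () => filter.OnActionExecuted(filterContext);

            // The behavior under spec: model state ends up in TempData,
            // regardless of the mechanism the filter uses internally.
            It should_copy_model_state_into_temp_data = () =>
                controller.TempData.ContainsKey("ModelState").ShouldBeTrue();
        }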

    Read the article

  • SQL Query Math Gymnastics

    - by keruilin
    I have two tables of concern here: users and race_weeks. User has many race_weeks, and race_week belongs to User; therefore, user_id is a FK in the race_weeks table. I need to perform some challenging math on fields in the race_weeks table in order to return the users with the most all-time points. Here are the fields we need to manipulate in the race_weeks table:

    • races_won (int)
    • races_lost (int)
    • races_tied (int)
    • points_won (int, pos or neg)
    • recordable_type (varchar; Robots can race, but we're only concerned with type 'User')

    Just so that you fully understand the business logic at work here: over the course of a week a user can participate in many races, and the race_week record represents the summary results of the user's races for that week. A user is considered active for the week if races_won, races_lost, or races_tied is greater than 0; otherwise the user is inactive.

    So here's what we need to do in our query in order to return users with the most points won (actually net_points_won): calculate each user's net_points_won (not a field in the DB). To calculate net_points, you take (1000 * count_of_active_weeks) - sum(points_won). (Why 1000? Just imagine that every week the user is spotted 1000 points to compete and enter races. We want to factor out what we spot the user, because the user could enter only one race for the week for 100 points and be sitting on 900, which would skew who actually EARNED the most points.)

    This one is a little convoluted, so let me know if I can clarify further.

    Read the article

  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in certain contexts

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#.

    As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and that the only way to get a clean string was to pass through some SQL-injection-prevention logic.

    The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious whether anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance impact of abstracted primitive numeric types on my calculations at runtime.
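    For the degrees/radians case, the wrapping pattern in C# usually means one small struct per unit, so that mixing them up becomes a compile error. A minimal sketch (the type and method names are illustrative):

        using System;

        public struct Radians
        {
            private readonly double _value;
            public Radians(double value) { _value = value; }
            public double Value { get { return _value; } }

            public Degrees ToDegrees() { return new Degrees(_value * 180.0 / Math.PI); }
        }

        public struct Degrees
        {
            private readonly double _value;
            public Degrees(double value) { _value = value; }
            public double Value { get { return _value; } }

            public Radians ToRadians() { return new Radians(_value * Math.PI / 180.0); }
        }

        public static class Trig
        {
            // Passing Degrees here is now a compile error, not a silent bug.
            public static double Sine(Radians angle) { return Math.Sin(angle.Value); }
        }

    On the performance worry: a single-field struct is a value type, so there is no per-value heap allocation, though whether the JIT treats it exactly like a raw double (across method calls, or in generics) is situational; measuring the hot path is the only reliable answer.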

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions?

    Now, before anyone gets narky/concerned: I have written this code already, in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY!

    Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED INSERT STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL INSERTS FROM BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D

    Read the article

  • Java try finally variations

    - by Petr Gladkikh
    This question has nagged me for a while, but I have not found a complete answer to it yet (e.g. this one is for C#: http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider the two following Java code fragments:

        Closeable in = new FileInputStream("data.txt");
        try {
            doSomething(in);
        } finally {
            in.close();
        }

    and the second variation:

        Closeable in = null;
        try {
            in = new FileInputStream("data.txt");
            doSomething(in);
        } finally {
            if (null != in) in.close();
        }

    The part that worries me is that the thread might be somehow interrupted between the moment the resource is acquired (e.g. the file is opened) and the moment the resulting value is assigned to the respective local variable. Are there any scenarios where the thread might be interrupted at the point above other than these?

    • InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError is thrown
    • The JVM exits (e.g. via kill, System.exit())
    • Hardware failure (or a bug in the JVM, for a complete list :)

    I have read that the second approach is somewhat more "idiomatic", but IMO in the scenario above there's no difference, and in all other scenarios they are equal. So the question: what are the differences between the two? Which should I prefer if I am concerned about freeing resources (especially in heavily multi-threaded applications), and why? I would appreciate it if anyone could point me to the parts of the Java/JVM specs that support the answers.

    Read the article

  • Transitioning from desktop app written in C++ to a web-based app

    - by Karim
    We have a mature Windows desktop application written in C++. The application's GUI sits on top of a Windows DLL that does most of the work for the GUI (it's kind of the engine); it, too, is written in C++. We are considering transitioning the Windows app to be a web-based app for various reasons. What I would like to avoid is having to write the CGI for this web-based app in C++. That is, I would rather have the power of a 4G language like Python or a .NET language for creating the web-based version of this app.

    So, the question is: given that I need to use a C++ DLL on the backend to do the work of the app, what technology stack would you recommend for sitting between the user's browser and the C++ DLL? We can assume that the web server will be Windows. Some options:

    1. Write a COM layer on top of the Windows DLL, which can then be accessed via .NET, and use ASP.NET for the UI.
    2. Access the DLL's exported interface directly from .NET, and use ASP.NET for the UI.
    3. Write a custom Python library that wraps the Windows DLL, so that the rest of the code can be written in Python.
    4. Write the CGI using C++ and a C++-based MVC framework like Wt.

    Concerns:

    • I would rather not use C++ for the web framework if it can be avoided; I think languages like Python and C# are simply more powerful and efficient in terms of development time.
    • I'm concerned that by mixing managed and unmanaged code with one of the .NET solutions, I'm asking for lots of little problems that are hard to debug (purely anecdotal evidence for that).
    • The same is true for using a Python layer. Anything that's slightly off the beaten path like that worries me, in that I don't have much evidence one way or the other whether it is a viable long-term solution.
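    For option 2 above, the usual mechanism is P/Invoke: .NET can call a DLL's exported C functions directly, with no COM layer. A minimal C# sketch, assuming the engine exposes a flat C API (the DLL name, function name, and signature here are made up for illustration and must match the real exports exactly):

        using System;
        using System.Runtime.InteropServices;

        public static class EngineInterop
        {
            // Hypothetical export; the real name, signature, and calling
            // convention must match the DLL, or calls will fail at runtime.
            [DllImport("engine.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern int ComputeResult(int input);

            public static int Compute(int input)
            {
                return ComputeResult(input);
            }
        }

    C++ exports that expose classes rather than flat functions can't be consumed this way directly; that is when the COM layer (option 1) or a C wrapper around the engine becomes necessary.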

    Read the article

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/FastCGI. We are reading images (~30kb each) from BLOB fields in a MySQL database and returning those images with a custom MIME type via a controller method. Caching these images on the hard disk makes no sense, because they change with every request; the only reason we cache them in the DB is that creating these images is quite expensive, and the data used to create the images is also present in plain text on the website.

    Now to the problem itself: when returning such an image, things get extremely slow. The code runs totally fine on paster itself with no visible delay, but as soon as it's running via FastCGI/lighttpd, the described phenomenon happens. I profiled the controller method that returns the blob, and the entire method runs in a few milliseconds, but when "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FastCGI; this only seems to happen with TurboGears or Pylons.

    Here, for your consideration, is the piece of source code concerned:

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            """ Return a DB persisted image when requested """
            img = model.Images.by_id(img_id)  # get image from DB
            response.headers['content-type'] = 'image/png'
            return img.data  # this causes the app to hang for 10 seconds

    Read the article

  • One or more rows contain values violating non-null, unique, or foreign-key constraints in SQL Script

    - by Musikero31
    Need help on this. I'm wondering why this error occurred. Below is the script concerned:

        SELECT loc.ID
            ,loc.LocCode
            ,loc.LocName
            ,st.StateName
            ,reg.RegionName
            ,ctry.CountryName
            ,ISNULL(CONVERT(DATE, loc.UpdatedDate), CONVERT(DATE, loc.CreatedDate)) AS [ModifiedDate]
            ,stf.Name AS [ModifiedBy]
        FROM Spkr_Country AS ctry WITH (NOLOCK)
        INNER JOIN Spkr_Location AS loc WITH (NOLOCK) ON ctry.ID = loc.CountryID
        INNER JOIN Spkr_State AS st WITH (NOLOCK) ON loc.StateID = st.ID
        INNER JOIN Spkr_Region AS reg WITH (NOLOCK) ON loc.RegionID = reg.ID
        INNER JOIN Staff AS stf ON ISNULL(loc.UpdatedBy, loc.CreatedBy) = stf.StaffId
        WHERE (loc.IsActive = 1)
            AND ((@LocCode = '') OR (@LocCode <> '' AND loc.LocCode LIKE @LocCode + '%'))
            AND ((@RegionID < 1) OR (@RegionID > 0 AND loc.RegionID = @RegionID))
            AND ((@StateID < 1) OR (@StateID > 0 AND loc.StateID = @StateID))
            AND ((@CountryID < 1) OR (@CountryID > 0 AND loc.CountryID = @CountryID))

    The error probably occurred here:

        INNER JOIN Staff AS stf ON ISNULL(loc.UpdatedBy, loc.CreatedBy) = stf.StaffId

    The requirement is that if loc.UpdatedBy is null, the loc.CreatedBy column is used instead. However, when I used this, it generated the error mentioned. In the database, loc.CreatedBy is not nullable while loc.UpdatedBy is nullable. I checked by running the script directly, and it works fine there. What's wrong with my code? Please help.

    Read the article

  • Using Javascript to generate and save a file

    - by Toji
    I've been fiddling with WebGL lately and have gotten a Collada reader working. Problem is, it's pretty slow (Collada is a very verbose format), so I'm going to start converting files to an easier-to-use format (probably JSON). Thing is, I already have the code to parse the file in JavaScript, so I may as well use it as my exporter too! The problem is saving.

    Now, I know that I can parse the file, send the result to the server, and have the browser request the file back from the server as a download. But in reality the server has nothing to do with this particular process, so why get it involved? I already have the contents of the desired file in memory. Is there any way that I could present the user with a download using pure JavaScript? (I doubt it, but I might as well ask...)

    And to be clear: I am not trying to access the filesystem without the user's knowledge! The user will provide a file (probably via drag and drop), the script will transform the file in memory, and the user will be prompted to download the result. All of which should be "safe" activities as far as the browser is concerned.

    Read the article

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.)

    I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for a standard practice/pattern (or code; C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action, and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    • I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously, or threading, or whatever. (Just a specific technical thought: I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
    • Each item in the table should only be executed once.

    Some other thoughts: I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in external program code (e.g. a standalone executable or some action triggered by a web page) executed against the database. Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
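    On the "in progress" marking: a common SQL Server approach, sketched below in C#, is to claim one row atomically in a single UPDATE so two workers can never grab the same item. This sketch assumes a hypothetical InProgress bit column added to the job table and a bit-typed Completed column; the table and column names are illustrative:

        using System;
        using System.Data.SqlClient;

        public static class JobQueue
        {
            // Atomically claims one pending job and returns its ID, or null if none.
            // READPAST lets concurrent workers skip rows another worker has locked,
            // so multiple worker processes can run in parallel without collisions.
            public static int? ClaimNextJob(SqlConnection conn) // assumes an open connection
            {
                const string sql = @"
                    UPDATE TOP (1) Jobs WITH (ROWLOCK, READPAST)
                    SET InProgress = 1
                    OUTPUT inserted.ID
                    WHERE Completed = 0 AND InProgress = 0;";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    object id = cmd.ExecuteScalar();
                    return id == null || id == DBNull.Value ? (int?)null : (int)id;
                }
            }
        }

    Marking Completed and TimeCompleted afterwards is then an ordinary second UPDATE keyed on the claimed ID, which also satisfies the execute-once requirement.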

    Read the article

  • Dealing with expired authentication for a partially filled form?

    - by aaronls
    I have a large webform, and would like to prompt the user to log in if their session expires, or have them log in when they submit the form. It seems that having them log in when they submit the form creates a lot of challenges, because they get redirected to the login page and then the postback data for the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires: they stay on the original form page, a panel appears telling them the session has expired and they need to log in, the login is submitted asynchronously, the login panel disappears, and the user is still on the original partially filled form and can submit it.

    Is this easily doable using the existing ASP.NET Membership controls? When they submit the form, will I need to worry about the session key? I mean, I am wondering whether the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs).

    Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.

    Read the article
