Search Results

Search found 5623 results on 225 pages for 'prevent deletion'.

Page 195/225 | < Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010," but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, even though the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc--"5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
            SELECT * FROM Foo
            WHERE LogDate >= @StartDate
              AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both calls, since the current time is 2010-05-04 15:41:27.
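    As a side note, a minimal ADO.NET sketch of the proposed DateTime.Today.AddDays(1) approach might look like the following (the connection string and surrounding plumbing are assumptions for illustration; only the proc name and parameters come from the question). Because the dates travel as parameters, the plan for GetFooData is generally compiled once and reused regardless of the values passed.

        using (var conn = new SqlConnection(connectionString))   // connectionString is assumed
        using (var cmd = new SqlCommand("GetFooData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
            // Rounded end date: the same value for every run on a given day
            cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = DateTime.Today.AddDays(1);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // consume the rows...
            }
        }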

    Read the article

  • Multiple dispatching issue

    - by user1440263
    I'll try to be brief: I'm dispatching an event from a MovieClip (a customized symbol in the library) this way:

        public function _onMouseDown(e:MouseEvent):void {
            var obj:Object = {targetClips:["tondo"], functionString:"testFF"};
            dispatchEvent(new BridgeEvent(BridgeEvent.BRIDGE_DATA, obj));
        }

    The BridgeEvent class is the following:

        package events {
            import flash.events.EventDispatcher;
            import flash.events.Event;

            public class BridgeEvent extends Event {
                public static const BRIDGE_DATA:String = "BridgeData";
                public var data:*;

                public function BridgeEvent(type:String, data:*) {
                    this.data = data;
                    super(type, true);
                }
            }
        }

    The document class listens to the event this way:

        addEventListener(BridgeEvent.BRIDGE_DATA, eventSwitcher);

    In the eventSwitcher method I have a simple trace("received"). What happens: when I click the MovieClip, the trace gets duplicated and the output window prints many "received" lines (even though there is only one click). Why does this happen? How do I prevent this behaviour? Any help is appreciated.

    [SOLVED] I'm sorry, you will not believe this. A colleague, as a joke, converted the MOUSE_DOWN handler to MOUSE_OVER.

    Read the article

  • VS2010 Web Deploy: how to remove absolute paths and automate setAcl?

    - by Julien Lebosquain
    The integrated Web Deployment in Visual Studio 2010 is pretty nice. It can create a package ready to be deployed with MSDeploy on a target IIS machine. Problem is, this package will be redistributed to a client who will install it himself using "Import Application" in IIS when MSDeploy is installed. The default package always includes the full path from the development machine, "D:\Dev\XXX\obj\Debug\Package\PackageTmp", in the source manifest file. It doesn't prevent installation, of course, since it was designed this way, but it looks ugly in the import dialog and has no meaning to the client. Worse, he will wonder what those paths are, which is quite confusing. By customizing the .csproj file (adding MSBuild properties used by the package creation task), I managed to add additional parameters to the package. However, I spent most of the afternoon in the 2600-line Web.Publishing.targets trying to understand which parameter influences the "development path" behavior, in vain. I also tried to use setAcl to customize security on a given folder after deployment, but I only managed to do this with MSBuild by using a relative path... it shouldn't matter if I resolve the first problem, though. I could modify the generated archive after its creation, but I would prefer everything to be automated with MSBuild. Does anyone know how to do that?
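    One avenue worth trying -- this is an assumption based on how the VS2010 publishing pipeline stages the package, not something confirmed in the question -- is overriding the internal _PackageTempDir property in the .csproj, so the path recorded in the source manifest becomes a neutral location instead of obj\Debug\Package\PackageTmp:

        <!-- Sketch only: _PackageTempDir is an internal/undocumented property of the
             VS2010 web publishing targets; verify it against your Web.Publishing.targets. -->
        <PropertyGroup>
          <_PackageTempDir>C:\MyAppPackage</_PackageTempDir>
        </PropertyGroup>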

    Read the article

  • multi-shop orders table and sequential order numbers based on shop

    - by imanc
    Hey, I am looking at building a shop solution that needs to be scalable. Currently it handles 1-2000 orders on average per day across multiple country-based shops (e.g. uk, us, de, dk, es, etc.), but this volume could be 10x that amount in two years. I am looking at either using separate country-shop databases to store the orders tables, or combining everything into one orders table. If all orders exist in one table with a global ID (auto number) and a country ID (e.g. uk, de, dk, etc.), each country's orders would also need sequential numbering. So in essence we'd have a global ID and a country order ID, with the country order ID being sequential per country only, e.g.:

        global ID = 1000, country = UK, country order ID = 1000
        global ID = 1001, country = DE, country order ID = 1000
        global ID = 1002, country = DE, country order ID = 1001
        global ID = 1003, country = DE, country order ID = 1002
        global ID = 1004, country = UK, country order ID = 1001

    The global ID would be DB-generated and not something I would need to worry about. But I am thinking that I'd have to do a query to get the current country order ID + 1 to find the next sequential number. Two things concern me about this: 1) query times when the table has potentially millions of rows and I'm doing a read before a write, and 2) the potential for ID number clashes due to simultaneous writes/reads. With a MyISAM table the entire table could be locked while the last country order ID + 1 is retrieved, to prevent ID clashes. I am wondering if anyone knows of a more elegant solution? Cheers, imanc
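    A common alternative, sketched here with made-up table and column names rather than anything from the question, is to keep one counter row per country and bump it atomically with MySQL's LAST_INSERT_ID(expr) trick; this avoids locking the whole orders table and keeps the reserved value per-connection:

        -- one row per country shop
        CREATE TABLE country_order_seq (
            country CHAR(2) PRIMARY KEY,
            last_id INT NOT NULL
        ) ENGINE=InnoDB;

        -- atomically reserve the next country order id
        UPDATE country_order_seq
           SET last_id = LAST_INSERT_ID(last_id + 1)
         WHERE country = 'DE';

        -- the reserved value, visible only to this connection
        SELECT LAST_INSERT_ID();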

    Read the article

  • Producer consumer with qualifications

    - by tgguy
    I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows:

    - there are 5 producers and 2 consumers
    - a producer waits for a random time and then pushes a number onto a shared queue
    - a consumer should pull a number off the queue as soon as the queue is nonempty, and then sleep for a short time to simulate doing work
    - the consumers should block when the queue is empty
    - producers should block when the queue has more than 4 items in it, to prevent it from growing huge

    Here is my plan for each step above:

    - the producers and consumers will be agents that don't really care for their state (just nil values or something); I just use the agents to send-off a "producer" or "consumer" function to do at some time. The shared queue will be (def queue (ref [])). Perhaps this should be an atom though?
    - in the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue.
    - I am thinking of making the consumer agents watch the queue for changes with add-watcher. Not sure about this though... it will wake up the consumers on any change, even if the change came from a consumer pulling something off (possibly making it empty). Perhaps checking for this in the watcher function is sufficient. Another problem I see: if all consumers are busy, what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent, or does it disappear?
    - see above
    - I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough documentation on how to use it, and my initial testing didn't seem to work (sorry, I don't have the code on me anymore).
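    For what it's worth, here is a minimal sketch of the same shape using a bounded java.util.concurrent queue instead of a ref -- an assumption on my part, not the agent/watcher design described above. .put blocks while the queue already holds 4 items and .take blocks while it is empty, which covers the last three requirements directly:

        (def queue (java.util.concurrent.LinkedBlockingQueue. 4))

        (defn producer []
          (Thread/sleep (rand-int 1000))
          (.put queue (rand-int 100))      ; blocks while the queue is full (4 items)
          (recur))

        (defn consumer []
          (let [n (.take queue)]           ; blocks while the queue is empty
            (Thread/sleep 200)             ; simulate doing work with n
            (println "consumed" n))
          (recur))

        (dotimes [_ 5] (future (producer)))
        (dotimes [_ 2] (future (consumer)))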

    Read the article

  • Create new or update existing entity at one go with JPA

    - by Alex R
    I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp in an entity that has already been stored, and otherwise create and store a new entity with the current timestamp. As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:

        // Load existing entity, if any.
        Entity e = entityManager.find(Entity.class, id);
        if (e == null) {
            // Could not find an entity with the specified id in the database, so create a new one.
            e = entityManager.merge(new Entity(id));
        }
        // Set current time...
        e.setTimestamp(new Date());
        // ...and finally save the entity.
        entityManager.flush();

    Please note that in this example the entity identifier is not generated on insert; it is known in advance. When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities at the same time with the same identifier, resulting in the error. I think there are a few solutions to the problem. Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would that be the most efficient way? Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists, but I doubt that OpenJPA (the JPA implementation of my choice) supports it. Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database, but I don't want to leave a comfortable API and mess with a hairy JDBC+SQL combination. Maybe there is a magic trick to fix this using only the standard JPA API, but I don't know it yet. Please help.
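    One common workaround, shown here only as a sketch under the assumption of resource-local transactions (not something stated in the question), is to accept that the insert race can be lost and retry the whole find-or-persist unit of work in a fresh transaction:

        // Sketch only: retries the find-or-persist in a *new* transaction when the
        // unique-key race is lost.  Assumes a resource-local EntityManagerFactory emf,
        // the known identifier in `id`, and javax.persistence imports.
        for (int attempt = 0; attempt < 2; attempt++) {
            EntityManager em = emf.createEntityManager();
            EntityTransaction tx = em.getTransaction();
            try {
                tx.begin();
                Entity e = em.find(Entity.class, id);
                if (e == null) {
                    e = new Entity(id);
                    em.persist(e);          // may hit the unique index if another thread inserted first
                }
                e.setTimestamp(new Date());
                tx.commit();
                break;                      // success
            } catch (PersistenceException lostRace) {
                if (tx.isActive()) tx.rollback();
                // fall through and retry once: the row should exist on the second pass
            } finally {
                em.close();
            }
        }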

    Read the article

  • INSERT stored procedure does not work?

    - by vikitor
    Hello, I'm trying to insert from one database, called suspension, into the Notification table in the Animals database. My stored procedure is this:

        ALTER PROCEDURE [dbo].[spCreateNotification]
            -- Add the parameters for the stored procedure here
            @notRecID int,
            @notName nvarchar(50),
            @notRecStatus nvarchar(1),
            @notAdded smalldatetime,
            @notByWho int
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            INSERT INTO Animals.dbo.Notification
            VALUES (@notRecID, @notName, @notRecStatus, null, @notAdded, @notByWho);
        END

    The null is there to fill one column that would otherwise not be supplied. I've tried different approaches, such as listing the column names after the table name and then supplying only the fields I have in VALUES. I know the problem is not the stored procedure itself, because when I execute it from SQL Server Management Studio and enter the parameters, it works. So I guess the problem must be in the repository where I call the stored procedure:

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                    (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch
            {
                return;
            }
        }

    It does not record the value in the database. I've been debugging and getting mad about this, because it works when I execute it manually but not when I automate the process in my application. Does anyone see anything wrong with my code? Thank you.
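    One thing worth ruling out first (a suggestion, not a confirmed diagnosis): the empty catch block above swallows whatever the data layer throws, so a permissions problem, a bad connection string, or a failed cast (for example NotAdded being null) all look like the call silently did nothing. A minimal change that surfaces the real error while debugging:

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                    (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch (Exception ex)
            {
                // Log and rethrow instead of swallowing the exception,
                // so the actual failure reason is visible.
                throw new ApplicationException("spCreateNotification failed", ex);
            }
        }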

    Read the article

  • What is happening in Crockford's object creation technique?

    - by Chris Noe
    There are only 3 lines of code, and yet I'm having trouble fully grasping this:

        Object.create = function (o) {
            function F() {}
            F.prototype = o;
            return new F();
        };

        newObject = Object.create(oldObject);

    (from Prototypal Inheritance)

    1) Object.create() starts out by creating an empty function called F. I'm thinking that a function is a kind of object. Where is this F object being stored? Globally, I guess.

    2) Next our oldObject, passed in as o, becomes the prototype of function F. Function (i.e., object) F now "inherits" from our oldObject, in the sense that name resolution will route through it. Good, but I'm curious what the default prototype is for an object -- Object? Is that also true for a function-object?

    3) Finally, F is instantiated and returned, becoming our newObject. Is the "new" operation strictly necessary here? Doesn't F already provide what we need, or is there a critical difference between function-objects and non-function-objects? Clearly it won't be possible to have a constructor function using this technique.

    What happens the next time Object.create() is called? Is global function F overwritten? Surely it is not reused, because that would alter previously configured objects. And what happens if multiple threads call Object.create() -- is there any sort of synchronization to prevent race conditions on F?
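    On the first question, a quick check you can paste into a console shows that F is a local variable inside Object.create, so nothing global is created or overwritten between calls, and each call builds a fresh F:

        var oldObject = { greeting: "hi" };
        var anotherObject = { greeting: "hello" };

        var a = Object.create(oldObject);
        var b = Object.create(anotherObject);

        console.log(a.greeting);   // "hi"    -- a still routes to oldObject
        console.log(b.greeting);   // "hello" -- the second call did not disturb a
        console.log(typeof F);     // "undefined" -- F never leaks into the global scope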

    Read the article

  • Sudoku Recursion Issue (Java)

    - by SkylineAddict
    I'm having an issue with creating a random Sudoku grid. I tried modifying a recursive pattern that I used to solve the puzzle. The puzzle itself is a two dimensional integer array. This is what I have (by the way, the method doesn't only randomize the first row. I had an idea to randomize the first row, then just decided to do the whole grid):

        public boolean randomizeFirstRow(int row, int col){
            Random rGen = new Random();
            if(row == 9){
                return true;
            }
            else{
                boolean res;
                for(int ndx = rGen.nextInt() + 1; ndx <= 9;){
                    //Input values into the boxes
                    sGrid[row][col] = ndx;
                    //Then test to see if the value is valid
                    if(this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid) && this.isQuadrantValid(row, col, sGrid)){
                        // grid valid, move to the next cell
                        if(col + 1 < 9){
                            res = randomizeFirstRow(row, col+1);
                        }
                        else{
                            res = randomizeFirstRow( row+1, 0);
                        }
                        //If the value inputed is valid, restart loop
                        if(res == true){
                            return true;
                        }
                    }
                }
            }
            //If no value can be put in, set value to 0 to prevent program counting to 9
            setGridValue(row, col, 0);
            //Return to previous method in stack
            return false;
        }

    This results in an ArrayIndexOutOfBoundsException with a ridiculously high or low number (+/- 100,000). I've tried to see how far it goes into the method, and it never goes beyond this line:

        if(this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid) && this.isQuadrantValid(row, col, sGrid))

    I don't understand how the array index goes so high. Can anyone help me out?
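    The most likely culprit -- an educated guess from the code shown, not a confirmed diagnosis -- is that rGen.nextInt() with no argument returns any int, so enormous positive or negative values land in sGrid and presumably get used as indices inside the validation helpers. A bounded draw keeps candidates in 1..9, and since the for loop above never modifies ndx, iterating over a shuffled candidate list is one way to restructure it:

        // A bounded draw keeps a candidate in the legal range:
        int candidate = rGen.nextInt(9) + 1;   // 1..9 inclusive

        // One alternative loop body (java.util.List, ArrayList and Collections assumed imported):
        List<Integer> candidates = new ArrayList<Integer>();
        for (int v = 1; v <= 9; v++) candidates.add(v);
        Collections.shuffle(candidates, rGen);
        for (int value : candidates) {
            sGrid[row][col] = value;
            // ... same validity checks and recursion as in the original loop ...
        }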

    Read the article

  • What about race condition in multithreaded reading?

    - by themoob
    Hi, according to an article on IBM.com, "a race condition is a situation in which two or more threads or processes are reading or writing some shared data, and the final result depends on the timing of how the threads are scheduled. Race conditions can lead to unpredictable results and subtle program bugs." Although the article concerns Java, I have in general been taught the same definition. As far as I know, a simple read from RAM consists of setting the states of specific input lines (address, read, etc.) and reading the states of the output lines. This is an operation that obviously cannot be executed simultaneously by two devices and has to be serialized. Now let's suppose a couple of threads access an object in memory. In theory, this access should be serialized in order to prevent race conditions. But the readers/writers algorithm, for example, assumes that an arbitrary number of readers can use the shared memory at the same time. So the question is: does one have to implement an exclusive lock for reads when using multithreading (e.g. in the WinAPI)? If not, why? Where is this control implemented -- OS, hardware? Best regards, Kuba
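    To make the readers/writers point concrete, here is a minimal Java sketch (Java chosen only because the quoted article concerns it; this is an illustration, not a claim about what the OS or hardware does underneath): multiple threads may hold the read lock at the same time, and the exclusive write lock is needed only when the shared data is mutated.

        import java.util.concurrent.locks.ReadWriteLock;
        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public class SharedCounter {
            private final ReadWriteLock lock = new ReentrantReadWriteLock();
            private long value = 0;

            public long read() {
                lock.readLock().lock();      // many threads may hold this at once
                try {
                    return value;
                } finally {
                    lock.readLock().unlock();
                }
            }

            public void increment() {
                lock.writeLock().lock();     // exclusive: no readers or writers while held
                try {
                    value++;
                } finally {
                    lock.writeLock().unlock();
                }
            }
        }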

    Read the article

  • JBoss 6 unpacks jars from WEB-INF/lib of war

    - by Maxym
    When I start JBoss 6, I see that it unpacks all jar files from WEB-INF/lib into the tmp/vfs/automountXXX folder. For example, jackrabbit-server.war contains the library asm-3.1.jar, and in the tmp folder I then see the following folders and files:

        asm-3.1.jar-83dc35ead0d41d41/asm-3.1.jar
        asm-3.1.jar-2a48f1c13ec7f25d/contents/"unpacked asm-3.1.jar"

    It does not take files from my.ear/lib, only from WEB-INF/lib... Why is that, and is there any way to prevent it? It just slows down application server startup (and shutdown), which is not very convenient during development. It may be somehow related to the Java EE 6 specification and ejb-jars, which can now be located in WEB-INF/lib, but I don't have such libraries in my war files...

    UPDATE: When I repack jackrabbit-server.war into jackrabbit-server.ear, which contains jackrabbit-server.war, and move all its libraries to jackrabbit-server.ear/lib, I still see two folders in tmp:

        asm-3.1.jar-215a36131ebb088e/asm-3.1.jar
        asm-3.1.jar-14695f157664f00/contents/

    but in this case the last folder is empty. So it still creates two folders, but does not unpack my library. Also, I use exploded deployment, so the question is only about jar files, not about unpacking the ear/war.

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).

    Read the article

  • How important is it to use SSL on every page of your website?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work with HTTP, but after you log in, it has to remain in HTTPS to prevent session hi-jacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service. It's sort of an auction site so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal though, so I'm not saving any sort of credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages like where ever you enter your password, and that's it? That's what our competition seems to do.

    Read the article

  • IIS 6+ASP.NET - many temp files generated

    - by moshe_ravid
    I have an ASP.NET app plus some .NET web services running on IIS 6 (Windows 2003 Server). The issue is that IIS is generating a lot (!) of files in the "c:\WINDOWS\Temp" directory. "A lot of files" means thousands of files, which amount to more than 3 GB so far. The files are generated by this command:

        C:\WINDOWS\SysWOW64\inetsrv "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library /utf8output /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\vfagt\113819dd\db0d5802\assembly\dl3\fedc6ef1\006e24d8_3bc9ca01\VfAgentWService.DLL" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Web.Services\2.0.0.0__b03f5f7f11d50a3a\System.Web.Services.dll" /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll" /R:"C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll" /out:"C:\WINDOWS\TEMP\9i_i2bmg.dll" /debug- /optimize+ /nostdlib /D:_DYNAMIC_XMLSERIALIZER_COMPILATION "C:\WINDOWS\TEMP\9i_i2bmg.0.cs"

    The files in the temp directory are pairs of *.out & *.err files, where the *.err file is zero size and the *.out file contains the compiler output messages. What is causing IIS to generate so many files? How can I prevent it?

    UPDATE: The problem is that the command I described above (csc.exe) is being executed many (many) times, generating the .out & .err files over and over until they consume the disk space. So my question is: what is causing this command to run so many times? (I don't have that many .aspx & .asmx files in my web app.) Thanks, Moe
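    The /D:_DYNAMIC_XMLSERIALIZER_COMPILATION flag in that command suggests these are XmlSerializer assemblies being compiled at runtime for the web services. One commonly suggested mitigation -- offered here as a sketch to verify, not a guaranteed fix -- is to pre-generate the serialization assembly at build time so the runtime csc.exe step can be skipped, e.g. in the web-service project file:

        <!-- Pre-generate XmlSerializers (produces MyAssembly.XmlSerializers.dll at build time) -->
        <PropertyGroup>
          <GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
        </PropertyGroup>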

    Read the article

  • How to write a good PHP database insert using an associative array

    - by Tom
    In PHP, I want to insert into a database using data contained in an associative array of field/value pairs. Example:

        $_fields = array('field1'=>'value1','field2'=>'value2','field3'=>'value3');

    The resulting SQL insert should look as follows:

        INSERT INTO table (field1,field2,field3) VALUES ('value1','value2','value3');

    I have come up with the following PHP one-liner:

        mysql_query("INSERT INTO table (".implode(',',array_keys($_fields)).") VALUES (".implode(',',array_values($_fields)).")");

    It separates the keys and values of the associative array and implodes them to generate comma-separated strings. The problem is that it does not escape or quote the values being inserted into the database. To illustrate the danger, imagine if $_fields contained the following:

        $_fields = array('field1'=>"naustyvalue); drop table members; --");

    The following SQL would be generated:

        INSERT INTO table (field1) VALUES (naustyvalue); drop table members; --;

    Luckily, multiple queries are not supported, but quoting and escaping are still essential to prevent SQL injection vulnerabilities. How do you write your PHP MySQL inserts? Note: PDO or mysqli prepared queries aren't currently an option for me because the codebase already uses mysql extensively -- a change is planned, but it would take a lot of resources to convert.
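    A minimal sketch of one way to do this with the same mysql_* extension (assuming an open connection; the identifier handling here is illustrative, not a complete whitelist): escape and quote every value with mysql_real_escape_string(), and backtick the column names, before imploding:

        <?php
        // $_fields as in the question
        $columns = array();
        $values  = array();
        foreach ($_fields as $field => $value) {
            $columns[] = '`' . str_replace('`', '', $field) . '`';      // crude identifier sanitising
            $values[]  = "'" . mysql_real_escape_string($value) . "'";  // escape + quote each value
        }
        $sql = "INSERT INTO `table` (" . implode(',', $columns) . ") "
             . "VALUES (" . implode(',', $values) . ")";
        mysql_query($sql);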

    Read the article

  • Outlook Interop: Password protected PST file headache

    - by Ed Manet
    Okay, I have no problem identifying the .PST file using the Outlook Interop assemblies in a C# app. But as soon as I hit a password protected file, I am prompted for a password. We are in the process of disabling the use of PSTs in our organization and one of the steps is to unload the PST files from the users' Outlook profile. I need to have this app run silently and not prompt the user. Any ideas? Is there a way to create the Outlook.Application object with no UI and then just try to catch an Exception on password protected files?

        // create the app and namespace
        Application olApp = new Application();
        NameSpace olMAPI = olApp.GetNamespace("MAPI");

        // get the storeID of the default inbox
        string rootStoreID = olMAPI.GetDefaultFolder(OlDefaultFolders.olFolderInbox).StoreID;

        // loop thru each of the folders
        foreach (MAPIFolder fo in olMAPI.Folders)
        {
            // compare the first 75 chars of the storeid
            // to prevent removing the Inbox folder.
            string s1 = rootStoreID.Substring(1, 75);
            string s2 = fo.StoreID.Substring(1, 75);
            if (s1 != s2)
            {
                // unload the folder
                olMAPI.RemoveStore(fo);
            }
        }
        olApp.Quit();

    Read the article

  • mysql image disable print download

    - by Vish
    Hi, we use a Flex AIR client and a WAMP server. TIFF images are stored in MySQL. Currently, I can download the image from the AIR client and it prompts for a download dialog. Things are fine up to this point. We got a new requirement: only some users may print the image that gets downloaded; the other users should not be able to print the TIFF image. I'm wondering how to accomplish this. One idea, though I'm not sure it's efficient, is to convert the requested image to PDF on the server side, disable the print option there (hoping there are APIs available for this), and send back the PDF. Please let me know if there are better ideas. Also, is there a way to prevent the file download dialog from popping up every time the file is requested for download? Can we just get the file stream to the client and manipulate it to open with a particular viewer or write it to PDF... Please help.

    Read the article

  • Details View and integration with TinyMCE <%@ Page validateRequest="false" %>

    - by GibboK
    I use TinyMCE in a DetailsView in EDIT MODE. I would like to know if there is a solution that can prevent Request Validation from triggering an error WITHOUT USING <%@ Page validateRequest="false" %> on my page. The only way I have found so far is to encode the TextBox used by TinyMCE with the "xml" encoding option:

        tinyMCE.init({
            encoding: "xml",
            // ...

    This way Request Validation does not trigger an error, but when reading the data back, the TextBox content is encoded. I also tried to decode the content of the TextBox on PageLoad using this code:

        myTextBox.Text = HttpUtility.HtmlDecode(myTextBox.Text);

    But the result is not as expected: I still see just the encoded text. Any ideas? Thanks.

    UPDATE: I found a solution to my problem. I added this code in the DataBound event of the DetailsView:

        TextBox myContentAuthor = (TextBox)uxAuthorListDetailsView.FindControl("uxContentAuthorInput");
        myContentAuthor.Text = HttpUtility.HtmlDecode(myContentAuthor.Text);

    So in the DataBound event (which should work even on postback) the content will be decoded for the TinyMCE textbox. Here is how it should work:

    01 - TinyMCE escapes the data entered in the textbox, using encoding: "xml".
    02 - The data is stored escaped.
    03 - To read the data back into a TextBox that TinyMCE is applied to, use HttpUtility.HtmlDecode in the DataBound event of the DetailsView (so it appears decoded).
    04 - You can then modify the content in the textbox in edit mode. On postback, TinyMCE will encode it again using encoding: "xml", and so on.

    Hope this helps someone else. But please give me your comments on this solution, thanks! Maybe you can come up with a more elegant solution! :-)

    Read the article

  • temporary tables within stored procedures on slave servers with readonly set

    - by lau
    Hi, we have set up a master/slave replication scheme and we've had problems lately because some users wrote directly to the slave instead of the master, making the whole setup inconsistent. To prevent these problems from happening again, we've decided to remove the insert, delete, update, etc. rights from the users accessing the slave. The problem is that some stored procedures (for reading) require temporary tables. I read that setting the global variable read_only to true would do what I want and still allow the stored procedures to work correctly ( http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_read_only ), but I keep getting the error:

        The MySQL server is running with the --read-only option so it cannot execute this statement (1290)

    The stored procedure that I used (for testing purposes) is this one:

        DELIMITER $$

        DROP PROCEDURE IF EXISTS test_readonly $$
        CREATE DEFINER=dbuser@% PROCEDURE test_readonly()
        BEGIN
            CREATE TEMPORARY TABLE IF NOT EXISTS temp (
                BT_INDEX int(11),
                BT_DESC VARCHAR(10)
            );
            INSERT INTO temp (BT_INDEX, BT_DESC) VALUES (222,'walou'), (111,'bidouille');
            DROP TABLE temp;
        END $$

        DELIMITER ;

    The CREATE TEMPORARY TABLE and the DROP TABLE work fine with the read_only flag -- if I comment out the INSERT line, it runs fine -- but whenever I want to insert into or delete from that temporary table, I get the error message. I use MySQL 5.1.29-rc and my default storage engine is InnoDB. Thanks in advance, this problem is really driving me crazy.

    Read the article

  • how to check null value of Integer type field in ASP.NET MVC view?

    - by Vikas
    Hi, I have an integer field in the database with the "Not Null" property. When I create a view and do validation, if I leave that field blank it is treated as 0, so I cannot compare it with 0 -- if someone actually enters the value 0, it would be flagged as an error! Another issue is that I am using model errors as described in the book "ASP.NET MVC 1.0" and on Scott Gu's blog, and I am checking the value in the partial class of the object (created by LINQ to SQL), i.e.:

        public partial class Person
        {
            public bool IsValid
            {
                get { return (GetRuleViolations().Count() == 0); }
            }

            public IEnumerable<RuleViolation> GetRuleViolations()
            {
                if (String.IsNullOrEmpty(Name))
                    yield return new RuleViolation("Name is Required", "Name");
                if (Age == 0)
                    yield return new RuleViolation("Age is Required", "Age");
                yield break;
            }

            partial void OnValidate(ChangeAction action)
            {
                if (!IsValid)
                    throw new ApplicationException("Rule violations prevent saving");
            }
        }

    There is also a problem with ranges: if the column is declared as smallint in the database (short in C#) and the posted value exceeds that range, I just get the error "A value is required". So, finally, what is the best way to handle this kind of validation in ASP.NET MVC?
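    One common way around the "blank becomes 0" problem -- a sketch, not taken from the question; the view-model class and property names are illustrative -- is to bind the form to a nullable int, so an empty field arrives as null while 0 remains a legitimate value:

        // Hypothetical view model used only for the form post
        public class PersonInput
        {
            public string Name { get; set; }
            public int? Age { get; set; }      // null = field left blank, 0 = user typed 0
        }

        // Validation can now tell "blank" apart from "zero"
        public IEnumerable<RuleViolation> GetRuleViolations(PersonInput input)
        {
            if (String.IsNullOrEmpty(input.Name))
                yield return new RuleViolation("Name is Required", "Name");
            if (input.Age == null)
                yield return new RuleViolation("Age is Required", "Age");
            else if (input.Age < 0 || input.Age > short.MaxValue)
                yield return new RuleViolation("Age is out of range", "Age");
        }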

    Read the article

  • Validation problem with a 'JotForm' template

    - by Thomas
    A client sent me a form template they had created using jotform.com to implement on their WordPress site. The template is supposed to hide part of the form until the user clicks the 'next' button, at which point a script is supposed to validate all of the input fields the user has presumably filled out and then display the rest of the form. While I have successfully managed to get the form to display the next part when the user clicks 'next', it fails to validate the input fields. It's kind of difficult to explain without a huge block of text, so it is probably easier to show you:

    The original working template that the customer sent me: http://www.loftist.com/jotform/List_Your_Loft.html

    The problem child: http://www.loftist.com/?page_id=78

    If you just click on one of the input fields and then click elsewhere on the page, the input fields correctly return a validation error message and prevent the user from clicking on the 'next' button. However, if you simply click on the next button, the next set of fields gets displayed anyway. Any thoughts? What am I doing wrong here? I'm convinced this must be a really simple problem, but I'm not sure what it could be...

    Read the article

  • jQuery AJAX chained calls + Celery in Django

    - by user1029968
    Currently, clicking one of the links in my application triggers an AJAX call (GET) that, if it succeeds, triggers a second one, and the second one, if it succeeds, calls a third one. This way the user can be informed which part of the process started by clicking the link is currently ongoing. So in the template file in my Django project, the click callback for that link looks like this:

        $("#the-link").click(function(item) {
            // CALL 1
            $.ajax({
                url: "{% url ajax_call_1 %}",
                data: {
                    // something
                }
            })
            .done(function(call1Result) {
                // CALL 2
                $.ajax({
                    url: "{% url ajax_call_2 %}",
                    data: {
                        // call1Result passed here to CALL 2
                    }
                })
                .done(function(call2Result) {
                    // CALL 3
                    $.ajax({
                        url: "{% url ajax_call_3 %}",
                        data: {
                            // call2Result passed here to CALL 3
                        }
                    })
                    .done(function(call3Result) {
                        // expected result if everything went fine
                        console.log("wow, it worked!");
                        console.log(call3Result);
                    })
                    .fail(function(errorObject) {
                        console.log("call3 failed");
                        console.log(errorObject);
                    });
                })
                .fail(function(errorObject) {
                    console.log("call2 failed");
                    console.log(errorObject);
                });
            })
            .fail(function(errorObject) {
                console.log("call1 failed");
                console.log(errorObject);
            });
        });

    This works fine for me. The thing is, I'd like the remaining calls not to be interrupted if the user closes the browser before they have finished (finishing all three will take some time), since there is additional logic in the Django view functions called by each GET request. For example, if the user clicks the link and closes the browser during CALL 1, is it possible to somehow carry on with CALL 2 and CALL 3? I know that normally I'd use a Celery task to process such a function, but is that still possible with the chained calls described here? Any help is much appreciated!
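    A sketch of how the server side might look if the three steps were moved into Celery tasks and chained there (task and view names are made up for illustration): the browser then only needs the first request to go through, and the chain keeps running on the worker even if the user closes the page.

        import json
        from django.http import HttpResponse
        from celery import shared_task, chain

        @shared_task
        def step_one(payload):
            # ... logic currently living in the ajax_call_1 view ...
            return {"step": 1}

        @shared_task
        def step_two(call1_result):
            # ... logic currently living in the ajax_call_2 view ...
            return {"step": 2}

        @shared_task
        def step_three(call2_result):
            # ... logic currently living in the ajax_call_3 view ...
            return {"step": 3}

        def start_processing(request):
            # Each task receives the previous task's return value as its first argument.
            chain(step_one.s(dict(request.GET)), step_two.s(), step_three.s()).delay()
            return HttpResponse(json.dumps({"status": "started"}), content_type="application/json")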

    Read the article

  • Insert code into a method - Java

    - by DutrowLLC
    Is there a way to automatically insert code into a method? I have the following, and I would like to insert the indicated code:

        public class Person {
            Set<String> updatedFields = new LinkedHashSet<String>();

            String firstName;

            public String getFirstName(){
                return firstName;
            }

            boolean isFirstNameChanged = false;          // Insert

            public void setFirstName(String firstName){
                if( !isFirstNameChanged ){               // Insert
                    isFirstNameChanged = true;           // Insert
                    updatedFields.add("firstName");      // Insert
                }                                        // Insert
                this.firstName = firstName;
            }
        }

    I'm also not sure whether I can get the field name as a string from inside the method itself, as indicated on the line where I add the field name to the set of updated fields: updatedFields.add("firstName");. And I'm not sure how to insert fields into a class, such as the boolean field that tracks whether the field has been modified before (added for efficiency, to avoid having to touch the Set on every call): boolean isFirstNameChanged = false;. The most obvious answer seems to be to use code templates inside Eclipse, but I'm concerned about having to go back and change the code later.
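    As an aside, a minimal sketch (my own simplification, not from the question; lastName is an illustrative second field) that drops the per-field boolean: since updatedFields is a Set, adding the same field name twice is already a cheap no-op, so each generated setter only needs one extra line:

        public void setFirstName(String firstName) {
            updatedFields.add("firstName");   // Set semantics make repeated adds harmless
            this.firstName = firstName;
        }

        public void setLastName(String lastName) {
            updatedFields.add("lastName");
            this.lastName = lastName;
        }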

    Read the article

  • Visual Studio 2010 randomly unable to debug WCF service.

    - by rossisdead
    I'm running Visual Studio 2010 on a Windows 7 x64 machine, and occasionally VS is giving me the good old "The remote procedure could not be debugged.This usually indicates that debugging has not been enabled on the server" error that a lot of people ask about. My problem, though, is that it seems to only do this randomly(it can be anywhere from a few minutes to a few hours), and after I've made plenty of successful calls to the service already. It doesn't prevent the service from working. It still returns values and doesn't throw any errors. The only difference is that annoying dialog pops up everytime I start to debug my application. I should mention that I'm connecting the WCF service from a WPF application. If I launch the web site the service is part of, I don't get the dialog. A few of the things I've tried that do not work: Killing and restarting the server. Compiling the web server in x86 Enabling tracing, but couldn't find any problems. Is this just a bug in Visual Studio 2010, or is there something I'm missing?

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:

        Tasks_Table
        - id
        - task_complete
        - task_active
        - column_1
        - ..
        - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer has read from the table. I have a stored procedure that selects all uncompleted, inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows. To enable scaling of the system, I want to allow deployment of the service on more machines. Because I want to prevent a task from being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time -- I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning it. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
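    One pattern worth considering -- a sketch, not from the question, and it assumes the flag columns are bit-like with 1 meaning set -- is to claim and return the rows in a single atomic statement using an OUTPUT clause, so two services can never pick up the same task and no separate read-then-update step is needed:

        CREATE PROCEDURE ClaimOpenTasks
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Atomically mark the tasks active and return the claimed rows.
            UPDATE T
               SET task_active = 1
            OUTPUT inserted.*
              FROM Tasks_Table AS T WITH (ROWLOCK, READPAST)
             WHERE T.task_complete = 0
               AND T.task_active = 0;
        END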

    Read the article
