Search Results

Search found 6399 results on 256 pages for 'record'.

Page 222 of 256

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController(stateService);
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            stateController = new StateController(stateService);
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Is this overkill? Do I need to check to see if the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
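
    If the goal is to test only the controller, mocking the service directly is the usual alternative to wiring up a fake repository. Below is a minimal sketch using Moq with MSTest; Moq is an assumption (the question doesn't say which mocking library, if any, is available), and the IStateService interface and the State.Id property are inferred from the code above rather than shown in the post.

        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;
        using System.Web.Mvc;

        [TestClass]
        public class StateControllerDeleteTests
        {
            [TestMethod]
            public void Delete_Post_Calls_Service_And_Redirects_To_Index()
            {
                // Arrange: mock only the dependency the controller talks to.
                var stateService = new Mock<IStateService>();      // IStateService is assumed from the constructor above
                var controller = new StateController(stateService.Object);
                var model = new State { Id = 4 };                  // Id property assumed

                // Act
                var result = controller.Delete(model) as RedirectToRouteResult;

                // Assert: the controller redirected and asked the service to delete exactly once.
                Assert.IsNotNull(result);
                Assert.AreEqual("Index", result.RouteValues["action"]);
                stateService.Verify(s => s.Delete(model), Times.Once());
            }
        }

    That keeps the controller test focused on its own behavior (one service call plus a redirect) and leaves proving the actual deletion to the existing service and repository tests.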


  • confirm alert not working

    - by user318068
    I am building an admin control panel for the members of my site. I designed a page that lists every registered member, with a checkbox in front of each name so that several members can be selected and then deleted with one click of the delete button. The following code works, but now I want a confirmation alert before the delete runs: if the user clicks OK the deletion should go ahead, and if the user clicks Cancel nothing should be deleted. How can I do that?

        while($row = mysql_fetch_array($sql1)) { ?>
        <tr>
            <td align="center" bgcolor="#FFFFFF"><input name="checkbox[]" type="checkbox" id="checkbox[]" value="<?php echo $row['MemberID']; ?>"></td>
            <td bgcolor="#FFFFFF"><?php echo $row['MemberName']; ?></td>
        </tr>
        <?php } ?>
        <tr>
            <td colspan="5" align="center" bgcolor="#FFFFFF"><input name="delete" type="submit" id="delete" value="delete" onclick="javascript:return confirm('are you sure to delete this record???? ')"/></td>
        </tr>
        <?php
        $delete = $_REQUEST['delete'];
        $checkbox = $_REQUEST['checkbox'];
        $count = count($_REQUEST['checkbox']);
        // Check if delete button active, start this
        if($delete){
            echo "<script language=\"Javascript\">\n";
            for($i=0;$i<$count;$i++){
                $del_id = $checkbox[$i];
                $sql = "Delete FROM members where MemberID ='$del_id'";
                $sq2 = "Delete FROM joinroom where MemberID ='$del_id'";
                $result1 = mysql_query($sql);
                $result2 = mysql_query($sq2);
                echo "else {\n";
                echo "alert ('Nothing deleted');\n }";
                echo "</script>";
            }
        }

    Thanks a lot.


  • php request variables assigning $_GEt

    - by chris
    If you take a look at a previous question, http://stackoverflow.com/questions/2690742/mod-rewrite-title-slugs-and-htaccess, I am using the solution that Col. Shrapnel proposed, but when I assign values to $_GET in the actual file, and not from a request, the code doesn't work. It defaults away from the file as if the $_GET variables are not set. The code I have come up with is:

        if(!empty($_GET['cat'])){
            $_GET['target'] = "category";
            if(isset($_GET['page'])){
                $_GET['pageID'] = $_GET['page'];
            }
            $URL_query = "SELECT category_id FROM cats WHERE slug = '".$_GET['cat']."';";
            $URL_result = mysql_query($URL_query);
            $URL_array = mysql_fetch_array($URL_result);
            $_GET['category_id'] = $URL_array['category_id'];
        }elseif($_GET['product']){
            $_GET['target'] = "product";
            $URL_query = "SELECT product_id FROM products WHERE slug = '".$_GET['product']."';";
            $URL_result = mysql_query($URL_query);
            $URL_array = mysql_fetch_array($URL_result);
            print_r($URL_array);
            $_GET['product_id'] = $URL_array['product_id'];
        }

    The original query string I'm trying to reproduce is /cart.php?Target=product&product_id=16142&category_id=249. I'm trying to build the query-string variables in code and then include cart.php, so I can use cleaner URLs. So product/product-title-with-clean-url/ is rewritten to slug.php?product=slug; the slug script then searches the database for a record with a matching slug, returns the product_id as in the code above, builds the query string, and includes cart.php.


  • MongoDB complex MapReduce of video logs

    - by Justin Hourigan
    I have a dataset from video streaming logs. Each video is identified by a FileGUID. The log entries record the FileGUID, the fragment of the video watched, and the bandwidth it was watched at. I would like to create a mapreduce outputting, for each video, a count of fragments, both in total and for each bandwidth. Ideally it would look like this:

        {
            "FileGUID": "50acb3a5796634df0e073285",
            {
                "1": { "total": 76, "0832": 34, "1028": 42 },
                "2": { "total": 42, "0832": 28, "1028": 14 },
                ...
            }
        }

    Is this possible with one mapreduce, or is it a multi-step process, or should I use a different method? Here is a sample of the data:

        { "_id": ObjectId("50acb3a5796634df0e073285"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:57.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(1028), "Segment": NumberInt(1), "Fragment": NumberInt(237),
          "Status": NumberInt(200), "Size": NumberInt(576790), "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0" }

        { "_id": ObjectId("50acb3a5796634df0e073284"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:52.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(1028), "Segment": NumberInt(1), "Fragment": NumberInt(236),
          "Status": NumberInt(200), "Size": NumberInt(577100), "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0" }

        { "_id": ObjectId("50acb3a5796634df0e073283"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:47.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(0832), "Segment": NumberInt(1), "Fragment": NumberInt(234),
          "Status": NumberInt(200), "Size": NumberInt(576664), "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0" }

        { "_id": ObjectId("50acb3a5796634df0e073282"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:42.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(0832), "Segment": NumberInt(1), "Fragment": NumberInt(233),
          "Status": NumberInt(200), "Size": NumberInt(575692), "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0" }


  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue with a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool and am inserting an entity with a field that acts as an incremental INT field. This field is called "SequenceNumber": before each insert, the code reads the database using MAX to get the last SequenceNumber, adds 1 to it, and saves the changes. The concurrency problem happens between getting the last SequenceNumber and saving. I'm not using the ID column for SequenceNumber, as it is not a unique constraint and may reset under certain conditions (monthly, yearly, etc.).

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SequenceNumber 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice:

    1) Concurrency control that locks the table against any reads until the current transaction is completed (I'm not fond of this idea, as I guess the performance impact would be significant).

    2) Create a stored procedure solely for this purpose that does the select and the insert in a single statement, so the concurrency window is minimal (I would prefer an EF-based approach if possible).
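
    One EF-friendly way to close the gap between reading MAX(SequenceNumber) and saving is to do both inside a serializable transaction and take an update lock on the read. The sketch below assumes SQL Server (for the UPDLOCK/HOLDLOCK hints), an EF DbContext with Database.SqlQuery available, and hypothetical InvoiceContext/Invoice types, so treat it as an illustration of the locking idea rather than a drop-in implementation.

        using System;
        using System.Linq;
        using System.Transactions;

        public static class InvoiceNumbers
        {
            public static void InsertNextInvoice(DateTime now)
            {
                var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                using (var db = new InvoiceContext())   // hypothetical context name
                {
                    // UPDLOCK/HOLDLOCK keep a concurrent writer from reading the same MAX until
                    // this transaction commits, so two inserts cannot pick the same number.
                    int next = db.Database.SqlQuery<int>(
                        @"SELECT ISNULL(MAX(SequenceNumber), 0) + 1
                          FROM Invoices WITH (UPDLOCK, HOLDLOCK)
                          WHERE YEAR(DateCreated) = @p0 AND MONTH(DateCreated) = @p1",
                        now.Year, now.Month).Single();

                    db.Invoices.Add(new Invoice
                    {
                        SequenceNumber = next,
                        InvoiceNumber = string.Format("INV{0:D5}_{1:D2}_{2:D2}", next, now.Month, now.Year % 100),
                        DateCreated = now
                    });

                    db.SaveChanges();
                    scope.Complete();
                }
            }
        }

    The lock hints only serialize writers that touch the same month's rows; whether that trade-off beats a dedicated stored procedure or a SEQUENCE object depends on the insert volume.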


  • How do I send floats in window messages?

    - by yngvedh
    Hi, what is the best way to send a float in a Windows message using C++ casting operators? The reason I ask is that the approach which first occurred to me did not work. For the record, I'm using the standard Win32 function to send messages:

        PostWindowMessage(UINT nMsg, WPARAM wParam, LPARAM lParam)

    What does not work: using static_cast<WPARAM>() does not work, since WPARAM is typedef'ed to UINT_PTR and the cast does a numeric conversion from float to int, effectively truncating the value of the float. Using reinterpret_cast<WPARAM>() does not work either, since it is meant for use with pointers and fails with a compilation error.

    I can think of two workarounds at the moment. Using reinterpret_cast in conjunction with the address-of operator:

        float f = 42.0f;
        ::PostWindowMessage(WM_SOME_MESSAGE, *reinterpret_cast<WPARAM*>(&f), 0);

    Using a union:

        union { WPARAM wParam; float f; };
        f = 42.0f;
        ::PostWindowMessage(WM_SOME_MESSAGE, wParam, 0);

    Which of these is preferred? Is there any other, more elegant way of accomplishing this?


  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure.

    I would like to produce a list of records that match this criterion: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields:

        SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
        FROM people_person
        GROUP BY slug
        HAVING (COUNT(slug) > 1)
        ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?


  • Can I get the amount of time for which a key is pressed on a keyboard

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, such as the hold time (the time for which a particular key is pressed) and the digraph time (the time it takes to move from one key to the next). Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like these hold times for the above password:

        C -- 200 ms
        O -- 130 ms
        M -- 150 ms
        P -- 175 ms
        U -- 320 ms
        T -- 230 ms
        E -- 120 ms
        R -- 300 ms

    The rationale behind this is that every user will have different hold times. Say an old person is typing the password: he will take more time than a student, and the timings will be unique to a particular person. To do this project, I need to record the time for each key pressed. I would greatly appreciate it if anyone could guide me on how to get these times. Edit: the language is not important, but I would prefer C. I am more interested in getting the dataset.
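
    The poster prefers C, but since language was said not to matter, here is a small sketch of the bookkeeping in C# with WinForms: remember a stopwatch when each key goes down and log the elapsed time when it comes up. The form and control names are made up for illustration; the same KeyDown/KeyUp idea applies in any GUI toolkit.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Windows.Forms;

        public class HoldTimeForm : Form
        {
            private readonly Dictionary<Keys, Stopwatch> pressed = new Dictionary<Keys, Stopwatch>();

            public HoldTimeForm()
            {
                var box = new TextBox { Dock = DockStyle.Top, UseSystemPasswordChar = true };
                box.KeyDown += (s, e) =>
                {
                    if (!pressed.ContainsKey(e.KeyCode))          // ignore auto-repeat KeyDown events
                        pressed[e.KeyCode] = Stopwatch.StartNew();
                };
                box.KeyUp += (s, e) =>
                {
                    Stopwatch sw;
                    if (pressed.TryGetValue(e.KeyCode, out sw))
                    {
                        Console.WriteLine("{0} held for {1} ms", e.KeyCode, sw.ElapsedMilliseconds);
                        pressed.Remove(e.KeyCode);
                    }
                };
                Controls.Add(box);
            }

            [STAThread]
            static void Main() { Application.Run(new HoldTimeForm()); }
        }

    Digraph times can be collected the same way by also recording a timestamp on each KeyDown and subtracting the previous key's timestamp.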


  • The XPath @root-node-position attribute info

    - by Igor Savinkin
    I couldn't find any information on the @root-node-position XPath attribute. Would you give me a link to where I can read about it? Is it XPath 2.0? The code (not mine) is:

        ../preceding-sibling::div[1]/div[@root-node-position]/div

    applied to this HTML:

        <div class="left">
            <div class='prod2'>
                <div class='name'>Dell Latitude D610-1.73 Laptop Wireless Computer </div>2 GHz Intel Pentium M, 1 GB DDR2 SDRAM, 40 GB
            </div>
            <div class='prod1'>
                <div class='name'>Samsung Chromebook (Wi-Fi, 11.6-Inch) </div>1.7 GHz, 2 GB DDR3 SDRAM, 16 GB
            </div>
        </div>
        <div class="right">
            <div class='price2'>$239.95</div>
            <div class='price1 best'>$249.00</div>
        </div>

    First I fetch a price text under class='right' with this query:

        //DIV[contains(@class,'best')]

    and then I apply the above-mentioned XPath with @root-node-position under class='left' to retrieve the rest of the record info.


  • Creating a MySQL view to SELECT columns from different tables

    - by user330429
    I need help constructing a VIEW over four tables. The view should contain these columns:

        ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT,
        EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME

    using the aliases EMP_REPORT ER, EMP_BIO EB, CUST_RECORD CR, CUST_LOC CL, DEPT_ID DI. The table definitions (from DESCRIBE) are:

        EMP_REPORT:  id int(11) PK auto_increment, empid int(11), custid int(11), status varchar(32),
                     datereported bigint(20), report text (nullable)
        EMP_BIO:     empid int(11) PK, name varchar(56), sex char(1), deptid int(11), email varchar(32),
                     mobile bigint(20) (nullable), gtlk varchar(32) (nullable), skype varchar(32) (nullable),
                     cvid int(11) (nullable)
        CUST_RECORD: custid int(11) PK auto_increment, custname varchar(32), address varchar(255) (nullable),
                     contactp varchar(32) (nullable), mobile bigint(20) (nullable), locid int(11),
                     remarks text (nullable), date int(11) (nullable), addedby int(11) (nullable)
        CUST_LOC:    locid int(11) PK (default 0), locname varchar(32)
        DEPT_ID:     deptid int(11), deptname varchar(32) (nullable)

    The table EMP_REPORT contains reports submitted by employees; all of its columns need to be fetched. The empid in EMP_REPORT should be used to fetch the corresponding name from EMP_BIO (employee biodata). The custid in EMP_REPORT should be used to fetch the corresponding locid from CUST_RECORD (customer record), which in turn is used to fetch locname from CUST_LOC (customer location). The empid in EMP_REPORT is also used to fetch the corresponding deptid from EMP_BIO, which is then used to fetch the corresponding deptname from DEPT_ID (department). I tried constructing the view using a union of different select queries, but didn't get proper results. Please help me. Thanks in advance.


  • Why does this asp.net mvc unit test fail?

    - by Brian McCord
    I have this unit test:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Here are the two controller actions that handle deleting:

        //
        // GET: /State/Delete/5
        public ActionResult Delete(int id)
        {
            var x = _stateService.GetById(id);
            return View(x);
        }

        //
        // POST: /State/Delete/5
        [HttpPost]
        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    What I can't figure out is why this test fails. I have verified that the record actually gets deleted from the list. If I set a breakpoint in the Delete method on the line

        var x = _stateService.GetById(id);

    GetById does indeed return null, just as it should, but when execution gets back to the newresult variable in the test, ViewData.Model is the deleted model. What am I doing wrong?


  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in the future) using C#. The tricky part for me is that the existing DB schema changes often, and the imported file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some third-party upload controls such as SlickUpload, to handle the upload; the uploaded files should be under 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user, so they can match it with our DB schema. I saw CSVReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements, like record matching and a preview of the import records, but I wish to understand how to do this first. Update: I ended up using CsvReader from LumenWorks.Framework for the uploaded CSV files.
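
    For the CSV half, the update says LumenWorks' CsvReader was the library chosen; a short sketch of reading headers and rows with it is below. The file path is hypothetical (in the real page it would be the uploaded file's temporary location), and the column-to-schema matching step is left as a comment.

        using System;
        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        class CsvPreview
        {
            static void Main()
            {
                // Path is hypothetical; in practice this would be the uploaded file's temp path.
                using (var csv = new CsvReader(new StreamReader(@"C:\uploads\import.csv"), true))
                {
                    string[] headers = csv.GetFieldHeaders();      // show these to the user for column matching

                    while (csv.ReadNextRecord())
                    {
                        for (int i = 0; i < csv.FieldCount; i++)
                        {
                            // map headers[i] -> destination column chosen by the user, then insert/update
                            Console.Write("{0}={1}  ", headers[i], csv[i]);
                        }
                        Console.WriteLine();
                    }
                }
            }
        }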


  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory information. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And there is also this question: Are methods also serialized along with the data members in .NET?

    Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi and from the linked question (answer by dan04), sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). A nice C example is also given there, with data and members declared in one class/struct, while at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations, like Pascal's VMT, may be different in memory.
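
    For the .NET side of the question, a quick way to convince yourself is to round-trip an object with BinaryFormatter: the stream contains type metadata and field values, but no method bodies, which is why the same assembly must be loadable when you deserialize. A small sketch (the Point type is just an example):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        [Serializable]
        class Point
        {
            public int X;
            public int Y;
            public int ManhattanLength() { return Math.Abs(X) + Math.Abs(Y); }  // code lives in the assembly, not in the stream
        }

        class Program
        {
            static void Main()
            {
                var p = new Point { X = 3, Y = 4 };
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream())
                {
                    formatter.Serialize(stream, p);
                    // The stream holds the assembly/type name and the field values, but no IL:
                    // deserializing on another machine requires the same assembly to be present.
                    Console.WriteLine("Serialized size: {0} bytes", stream.Length);

                    stream.Position = 0;
                    var copy = (Point)formatter.Deserialize(stream);
                    Console.WriteLine(copy.ManhattanLength());   // methods work because the type was loaded, not read from the file
                }
            }
        }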


  • Seeding many to many tables with Entity Framework

    - by Doozer1979
    I have a Meeting entity and a User entity which have a many-to-many relationship. I'm using AutoPoco to create seed data for the users and meetings. How do I seed the UserMeetings linking table that Entity Framework creates? The linking table has two fields in it: User_Id and Meeting_ID. I'm looping through the list of users that AutoPoco creates and attaching a random number of meetings. Here's what I've got so far:

        foreach (var user in userList)
        {
            var rand = new Random();
            var amountOfMeetingsToAdd = rand.Next(1, 300);
            for (var i = 0; i <= amountOfMeetingsToAdd; i++)
            {
                var randomMeeting = rand.Next(1, MeetingRecords);
                // Error occurs on this line
                user.Meetings.Add(_meetings[randomMeeting]);
            }
        }

    I get an 'Object reference not set to an instance of an object.' exception, even though the meeting record that I'm trying to attach does exist. For info, all of this happens before I save the context to the DB.
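
    A sketch of the same loop with the two most likely culprits addressed is below: a navigation collection that was never initialized (which by itself throws exactly this NullReferenceException on the Add line) and a random index that can run past the end of the _meetings list. The Meeting type and the _meetings list are assumed from the question, so treat this as illustrative rather than a confirmed fix.

        var rand = new Random();                       // one instance, created outside the loop
        foreach (var user in userList)
        {
            // EF does not create navigation collections on plain POCOs, so Meetings may be null here.
            if (user.Meetings == null)
            {
                user.Meetings = new List<Meeting>();
            }

            var amountOfMeetingsToAdd = rand.Next(1, 300);
            for (var i = 0; i < amountOfMeetingsToAdd; i++)
            {
                // The index stays inside the list: Next's upper bound is exclusive.
                var randomMeeting = _meetings[rand.Next(0, _meetings.Count)];
                if (!user.Meetings.Contains(randomMeeting))
                {
                    user.Meetings.Add(randomMeeting);
                }
            }
        }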


  • I have created a PHP script but cannot extract the primary key; the flow is given below

    - by Parth
    I am using a MySQL DB and working with Joomla. My requirement is to track activity (insert/update/delete) on any table and store it in an audit table using triggers; i.e., I am doing auditing. One note on the DB's table structure: a few tables have neither a primary key nor an auto-increment column. The flow of my script is:

    1. Fetch all tables from the DB.
    2. Check whether the table has a trigger or not.
    3. If yes, move on and check the next table, and so on.
    4. If no trigger is found, create the triggers for the table: first check whether the table has a primary key (which is inserted into the tracking audit table for every change made); if it has a primary key, use it in the trigger; if no PK is found, create the trigger without inserting any id into the audit table.

    Now here is my problem: I need the PK every time so that I can record the id of the row in whichever table the insert/update/delete was performed, so that I can later use this audit table to replicate the changes into the production DB. Since some tables have no primary key and no auto-increment column, what should I use to get the particular id of the row in which a change was made? Please guide me, geeks!


  • Where does java.util.logging.Logger store its log?

    - by Harry Pham
    This might be a stupid question, but I am a bit lost with the Java Logger:

        private static Logger logger = Logger.getLogger("order.web.OrderManager");
        logger.info("Removed order " + id + ".");

    Where do I see the log? Also, this quote is from the java.util.logging.Logger documentation: "On each logging call the Logger initially performs a cheap check of the request level (e.g. SEVERE or FINE) against the effective log level of the logger. If the request level is lower than the log level, the logging call returns immediately. After passing this initial (cheap) test, the Logger will allocate a LogRecord to describe the logging message. It will then call a Filter (if present) to do a more detailed check on whether the record should be published. If that passes it will then publish the LogRecord to its output Handlers."

    Does this mean that if I have three request-level logs:

        logger.log(Level.FINE, "Something");
        logger.log(Level.WARNING, "Something");
        logger.log(Level.SEVERE, "Something");

    and my log level is SEVERE, I can see all three logs, and if my log level is WARNING, then I can't see the SEVERE log? Is that correct? And how do I set the log level?


  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using LINQ to get the records and use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average number of total Items that has to be created is around 2 million.

        while (objects.Count != 0)
        {
            using (dataContext = new LinqToSqlContext(new DataContext()))
            {
                foreach (Object objectRecord in objects)
                {
                    // Create a list of 0 - 4 random Items and add each Item to the Object
                    for (int i = 0; i < Random.Next(0, 4); i++)
                    {
                        Item item = new Item();
                        item.Id = Guid.NewGuid();
                        item.Object = objectRecord.Id;
                        item.Created = DateTime.Now;
                        item.Changed = DateTime.Now;
                        dataContext.InsertOnSubmit(item);
                    }
                }
                dataContext.SubmitChanges();
            }
            amountToSkip += 250;
            objects = objectCollection.Skip(amountToSkip).Take(250).ToList();
        }

    Now the problem arises when creating the Items. When running the application (and not even using dataContext) the memory increases consistently. It's like the items are never getting disposed. Does anyone notice what I'm doing wrong? Thanks in advance!
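
    One way to keep the working set flat is sketched below: read each batch with change tracking turned off and use a short-lived context only for the inserts, so tracked entities die with the context that created them. DataContext here stands for the question's own context type, ObjectRecord stands in for its "Object" entity, and whether this actually removes the growth depends on what LinqToSqlContext does internally, so this is an illustration rather than a confirmed fix.

        int batchSize = 250;
        int skipped = 0;
        var random = new Random();   // reuse one instance; "new Random()" in a tight loop tends to repeat values

        while (true)
        {
            List<ObjectRecord> batch;
            using (var readContext = new DataContext())
            {
                readContext.ObjectTrackingEnabled = false;            // read-only: nothing to track
                batch = readContext.GetTable<ObjectRecord>()
                                   .OrderBy(o => o.Id)                // stable ordering so Skip/Take pages consistently
                                   .Skip(skipped)
                                   .Take(batchSize)
                                   .ToList();
            }
            if (batch.Count == 0) break;

            using (var writeContext = new DataContext())
            {
                foreach (var objectRecord in batch)
                {
                    int itemsToCreate = random.Next(0, 5);            // Next's upper bound is exclusive
                    for (int i = 0; i < itemsToCreate; i++)
                    {
                        writeContext.GetTable<Item>().InsertOnSubmit(new Item
                        {
                            Id = Guid.NewGuid(),
                            Object = objectRecord.Id,
                            Created = DateTime.Now,
                            Changed = DateTime.Now
                        });
                    }
                }
                writeContext.SubmitChanges();                          // tracked items are released with this context
            }
            skipped += batchSize;
        }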


  • Social Media Java Design Problem

    - by jboyd
    I need to put something together quickly that will take blog posts and place them on social media sites. The requirements are as follows: blog entries are independent records that already exist; they have a published date and a modified date; and the blog entry application cannot be changed, at least not substantially. A new blog entry, or an update to one, needs to be sent to the social media sites. I currently do not need to update or delete the social media communications if the blog entry is edited or deleted, though I may need to later.

    My design problems here are as follows: how do I know the status of each update? How can I figure out which blog entry updates and postings have already been sent out? How can I quickly poll the blog entry table for postings that haven't yet been sent out, without loading each Entry record from the DB as an object and asking if it's been sent already (that would be too slow)? I cannot hook into any blog entry update code; my only option would be to create a trigger so that an update queues something to be processed. I'm looking for general guiding principles here. The biggest problem I'm having is coming up with any reasonable way to figure out whether a blog entry should be sent to our social media sites in the first place.


  • jQuery replacement for javascript confirm

    - by dcp
    Let's say I want to prompt the user before allowing them to save a record. So let's assume I have the following button defined in the markup:

        <asp:Button ID="btnSave" runat="server" OnClick="btnSave_Click"></asp:Button>

    To force a prompt with normal JavaScript, I could wire the OnClick event for my save button to be something like this (I could do this in Page_Load):

        btnSave.Attributes.Add("onclick", "return confirm('are you sure you want to save?');");

    The confirm call will block until the user actually presses one of the Yes/No buttons, which is the behavior I want. For the jQuery dialog that is the equivalent, I tried something like the code below. But the problem is that, unlike JavaScript's confirm(), it's going to get all the way through this function (displayYesNoAlert) and then proceed into my btnSave_OnClick method on the C# side. I need a way to make it "block" until the user presses the Yes or No button, and then return true or false so that btnSave_OnClick will be called or not called depending on the user's answer. Currently, I just gave up and went with JavaScript's confirm; I just wondered if there was a way to do it.

        function displayYesNoAlert(msg, closeFunction) {
            dialogResult = false;
            // create the dialog if it hasn't been instantiated
            if (!$("#dialog-modal").dialog('isOpen') !== true) {
                // add a div to the DOM that will store our message
                $("<div id=\"dialog-modal\" style='text-align: left;' title='Alert!'>").appendTo("body");
                $("#dialog-modal").html(msg).dialog({
                    resizable: true,
                    modal: true,
                    position: [300, 200],
                    buttons: {
                        'Yes': function () {
                            dialogResult = true;
                            $(this).dialog("close");
                        },
                        'No': function () {
                            dialogResult = false;
                            $(this).dialog("close");
                        }
                    },
                    close: function () {
                        if (closeFunction !== undefined) {
                            closeFunction();
                        }
                    }
                });
            }
            $("#dialog-modal").html(msg).dialog('open');
        }


  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data relates to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data. When the client is launched, it connects to the server and gets the data from the server to display in a grid view. The data gets updated in real time on the server, and the view in the client automatically gets refreshed. There are two problems with the current implementation:

    1. When the database gets too big, it takes a lot of time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side. How would I best implement such a cache?

    2. How can the server maintain a diff so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.
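
    For the diff part, one common shape is for the server to stamp every inserted or updated row with a monotonically increasing change counter, and for each client to remember the highest counter it has seen and ask only for newer rows. A rough C# sketch of the client side is below; the AccessRecord fields, the IServerApi interface, and the change counter column it implies are all invented for illustration (SQLite has no rowversion type, so the server would have to maintain the counter itself).

        using System;
        using System.Collections.Generic;

        public class AccessRecord
        {
            public long ChangeId { get; set; }    // incremented by the server on every insert/update
            public string Path { get; set; }
            public DateTime LastAccess { get; set; }
        }

        public interface IServerApi
        {
            IEnumerable<AccessRecord> GetChangesSince(long changeId);
        }

        public class ClientCache
        {
            private readonly Dictionary<string, AccessRecord> rows = new Dictionary<string, AccessRecord>();
            private long lastSeenChangeId;

            public void Refresh(IServerApi server)
            {
                // Only the rows changed since the previous refresh cross the wire.
                foreach (var record in server.GetChangesSince(lastSeenChangeId))
                {
                    rows[record.Path] = record;
                    if (record.ChangeId > lastSeenChangeId)
                        lastSeenChangeId = record.ChangeId;
                }
            }
        }

    The same counter helps with problem 1 as well: the client can persist its cache plus the last counter locally and pull only the delta at start-up instead of the whole table.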


  • Implementing a bitfield using java enums

    - by soappatrol
    Hello, I maintain a large document archive, and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants such as:

        static int DOCUMENT_STATUS_NO_STATE = 0
        static int DOCUMENT_STATUS_OK = 1
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4

    This makes it pretty easy to indicate the state a document is in by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to transform the enumeration constants to a numeric value. Below is my first approach, and I wonder if this is the correct way to go about it.

        class DocumentStatus {

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue;

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue
                }

                public long getStatusFlagValue() {
                    return statusFlagValue
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums
             * @param numeric statusValue
             * @return EnumSet representing a documents status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet statusFlags = EnumSet.noneOf(StatusFlag.class)
                StatusFlag.each { statusFlag ->
                    long flagValue = statusFlag.statusFlagValue
                    if ((flagValue & statusValue) == flagValue) {
                        statusFlags.add(statusFlag)
                    }
                }
                return statusFlags
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code
             * @param Set of statusFlags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0
                flags.each { statusFlag ->
                    value |= statusFlag.getStatusFlagValue()
                }
                return value
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus();
                Set statusFlags = EnumSet.of(
                    StatusFlag.DOCUMENT_STATUS_OK,
                    StatusFlag.DOCUMENT_STATUS_UNAVAILABLE)
                assert ds.getStatusValue(statusFlags) == 258 // 0000.0001|0000.0010

                long numericStatusCode = 56
                statusFlags = ds.getStatusFlags(numericStatusCode)
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE)
            }
        }


  • Entity Framework DateTime update extremely slow

    - by Phyxion
    I have this situation currently with Entity Framework:

        using (TestEntities dataContext = DataContext)
        {
            UserSession session = dataContext.UserSessions.FirstOrDefault(userSession => userSession.Id == SessionId);
            if (session != null)
            {
                session.LastAvailableDate = DateTime.Now;
                dataContext.SaveChanges();
            }
        }

    This is all working perfectly, except for the fact that it is terribly slow compared to what I expect (14 calls per second, tested with 100 iterations). When I update this record manually through this command:

        dataContext.Database.ExecuteSqlCommand(String.Format("update UserSession set LastAvailableDate = '{0}' where Id = '{1}'", DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"), SessionId));

    I get 55 calls per second, which is more than fast enough. However, when I don't update session.LastAvailableDate but instead update an integer (e.g. session.UserId) or a string with Entity Framework, I get 50 calls per second, which is also more than fast enough. Only the DateTime field is terribly slow. The difference of a factor of 4 is unacceptable, and I was wondering how I can improve this, as I would prefer not to use direct SQL when I can also use Entity Framework. I'm using Entity Framework 4.3.1 (I also tried 4.1).
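
    The post doesn't show the SQL that EF generates for the slow case, so this is a guess rather than a diagnosis, but one cheap thing to rule out is a mismatch between the parameter type EF sends for the DateTime property and the column's actual SQL type, which can force a conversion on every update. Below is a sketch of pinning the mapping with the Code First fluent API, assuming TestEntities is a Code First DbContext (with an EDMX, the same is set on the column's facets in the model).

        using System.Data.Entity;

        public class TestEntities : DbContext
        {
            public DbSet<UserSession> UserSessions { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Entity<UserSession>()
                            .Property(s => s.LastAvailableDate)
                            .HasColumnType("datetime2");   // match whatever the column really is; use "datetime" if that's the table's type
            }
        }

    Comparing the statements for the DateTime update and the integer update in SQL Profiler would confirm or rule this out quickly.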


  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone there suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of required custom fields is known, but which kind of attribute is mapped to each column is up to the user. Here is a hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record, and each laptop's attributes are defined by the admin who creates the laptop. One user created a laptop product, say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example roughly simulates our requirement and our design. Now, if somebody is looking up lap2 records doing a comparison on field2, we need to apply TO_NUMBER when querying:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases when the query plan decides to apply TO_NUMBER before the other filter. Questions: Is this a valid design? What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases; is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense in the question; sorry for such a long text.


  • Problem with DeleteObject() in Entity Framework 4

    - by Tom
    I have two entities:

        public class User : Entity
        {
            public virtual string Username { get; set; }
            public virtual string Email { get; set; }
            public virtual string PasswordHash { get; set; }
            public virtual List<Photo> Photos { get; set; }
        }

    and

        public class Photo : Entity
        {
            public virtual string FileName { get; set; }
            public virtual string Thumbnail { get; set; }
            public virtual string Description { get; set; }
            public virtual int UserId { get; set; }
            public virtual User User { get; set; }
        }

    When I try to delete a photo, it works fine for the DB (the record gets removed from the table), but it doesn't for the Photos collection in the User entity (I can still see that photo in user.Photos). This is my code; what am I doing wrong here? Changing entity properties works fine: when I change a photo's FileName, for example, it gets updated in the DB and in user.Photos.

        var photo = SelectById(id);
        Context.DeleteObject(photo);
        Context.SaveChanges();
        var user = GetUser(userName);
        // the photo I have just deleted is still in user.Photos

    I also tried this, but I get the same results:

        var photo = user.Photos.First();
        Context.DeleteObject(photo);
        Context.SaveChanges();
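
    Two things worth trying are sketched below. Both assume Context, SelectById, and GetUser use the same ObjectContext instance as in the question, and neither is a confirmed diagnosis, since the post doesn't show how GetUser is implemented.

        // 1) Keep the in-memory graph consistent: detach the photo from its owner's collection
        //    in addition to marking it deleted (photo.User must be loaded for this to work).
        var photo = SelectById(id);
        if (photo.User != null)
        {
            photo.User.Photos.Remove(photo);
        }
        Context.DeleteObject(photo);
        Context.SaveChanges();

        // 2) If GetUser hands back an entity that was materialized before the delete, reload the
        //    navigation property from the database instead of trusting the cached collection.
        //    MergeOption lives in System.Data.Objects.
        var user = GetUser(userName);
        Context.LoadProperty(user, u => u.Photos, MergeOption.OverwriteChanges);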


  • rails 4 -- working with js format from ajax

    - by user101289
    I'm still working on learning Rails, and I have a page with team information that gets updated based on a click on a team's icon, which fires an Ajax call to the controller to populate some tabs. I've read some good info about how to use format.js in the controller to render a partial from a js.coffee or js.erb file. The problem I'm running into is in the CoffeeScript, I think. Right now, I'm getting some data called @schedules from the controller and passing it to a schedule.js.coffee file that should populate a partial for each record returned and attach it to a table.

        # schedule.js.coffee
        $.each @schedules, (schedule) ->
          ($ '#schedule_data').append("<%= j render(partial: 'schedules/schedule', locals: { s: schedule }) %>")

    This throws an error:

        undefined local variable or method `schedule' for #<#<Class:0x007fe535cd2900>:0x007fe535d32a30>

    I tried simplifying the CoffeeScript to just log the output:

        $.each @schedules, (schedule) ->
          console.log(schedule)

    but this prints nothing. Am I missing something? I am very inexperienced with CoffeeScript, but it seems like I should be getting some data; I verified that the schedule items do exist for this team.

