Search Results

Search found 6630 results on 266 pages for 'cname record'.

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and books but can't seem to get my head wrapped around this. My task is to read data from two text files and load that data into an existing MS Access 2007 database. Here is what I'm trying to do: read the first text file and, for every line of data, add a row to a DataTable using CarID as my unique field; read the second text file and look for the existing CarID in the DataTable, updating that row if it exists and adding a new row if it doesn't; once I'm done, push the contents of the DataTable to the database. What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it; gets dd, dc, and carID
        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields
        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    1. In constructing my table I use "SELECT * FROM tblCars", but what if that table already has millions of records? Isn't that a waste of resources? Should I be trying something different if I want to update with new records?
    2. Once I'm done with the first text file, I move on to the second. What's the best approach here: first look for an existing record based on CarNum, or create a second table and merge the two at the end?
    3. Finally, when the DataTable is fully populated and I push it to the database, I want records that already exist with the three primary fields (DriveDate, DCode, and CarNum) to be updated with the new fields, and records that don't exist to be appended. Is that possible with my process?

    tia, AGP
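
    One way to avoid pulling the whole table down (an untested sketch, reusing the same conn and column names): fill only the schema, key the DataTable on the three primary fields, and find-or-add rows while parsing both files.

        ' The WHERE clause returns zero rows, so only the schema is fetched.
        Dim da As New OleDb.OleDbDataAdapter("SELECT * FROM tblCars WHERE 1 = 0", conn)
        Dim cb As New OleDb.OleDbCommandBuilder(da)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")

        Dim tbl As DataTable = ds.Tables("CarData")
        tbl.PrimaryKey = New DataColumn() {tbl.Columns("DriveDate"), tbl.Columns("DCode"), tbl.Columns("CarNum")}

        ' For each parsed line (from either text file):
        Dim row As DataRow = tbl.Rows.Find(New Object() {dd, dc, carID})
        If row Is Nothing Then
            row = tbl.NewRow()
            row("DriveDate") = dd
            row("DCode") = dc
            row("CarNum") = carID
            tbl.Rows.Add(row)
        End If
        ' ...set or overwrite the remaining fields on row...

        da.Update(ds, "CarData")

    This merges the two files in memory. Deciding between an INSERT and an UPDATE against rows already in the database would still need a keyed lookup first (for example, filling the table with only the date range being loaded) so that Update() issues UPDATEs for existing keys rather than failing on duplicates.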

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController(stateService);
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            stateController = new StateController(stateService);
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Is this overkill? Do I need to check to see if the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
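
    A lighter-weight alternative (a sketch assuming Moq and an IStateService interface, neither of which is shown in the question): mock the service, assert only the controller's own behaviour, and leave "did it really get deleted" to the service and repository tests you already have.

        // using Moq; using Microsoft.VisualStudio.TestTools.UnitTesting;
        [TestMethod]
        public void Delete_Post_Calls_Service_And_Redirects_To_Index()
        {
            // Arrange
            var service = new Mock<IStateService>();
            var controller = new StateController(service.Object);
            var model = new State { Id = 4 };

            // Act
            var result = controller.Delete(model) as RedirectToRouteResult;

            // Assert
            service.Verify(s => s.Delete(model), Times.Once());
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }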

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row into a table named 'alarms' whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged or switched off). I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
            INSERT INTO `alarm_display` (Tag,...,OffTime)
            VALUES (new.Tag,...,new.OffTime)
            ON DUPLICATE KEY UPDATE OnDate=new.OnDate,...,OffTime=new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
            DELETE FROM alarms WHERE Tag=new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
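
    For what it's worth, MySQL does not let a trigger modify the table the trigger itself is defined on, so an AFTER INSERT trigger on alarms can never delete from alarms (attempting it raises error 1442 at insert time). A workaround sketch (not from the question): treat alarms as a staging table and purge it on a schedule instead.

        -- requires the event scheduler to be running
        SET GLOBAL event_scheduler = ON;

        CREATE EVENT `purge_alarms`
            ON SCHEDULE EVERY 1 MINUTE
            DO DELETE FROM alarms;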

  • jQuery: Find an XML element based on the value of one of its children

    - by NateD
    I'm working on a simple XML phonebook app to learn jQuery, and I can't figure out how to do something like this: when the user enters the first name of a contact in a textbox, I want to find the entire record of that person. The XML looks like this:

        <phonebook>
          <person>
            <number>555-5555</number>
            <first_name>Evelyn</first_name>
            <last_name>Remington</last_name>
            <address>Edge of the Abyss</address>
            <image>path/to/image</image>
          </person>
          <person>
            <number>+34 1 6444 333 2223230</number>
            <first_name>Max</first_name>
            <last_name>Muscle</last_name>
            <address>Mining Belt</address>
            <image>path/to/image</image>
          </person>
        </phonebook>

    and the best I've been able to do with the jQuery is something like this:

        var myXML;
        function searchXML(){
            $.ajax({
                type: "GET",
                url: "phonebook.xml",
                dataType: "xml",
                success: function(xml){ myXML = $("xml").find("#firstNameBox").val(); }
            });
        }

    What I want it to do is return the entire <person> element so I can iterate through and display all that person's information. Any help would be appreciated.
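
    A sketch of one way to do the lookup (assuming the textbox has id firstNameBox): filter the <person> nodes on the text of their <first_name> child, then walk each match's children.

        function searchXML() {
            $.ajax({
                type: "GET",
                url: "phonebook.xml",
                dataType: "xml",
                success: function (xml) {
                    var name = $("#firstNameBox").val();
                    $(xml).find("person").filter(function () {
                        return $(this).children("first_name").text() === name;
                    }).each(function () {
                        // the whole <person> record is available here
                        var number = $(this).children("number").text();
                        var last = $(this).children("last_name").text();
                        console.log(name + " " + last + ": " + number);
                    });
                }
            });
        }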

  • Object/class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory info. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And then there is: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like a precompiled LIB or DLL? During the DOS ages I used assembly every now and then. As far as I remember from Delphi and the following site (answer by dan04): Are methods also serialized along with the data members in .NET? - sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). Also, a nice C example is given there with data and members declared in one class/struct, but at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations like Pascal's VMT may be different in memory.
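
    A tiny C++ illustration of the sizeof point (not from the post): non-virtual member functions live once in the code segment, so they add nothing to the object, and hence nothing to its serialized form.

        #include <iostream>

        struct Plain       { double x, y; };
        struct WithMethods {
            double x, y;
            double norm() const { return x * x + y * y; }  // stored once, not per object
        };

        int main() {
            // prints the same size twice on typical compilers: data members only
            std::cout << sizeof(Plain) << ' ' << sizeof(WithMethods) << '\n';
        }

    A virtual function does change the picture: it adds a vtable pointer to each object (Pascal's VMT is the same idea), but the function bodies themselves are still never part of the object.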

  • Summarising grouped records in a dataframe in R

    - by monch1962
    Hello all, I have a data frame in R that looks like this:

        TimeOffset  Source  Length
        0           1       1500
        0.1         1       1000
        0.2         1       50
        0.4         2       25
        0.6         2       3
        1.1         1       1500
        1.4         1       18
        1.6         2       2500
        1.9         2       18
        2.1         1       37
        ...

    and I want to convert it to:

        TimeOffset  Source  Length
        0.2         1       2550
        0.6         2       28
        1.4         1       1518
        1.9         2       2518
        ...

    Trying to put this into English: I want to group consecutive records with the same 'Source' together, then print out a single record per group showing the highest time offset in that group, the source, and the sum of the lengths in that group. The TimeOffset values will always increase. I suspect this is possible in R, but I really don't know where to start. In a pinch I could export the data frame and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.
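
    A minimal base-R sketch (assuming the frame is called df with those three columns): run-length encode Source to number the consecutive runs, then summarise per run.

        r   <- rle(df$Source)
        grp <- rep(seq_along(r$lengths), r$lengths)
        data.frame(TimeOffset = tapply(df$TimeOffset, grp, max),
                   Source     = r$values,
                   Length     = tapply(df$Length, grp, sum))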

  • How do I serialise a graph in Java without getting StackOverflowException?

    - by Tim Cooper
    I have a graph structure in Java ("graph" as in "edges and nodes") and I'm attempting to serialise it. However, I get "StackOverflowException", despite significantly increasing the JVM stack size. I did some googling, and apparently this is a well-known limitation of Java serialisation: it doesn't work for deeply nested object graphs such as long linked lists, because it uses a stack record for each link in the chain and doesn't do anything clever such as a breadth-first traversal, so you very quickly get a stack overflow. The recommended solution is to customise the serialisation code by overriding readObject() and writeObject(), but this seems a little complex to me.

    (It may or may not be relevant, but I'm storing a bunch of fields on each edge in the graph, so I have a class JuNode which contains a member ArrayList<JuEdge> links; i.e. there are two classes involved, rather than plain object references from one node to another. It shouldn't matter for the purposes of the question.)

    My question is threefold:

    (a) Why don't the implementors of Java rectify this limitation, or are they already working on it? (I can't believe I'm the first person to ever want to serialise a graph in Java.)
    (b) Is there a better way? Is there some drop-in alternative to the default serialisation classes that does it in a cleverer way?
    (c) If my best option is to get my hands dirty with low-level code, does someone have an example of graph serialisation Java source code that I can use to learn how to do it?
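
    On (c), a rough self-contained sketch of the usual workaround (stand-in classes, not your JuNode/JuEdge): flatten the graph to a node list plus an edge index list, so nothing recurses link-by-link. Marking the links transient keeps default serialisation from walking them.

        import java.io.*;
        import java.util.*;

        class Node implements Serializable {
            String data;                                     // per-node fields
            transient List<Node> links = new ArrayList<>();  // skipped by default serialisation

            Node(String data) { this.data = data; }
        }

        class GraphIO {
            static void write(ObjectOutputStream out, List<Node> nodes) throws IOException {
                Map<Node, Integer> idx = new IdentityHashMap<>();
                for (int i = 0; i < nodes.size(); i++) idx.put(nodes.get(i), i);
                out.writeObject(new ArrayList<>(nodes));     // flat list: bounded stack depth
                List<int[]> edges = new ArrayList<>();
                for (Node n : nodes)
                    for (Node t : n.links)
                        edges.add(new int[] { idx.get(n), idx.get(t) });
                out.writeObject(edges);                      // edges as index pairs
            }

            @SuppressWarnings("unchecked")
            static List<Node> read(ObjectInputStream in) throws IOException, ClassNotFoundException {
                List<Node> nodes = (List<Node>) in.readObject();
                for (Node n : nodes) n.links = new ArrayList<>();
                for (int[] e : (List<int[]>) in.readObject())
                    nodes.get(e[0]).links.add(nodes.get(e[1]));
                return nodes;
            }
        }

    Edge payload fields would ride along as a third parallel list.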

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [runs on Linux]. It's very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O reads block for requests of size more than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seemed to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes providing this abstraction already (something like BufferedInputStream in Java)?
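
    One cheap experiment (a sketch, not a guaranteed fix): hand the fstream a large buffer before opening it, so the library issues bigger underlying read() calls than its small default.

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> buf(1 << 20);                 // 1 MiB stream buffer
            std::ifstream in;
            in.rdbuf()->pubsetbuf(buf.data(), buf.size());  // must come before open()
            in.open("runs.dat", std::ios::binary);

            char record[128];                               // fixed-size records
            while (in.read(record, sizeof record)) {
                // feed the record to the k-way merge...
            }
        }

    Beyond that, the usual next step on Linux is posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL) on the underlying descriptor, or a dedicated reader thread per run, which is roughly what BufferedInputStream-style classes would give you.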

  • OO model for NSXMLParser when the delegate is not self

    - by richard
    Hi, I am struggling with the correct design for the delegates of NSXMLParser. In order to build my table of Foos, I need to make two types of web service calls: one for the whole table and one for each row. It's essentially a master query then a detail query, except the master-query result XML doesn't return enough information, so I then need to query the detail for each row. I'm not dealing with enormous amounts of data. Anyway, previously I've just used:

        NSXMLParser *parser = [[NSXMLParser alloc] init];
        [parser setDelegate:self];
        [parser parse];

    and implemented all the appropriate delegate methods in whatever class I'm in. In an attempt at cleanliness, I've now created two separate delegate classes and done something like:

        NSXMLParser *xp = [[NSXMLParser alloc] init];
        MyMasterXMLParserDelegate *masterParserDelegate = [[MyMasterXMLParserDelegate alloc] init];
        [xp setDelegate:masterParserDelegate];
        [xp parse];

    In addition to being cleaner (in my opinion, at least), it also means each of the -parser:didStartElement: implementations doesn't spend most of its time trying to figure out which XML it is parsing. So now the real crux of the problem: before I split out the delegates, the main class that was also implementing the delegate methods had a class-level NSMutableArray that I would just put my objects-created-from-xml in when -parser:didEndElement: found the end of each record. Now that the delegates are in separate classes, I can't figure out how to have the -parser:didEndElement: in the 'detail' delegate class "return" the created object to the calling class. At least, not in a clean OO way. I'm sure I could do it with all sorts of nasty class methods. Does the question make sense? Thanks.
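
    One clean arrangement (a sketch with invented names): since -parse is synchronous, give the detail delegate a result property that the caller reads back once parse returns.

        @interface MyDetailXMLParserDelegate : NSObject {
            NSMutableArray *parsedFoos;   // filled in -parser:didEndElement:... as records close
        }
        - (NSArray *)parsedFoos;
        @end

        // caller:
        MyDetailXMLParserDelegate *detail = [[MyDetailXMLParserDelegate alloc] init];
        NSXMLParser *xp = [[NSXMLParser alloc] initWithData:xmlData];
        [xp setDelegate:detail];
        [xp parse];                              // blocks until the document is parsed
        NSArray *foos = [detail parsedFoos];     // the "returned" objects
        [xp release];
        [detail release];

    The alternative is to hand the delegate a target to notify (a delegate of the delegate) for each finished object, but for a synchronous parse the property is usually enough.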

  • Summarising grouped records in a dataframe in R (...again)

    - by monch1962
    Hello all, (I tried to ask this question earlier today, but later realised I had over-simplified it; the answers I received were correct, but I couldn't use them because of the over-simplification in the original question. Here's my second attempt...)

    I have a data frame in R that looks like:

        "Timestamp", "Source", "Target", "Length", "Content"
        0.1, P1, P2, 5, "ABCDE"
        0.2, P1, P2, 3, "HIJ"
        0.4, P1, P2, 4, "PQRS"
        0.5, P2, P1, 2, "ZY"
        0.9, P2, P1, 4, "SRQP"
        1.1, P1, P2, 1, "B"
        1.6, P1, P2, 3, "DEF"
        2.0, P2, P1, 3, "IJK"
        ...

    and I want to convert this to:

        "StartTime", "EndTime", "Duration", "Source", "Target", "Length", "Content"
        0.1, 0.4, 0.3, P1, P2, 12, "ABCDEHIJPQRS"
        0.5, 0.9, 0.4, P2, P1, 6, "ZYSRQP"
        1.1, 1.6, 0.5, P1, P2, 4, "BDEF"
        ...

    Trying to put this into English: I want to group consecutive records with the same 'Source' and 'Target' together, then print out a single record per group showing the StartTime, EndTime and Duration (= EndTime - StartTime) for that group, along with the sum of the Lengths for that group and a concatenation of the Content (which will all be strings) in that group. The Timestamp values will always increase throughout the data frame. I had a look at melt/recast and have a feeling it could be used to solve the problem, but I couldn't get my head around the documentation. I suspect it's possible to do this within R, but I really don't know where to start. In a pinch I could export the data frame and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.
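
    A base-R sketch (assuming the frame is df with the columns shown): a new group starts whenever Source or Target differs from the previous row, so number the runs with cumsum and collapse each one.

        key <- paste(df$Source, df$Target)
        grp <- cumsum(c(TRUE, key[-1] != key[-length(key)]))
        do.call(rbind, lapply(split(df, grp), function(g)
            data.frame(StartTime = min(g$Timestamp),
                       EndTime   = max(g$Timestamp),
                       Duration  = max(g$Timestamp) - min(g$Timestamp),
                       Source    = g$Source[1],
                       Target    = g$Target[1],
                       Length    = sum(g$Length),
                       Content   = paste(g$Content, collapse = ""))))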

  • MongoDB complex MapReduce of video logs

    - by Justin Hourigan
    I have a dataset from video streaming logs. Each video is identified by a FileGUID. The log entries record the FileGUID, the fragment of the video watched and the bandwidth it was watched at. I would like to create a mapreduce outputting, for each video, a count for fragments, both total and for each bandwidth. Ideally it would look like:

        {
            "FileGUID": "50acb3a5796634df0e073285",
            {
                "1": { "total": 76, "0832": 34, "1028": 42 },
                "2": { "total": 42, "0832": 28, "1028": 14 },
                ...
            }
        }

    Is this possible with one mapreduce, or is it a multi-step process, or should I use a different method? Here is a sample of the data:

        {
            "_id": ObjectId("50acb3a5796634df0e073285"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:57.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(1028),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(237),
            "Status": NumberInt(200),
            "Size": NumberInt(576790),
            "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073284"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:52.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(1028),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(236),
            "Status": NumberInt(200),
            "Size": NumberInt(577100),
            "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073283"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:47.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(0832),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(234),
            "Status": NumberInt(200),
            "Size": NumberInt(576664),
            "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073282"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:42.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(0832),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(233),
            "Status": NumberInt(200),
            "Size": NumberInt(575692),
            "UserAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0"
        }
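
    One shape this could take in a single pass (a sketch; the collection name logs is invented, and the output is keyed per (file, fragment) pair rather than nested per file, so a short post-processing step or a finalize function would be needed to get exactly the nesting above):

        // map: one count per log entry, bucketed under the bandwidth it was watched at
        var map = function () {
            var v = { total: 1 };
            v[this.Bandwidth] = 1;
            emit({ file: this.FileGUID, fragment: this.Fragment }, v);
        };

        // reduce: sum the per-bandwidth counters; safe to re-reduce because the
        // value shape is the same going in and coming out
        var reduce = function (key, values) {
            var out = { total: 0 };
            values.forEach(function (v) {
                for (var k in v) out[k] = (out[k] || 0) + v[k];
            });
            return out;
        };

        db.logs.mapReduce(map, reduce, { out: "fragment_counts" });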

  • Can I get the amount of time for which a key is pressed on a keyboard

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, like hold time (the time for which a particular key is pressed) and digraph time (the time it takes to press a different key). Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like the hold times for the above password being:

        C -- 200ms
        O -- 130ms
        M -- 150ms
        P -- 175ms
        U -- 320ms
        T -- 230ms
        E -- 120ms
        R -- 300ms

    The rationale behind this is that every user will have a different hold time. Say an old person is typing the password: he will take more time than a student, and it will be unique to a particular person. To do this project, I need to record the time for each key pressed. I would greatly appreciate it if anyone can guide me on how to get these times. Edit: language is not important, but I would prefer C. I am more interested in getting the dataset.
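
    Since C is preferred: on Windows the key-down and key-up messages carry their own timestamps, so their difference is the hold time. A bare sketch of a window procedure (assuming a standard message loop around it; WM_KEYDOWN auto-repeats, so only the first one per press is recorded):

        #include <windows.h>
        #include <stdio.h>

        static DWORD downTime[256];  /* indexed by virtual-key code */

        LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg) {
            case WM_KEYDOWN:
                if (!(lParam & (1u << 30)))          /* bit 30 set => auto-repeat */
                    downTime[wParam & 0xFF] = (DWORD)GetMessageTime();
                return 0;
            case WM_KEYUP:
                printf("key %c held %ld ms\n", (int)(wParam & 0xFF),
                       GetMessageTime() - (LONG)downTime[wParam & 0xFF]);
                return 0;
            case WM_DESTROY:
                PostQuitMessage(0);
                return 0;
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }

    Digraph time falls out of the same data: the WM_KEYDOWN timestamp of one key minus the WM_KEYUP (or WM_KEYDOWN) timestamp of the previous key.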

  • Entity Framework DateTime update extremely slow

    - by Phyxion
    I have this situation currently with Entity Framework:

        using (TestEntities dataContext = DataContext)
        {
            UserSession session = dataContext.UserSessions.FirstOrDefault(userSession => userSession.Id == SessionId);
            if (session != null)
            {
                session.LastAvailableDate = DateTime.Now;
                dataContext.SaveChanges();
            }
        }

    This is all working perfectly, except for the fact that it is terribly slow compared to what I expect (14 calls per second, tested with 100 iterations). When I update this record manually through this command:

        dataContext.Database.ExecuteSqlCommand(String.Format(
            "update UserSession set LastAvailableDate = '{0}' where Id = '{1}'",
            DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"),
            SessionId));

    I get 55 calls per second, which is more than fast enough. However, when I don't update session.LastAvailableDate but instead update an integer (e.g. session.UserId) or a string with Entity Framework, I get 50 calls per second, which is also more than fast enough. Only the datetime field is terribly slow. The difference of a factor of 4 is unacceptable, and I was wondering how I can improve this, as I'd prefer not to use direct SQL when I can also use Entity Framework. I'm using Entity Framework 4.3.1 (also tried 4.1).
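
    One change worth measuring (a sketch, assuming TestEntities is a DbContext and Id is the key): attach a stub entity and mark only the one property modified, which drops the initial SELECT and makes the EF path closer to the raw SQL one. It does not explain why only the datetime column is slow, but it removes a round trip.

        using (TestEntities dataContext = DataContext)
        {
            var session = new UserSession { Id = SessionId };
            dataContext.UserSessions.Attach(session);
            session.LastAvailableDate = DateTime.Now;
            // only this column is included in the generated UPDATE
            dataContext.Entry(session).Property(s => s.LastAvailableDate).IsModified = true;
            dataContext.SaveChanges();
        }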

  • How to create a platform on top of CSLA? <-- in case it makes sense

    - by Peejay
    Hi all! Here is the scenario: I'm developing an application using CSLA 3.8 / C#.NET, and the application will have different modules. It's like an ERP; it will have accounting, daily time record (DTR), recruitment, etc. as modules. Now the requirement is to check for common entities per module and build a "platform" (<- the boss calls it that way) from them. For example, DTR will have an entity "Employee" and Recruitment will have "Applicant", so one common entity you can derive both from, which can be put in the platform, is "Person". "Person" will contain typical info like name, address, contact info, etc. I know it sounds like OOP 101. The thing is, I don't know how to go about it. How I wish it were just simple inheritance, but the requirement is to create an API of some sort to be used by the modules, using CSLA. In CSLA you create smart objects inheriting from the base classes of CSLA like BusinessListBase, ReadOnlyListBase, etc., right? What if, for example, I created a BusinessBase Applicant class? It would have properties like salary demand, availability date, etc. Now for the personal info I would need the "Person" from the "platform" and implement it in the Applicant class.

    So in summary I have several questions:

    1. How do I create such a platform?
    2. If such a platform is possible, how would it be implemented in each module's entities? (I'm already inheriting from the base classes of CSLA.)
    3. If 1 and 2 are possible, does it have advantages for the development and maintenance of the app?

    The reason I'm asking #3 is that, the way I see it, even if I am able to create such a platform, I would still need to define the platform entity's properties on my module entities in order to have validation and all. I'm sorry if I'm typing nonsense; I'm really confused. Hope someone can enlighten me. Thank you all!

  • Implementing a bitfield using Java enums

    - by soappatrol
    Hello, I maintain a large document archive, and I often use bit fields to record the status of my documents during processing or validation. My legacy code simply uses static int constants such as:

        static int DOCUMENT_STATUS_NO_STATE = 0;
        static int DOCUMENT_STATUS_OK = 1;
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2;
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4;

    This makes it pretty easy to indicate the state a document is in by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to transform the enumeration constants to a numeric value. Below is my first approach, and I wonder if this is the correct way to go about it:

        class DocumentStatus {

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue
                }

                public long getStatusFlagValue() {
                    return statusFlagValue
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums
             * @param statusValue numeric status code
             * @return EnumSet representing a document's status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet statusFlags = EnumSet.noneOf(StatusFlag.class)
                StatusFlag.values().each { statusFlag ->
                    long flagValue = statusFlag.statusFlagValue
                    if ((flagValue & statusValue) == flagValue) {
                        statusFlags.add(statusFlag)
                    }
                }
                return statusFlags
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code
             * @param flags set of status flags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0
                flags.each { statusFlag ->
                    value |= statusFlag.getStatusFlagValue()
                }
                return value
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus()

                Set statusFlags = EnumSet.of(
                        StatusFlag.DOCUMENT_STATUS_OK,
                        StatusFlag.DOCUMENT_STATUS_UNAVAILABLE)
                assert ds.getStatusValue(statusFlags) == 258 // 0000.0001|0000.0010

                long numericStatusCode = 56
                statusFlags = ds.getStatusFlags(numericStatusCode)
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE)
            }
        }
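
    For comparison, a leaner all-Java variant (a sketch with shortened names, not a drop-in replacement): derive each flag's bit from its ordinal so the shift amounts cannot drift out of sync. The trade-off is that persisted values then depend on declaration order, so new constants must only ever be appended.

        import java.util.EnumSet;
        import java.util.Set;

        enum StatusFlag {
            NOT_DEFINED, OK, MISSING_TID_DIR, MISSING_TIF_FILE, MISSING_PDF_FILE,
            MISSING_OCR_FILE, PAGE_COUNT_TIF, PAGE_COUNT_PDF, UNAVAILABLE;

            long bit() { return 1L << ordinal(); }

            static long toValue(Set<StatusFlag> flags) {
                long v = 0;
                for (StatusFlag f : flags) v |= f.bit();
                return v;
            }

            static EnumSet<StatusFlag> fromValue(long value) {
                EnumSet<StatusFlag> flags = EnumSet.noneOf(StatusFlag.class);
                for (StatusFlag f : values())
                    if ((value & f.bit()) != 0) flags.add(f);
                return flags;
            }
        }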

  • Database abstraction/adapters for Ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way; also reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types, and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records of the table being enumerated first

    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?
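
    For reference, a few of these look like the following in Sequel (a sketch; the exact methods vary by Sequel version, so treat it as a pointer to the docs rather than gospel):

        require 'sequel'

        DB = Sequel.connect('postgres://localhost/mydb')

        # raw value rows as arrays rather than { :col => val } hashes
        rows = DB[:some_schema__some_table].select_map([:col1, :col2, :col3])

        # reflection: column names and database storage types
        DB.schema(:some_table).each { |name, info| puts "#{name}: #{info[:db_type]}" }

        # table creation (no-op if it already exists)
        DB.create_table?(:scratch) do
          primary_key :id
          String :label
        end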

  • Why does this ASP.NET MVC unit test fail?

    - by Brian McCord
    I have this unit test:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Here are the two controller actions that handle deleting:

        //
        // GET: /State/Delete/5

        public ActionResult Delete(int id)
        {
            var x = _stateService.GetById(id);
            return View(x);
        }

        //
        // POST: /State/Delete/5

        [HttpPost]
        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    What I can't figure out is why this test fails. I have verified that the record actually gets deleted from the list. If I set a break point in the Delete method on the line:

        var x = _stateService.GetById(id);

    the GetById does indeed return a null just as it should, but when it gets back to the newresult variable in the test, the ViewData.Model is the deleted model. What am I doing wrong?
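
    A likely culprit (judging from the ASP.NET MVC source): View(model) only assigns ViewData.Model when the model passed in is non-null, so the View(x) call with x == null leaves the model from the very first Delete(4) sitting on the reused controller. A sketch of the same test with the shared state removed (GetService() stands in for however the fixture builds the fake service):

        var service = GetService();
        var controller = new StateController(service);
        var model = (State)((ViewResult)controller.Delete(4)).ViewData.Model;

        controller.Delete(model);                       // POST: deletes the record

        var freshController = new StateController(service);
        var after = (ViewResult)freshController.Delete(4);
        Assert.IsNull(after.ViewData.Model);            // no stale Model left to mask the null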

  • Ideas on implementing the following entry form in ASP.NET MVC 2

    - by KiD0M4N
    Hi guys, I have a very simple data entry form to implement. It looks like this: [mockup image]. Obviously I have mocked out the actual requirements, but the essence is similar:

    - Entering a name and clicking History should bring up a popup pointing to the URL '/student/viewhistory/{name}'.
    - Name and Age are required fields.
    - The sub-form (in the mockup) with Class (a dropdown containing the numbers 1-10) and Subject (containing A-D, say) forms a unique pair of values for which a score is required. So, selecting the Class and Subject, entering a score and clicking Add should 'add' this record for the student. Then the user should be able to click Save (to persist the entry to the database) or add further (class, subject, score) tuples to the entry.

    Any ideas how to smartly implement this? (I am coming from a DWH field, so thinking in a stateless manner is slightly foreign to me.) Any help is appreciated. I would imagine a smart use of jQuery would give an easy solution. Regards, Karan
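
    One stateless way to handle the Add sub-form (a sketch; every id here is invented): keep the added (class, subject, score) rows client-side as hidden inputs named with indexes, which MVC 2's default model binder will bind to a list on postback.

        var row = 0;
        $('#add').click(function () {
            var cls = $('#class').val(), subj = $('#subject').val(), score = $('#score').val();
            $('#scores').append(
                '<li>Class ' + cls + ', Subject ' + subj + ': ' + score +
                '<input type="hidden" name="scores[' + row + '].ClassId" value="' + cls + '"/>' +
                '<input type="hidden" name="scores[' + row + '].Subject" value="' + subj + '"/>' +
                '<input type="hidden" name="scores[' + row + '].Score" value="' + score + '"/>' +
                '</li>');
            row++;
        });

    On the server, an action taking (string name, int age, IList<ScoreEntry> scores) would then receive the whole batch in one POST from the Save button.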

  • Improving efficiency of array comparison in Ruby

    - by user2985025
    Hi, I am working on Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements:

    - The project is a migration project. Data from one application is moved to another.
    - We need to compare the data from the existing application against the new one.

    Solution: I have developed a comparison engine in Ruby for the above requirement.

    a) Get the data, de-duplicated and sorted, from both DBs.
    b) Put the data in a text file with "||" as the delimiter.
    c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files.

    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1-5 are key columns. I use these key columns and compare 6 and 7, which results in a fail.

    Issue: The major issue we are facing is that if the mismatches are more than 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40%, the comparison time is OK. diff and diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time when the mismatches are more than 70% for 100,000 records or more? Thanks.
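
    With both extracts sorted by the key columns (which the description says they are), a single-pass merge walk avoids the repeated lookups that make high-mismatch runs slow. A sketch, assuming '||' delimiters and the first five fields as the key:

        KEY = 0...5

        def next_rec(io)
          line = io.gets or return nil
          line.chomp.split('||')
        end

        File.open('old.txt') do |a|
          File.open('new.txt') do |b|
            ra, rb = next_rec(a), next_rec(b)
            while ra && rb
              case ra[KEY] <=> rb[KEY]
              when 0
                puts "MISMATCH: #{ra[KEY].join('|')}" unless ra == rb
                ra, rb = next_rec(a), next_rec(b)
              when -1
                puts "ONLY IN OLD: #{ra[KEY].join('|')}"
                ra = next_rec(a)
              else
                puts "ONLY IN NEW: #{rb[KEY].join('|')}"
                rb = next_rec(b)
              end
            end
            # drain whichever file still has records here
          end
        end

    This stays O(n) regardless of the mismatch rate, so 70% mismatches cost the same as 5%.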

  • Where does java.util.logging.Logger store its log?

    - by Harry Pham
    This might be a stupid question, but I am a bit lost with the Java Logger:

        private static Logger logger = Logger.getLogger("order.web.OrderManager");
        logger.info("Removed order " + id + ".");

    Where do I see the log? Also, this quote is from the java.util.logging.Logger documentation: "On each logging call the Logger initially performs a cheap check of the request level (e.g. SEVERE or FINE) against the effective log level of the logger. If the request level is lower than the log level, the logging call returns immediately. After passing this initial (cheap) test, the Logger will allocate a LogRecord to describe the logging message. It will then call a Filter (if present) to do a more detailed check on whether the record should be published. If that passes it will then publish the LogRecord to its output Handlers."

    Does this mean that if I have three request-level logs:

        logger.log(Level.FINE, "Something");
        logger.log(Level.WARNING, "Something");
        logger.log(Level.SEVERE, "Something");

    and my log level is SEVERE, I can see all three logs, and if my log level is WARNING, then I can't see the SEVERE log; is that correct? And how do I set the log level?
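
    For what it's worth, the levels work the other way around: a logger set to WARNING publishes WARNING and SEVERE, while one set to SEVERE publishes only SEVERE. With no configuration at all, the root ConsoleHandler prints INFO and above to the console (stderr). A small sketch that makes FINE visible:

        import java.util.logging.*;

        public class LogDemo {
            public static void main(String[] args) {
                Logger logger = Logger.getLogger("order.web.OrderManager");

                // both the logger and a handler must allow FINE through
                ConsoleHandler handler = new ConsoleHandler();
                handler.setLevel(Level.FINE);
                logger.addHandler(handler);
                logger.setUseParentHandlers(false); // avoid duplicate output via the root handler
                logger.setLevel(Level.FINE);

                logger.log(Level.FINE, "Something");    // now visible
                logger.log(Level.WARNING, "Something"); // visible
                logger.log(Level.SEVERE, "Something");  // visible
            }
        }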

  • How do I send floats in window messages?

    - by yngvedh
    Hi, what is the best way to send a float in a Windows message using C++ casting operators? The reason I ask is that the approach which first occurred to me did not work. For the record, I'm using the standard Win32 function to send messages:

        PostWindowMessage(UINT nMsg, WPARAM wParam, LPARAM lParam)

    What does not work:

    - Using static_cast<WPARAM>() does not work, since WPARAM is typedef'ed to UINT_PTR and this does a numeric conversion from float to int, effectively truncating the value of the float.
    - Using reinterpret_cast<WPARAM>() does not work, since it is meant for use with pointers and fails with a compilation error.

    I can think of two workarounds at the moment:

    1. Using reinterpret_cast in conjunction with the address-of operator:

        float f = 42.0f;
        ::PostWindowMessage(WM_SOME_MESSAGE, *reinterpret_cast<WPARAM*>(&f), 0);

    2. Using a union:

        union { WPARAM wParam; float f; };
        f = 42.0f;
        ::PostWindowMessage(WM_SOME_MESSAGE, wParam, 0);

    Which of these is preferred? Are there any other, more elegant ways of accomplishing this?
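
    A third workaround worth considering (a sketch): memcpy the float's bytes in and out of the WPARAM, which avoids both the pointer cast (undefined behaviour under strict aliasing) and the union.

        #include <windows.h>
        #include <cstring>

        WPARAM PackFloat(float f)
        {
            WPARAM w = 0;
            std::memcpy(&w, &f, sizeof f);  // sizeof(float) <= sizeof(WPARAM) on Win32 and x64
            return w;
        }

        float UnpackFloat(WPARAM w)
        {
            float f;
            std::memcpy(&f, &w, sizeof f);
            return f;
        }

        // usage: ::PostMessage(hWnd, WM_SOME_MESSAGE, PackFloat(42.0f), 0);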

  • Projects integration question

    - by qkrsppopcmpt
    Another team has a legacy system that aggregates data. It is implemented as a web service using Java, SOAP, MTOM, Tomcat and Axis2. They have WSDL files defining functionality such as search, data retrieval, upload and download. Our team has a newly developed website, built with RoR and MySQL. It is a sort of social network: users can register, add friends, upload images and videos, and search data. We are required to connect the two systems. One possible solution I can think of is:

    - adding components to our website that invoke services on the aggregator
    - synchronising the website database to the aggregator

    My doubts are:

    1. How do we add such components to our website? Should the components use Java or Ruby, or an adapter from Java to Ruby? It should be possible to invoke a web service from Ruby; that is the point of web services. If so, can Ruby call the services in the WSDL directly? And how do we deal with the different data structures?
    2. How do we synchronise our database to the aggregator? I think the best way is also through web service invocation, such as upload. That means we would have to export the DB records into XML files and then write some tools to upload them. The web service project supports MTOM, so uploading large data is fine.

    Am I on the right track? Can anybody give me some hints? Thanks.
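
    On doubt 1: Ruby can call a WSDL-described SOAP service directly. A sketch using Savon, one common Ruby SOAP client (the endpoint and operation name here are invented):

        require 'savon'

        client = Savon.client(wsdl: 'http://aggregator.example.com/services/Search?wsdl')
        response = client.call(:search, message: { query: 'term' })
        puts response.body

    The different data structures then arrive as plain Ruby hashes on the RoR side, which is where the mapping code would live.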

  • jQuery replacement for JavaScript confirm

    - by dcp
    Let's say I want to prompt the user before allowing them to save a record. So let's assume I have the following button defined in the markup:

        <asp:Button ID="btnSave" runat="server" OnClick="btnSave_Click"></asp:Button>

    To force a prompt with normal JavaScript, I could wire up the client-side onclick for my save button like this (I could do this in Page_Load):

        btnSave.Attributes.Add("onclick", "return confirm('are you sure you want to save?');");

    The confirm call will block until the user actually presses one of the Yes/No buttons, which is the behavior I want. For the jQuery dialog equivalent, I tried something like the code below. But the problem is that, unlike JavaScript's confirm(), it will run all the way through this function (displayYesNoAlert) and then proceed into my btnSave_Click method on the C# side. I need a way to make it "block" until the user presses the Yes or No button, and then return true or false so that btnSave_Click will or won't be called, depending on the user's answer. Currently, I just gave up and went with JavaScript's confirm; I just wondered if there was a way to do it.

        function displayYesNoAlert(msg, closeFunction) {
            dialogResult = false;
            // create the dialog if it hasn't been instantiated
            if (!$("#dialog-modal").dialog('isOpen') !== true) {
                // add a div to the DOM that will store our message
                $("<div id=\"dialog-modal\" style='text-align: left;' title='Alert!'>").appendTo("body");
                $("#dialog-modal").html(msg).dialog({
                    resizable: true,
                    modal: true,
                    position: [300, 200],
                    buttons: {
                        'Yes': function () {
                            dialogResult = true;
                            $(this).dialog("close");
                        },
                        'No': function () {
                            dialogResult = false;
                            $(this).dialog("close");
                        }
                    },
                    close: function () {
                        if (closeFunction !== undefined) {
                            closeFunction();
                        }
                    }
                });
            }
            $("#dialog-modal").html(msg).dialog('open');
        }
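
    The usual workaround (a sketch) is to accept that jQuery UI dialogs cannot block: cancel the first click, show the dialog, and replay the postback from the Yes button.

        $('#<%= btnSave.ClientID %>').click(function (e) {
            var btn = this;
            if ($(btn).data('confirmed')) return true;    // second pass: allow the postback
            e.preventDefault();
            $('<div>Are you sure you want to save?</div>').dialog({
                modal: true,
                buttons: {
                    'Yes': function () {
                        $(this).dialog('close');
                        $(btn).data('confirmed', true);
                        btn.click();                      // replay; btnSave_Click runs server-side
                    },
                    'No': function () {
                        $(this).dialog('close');
                    }
                }
            });
            return false;
        });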

  • Maven + SSDM Build and Runtime Environment Automation

    - by Randy
    Preface: my company, like most, has several runtime environments and several release versions, which are themselves composed of different versions of various jars. For example, consider release versions 1.1, 1.2, and 1.3 of Software X, each of which may be deployed to a developer machine, testing, or production. Software-x-1.1 is composed of jarA-0.9.1 and jarB-0.7.5, but software-x-1.3 is composed of jarA-1.7.31 and jarB-0.8.1. Currently we use Spring's PropertyPlaceholderConfigurer to configure runtime variables (such as database credentials); however, properties also change with release versions. We also use Maven 2 POM version 4 to specify which versions of our code need to be used. We place the version numbers of our jars as properties within profiles (dev, test, prod) inside the parent POM and then reference those version numbers in all project POMs. As of right now, we have no way to specify which project versions pertain to a given release other than the most current one. Moreover, we deploy our runtime configurations to the SSDM pickup, which then configures and creates the services defined by the built versions of our software.

    Questions:

    1. Is there any procedure/tool we can use to build our product by merely providing the runtime environment and version number, i.e. "build 1.1 dev"?
    2. Is there any way we can store the required jar versions for each release build? We are currently versioning all files, including the parent POM, but merely versioning the parent POM does not record which release version pertains to that parent POM.
    3. What else can we do to further automate the build process? For example, if we could manage runtime configurations within the parent POM, that would be a step in the right direction, but that seems like a violation of scope.

    Any tool outside our framework is inconceivable at this point, but not in the far future.

    Summary: how can we automate our build process as fully as possible without being error-prone?
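
    A sketch of one common arrangement (coordinates invented): give the parent POM itself the release version, pin every jar version as a property in it, and select the environment with a profile. "Build 1.1 dev" then becomes checking out the 1.1 tag and running mvn -Pdev clean install.

        <!-- parent pom.xml, tagged per release -->
        <groupId>com.example</groupId>
        <artifactId>software-x-parent</artifactId>
        <version>1.1</version>
        <packaging>pom</packaging>

        <properties>
            <jarA.version>0.9.1</jarA.version>
            <jarB.version>0.7.5</jarB.version>
        </properties>

        <profiles>
            <profile>
                <id>dev</id>
                <properties><env>dev</env></properties>
            </profile>
            <profile>
                <id>prod</id>
                <properties><env>prod</env></properties>
            </profile>
        </profiles>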

  • PHP request variables: assigning $_GET

    - by chris
    If you take a look at a previous question (http://stackoverflow.com/questions/2690742/mod-rewrite-title-slugs-and-htaccess), I am using the solution that Col. Shrapnel proposed, but when I assign values to $_GET in the actual file, not from a request, the code doesn't work. It defaults away from the file as if the $_GET variables were not set. The code I have come up with is:

        if (!empty($_GET['cat'])) {
            $_GET['target'] = "category";
            if (isset($_GET['page'])) {
                $_GET['pageID'] = $_GET['page'];
            }
            $URL_query = "SELECT category_id FROM cats WHERE slug = '" . $_GET['cat'] . "';";
            $URL_result = mysql_query($URL_query);
            $URL_array = mysql_fetch_array($URL_result);
            $_GET['category_id'] = $URL_array['category_id'];
        } elseif (!empty($_GET['product'])) {
            $_GET['target'] = "product";
            $URL_query = "SELECT product_id FROM products WHERE slug = '" . $_GET['product'] . "';";
            $URL_result = mysql_query($URL_query);
            $URL_array = mysql_fetch_array($URL_result);
            print_r($URL_array);
            $_GET['product_id'] = $URL_array['product_id'];
        }

    The original variable string that I'm trying to represent is:

        /cart.php?Target=product&product_id=16142&category_id=249

    And I'm trying to build the query-string variables in code and include cart.php so I can use cleaner URLs. So I have product/product-title-with-clean-url/ going to slug.php?product=slug. The slug script then searches the DB for a record with the matching slug and returns the product_id as in the code above, then builds the query string and includes cart.php.
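
    One thing to check (an observation from the URLs above, not a confirmed fix): the working query string uses Target with a capital T, while the code assigns $_GET['target']. PHP array keys are case-sensitive, so cart.php would never see the value. A sketch with the keys matching the original URL exactly:

        <?php
        // inside the product branch of slug.php
        $_GET['Target'] = 'product';                    // capital T, matching cart.php's URL
        $_GET['product_id'] = $URL_array['product_id'];
        require 'cart.php';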
