Search Results

Search found 17782 results on 712 pages for 'questions and answers'.


  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this; I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ildasm on mscorlib.dll I received this for Array.CopyTo:

        .method public hidebysig newslot virtual final instance void
            CopyTo(class System.Array 'array', int32 index) cil managed
        {
          // Code size 0 (0x0)
        } // end of method Array::CopyTo

    and this for Buffer.BlockCopy:

        .method public hidebysig static void
            BlockCopy(class System.Array src, int32 srcOffset,
                      class System.Array dst, int32 dstOffset,
                      int32 count) cil managed internalcall
        {
          .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 )
        } // end of method Buffer::BlockCopy

    Which, frankly, baffles me. I've never run ildasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question, in which Marc Gravell said "[Buffer.BlockCopy] is basically a wrapper over a raw mem-copy". While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented, and, if I have future questions about the internals of C#, whether it is possible for me to investigate them. Or am I out of luck?
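
    One way to see why ildasm shows an empty body, without any extra tooling, is to ask reflection for each method's implementation flags; InternalCall means the body lives inside the CLR itself rather than in IL. A minimal sketch (illustrative, not the only approach):

        using System;
        using System.Reflection;

        class Program
        {
            static void Main()
            {
                // Buffer.BlockCopy has a single public overload.
                MethodInfo blockCopy = typeof(Buffer).GetMethod("BlockCopy");
                // Array.CopyTo is overloaded, so pick the (Array, int) one.
                MethodInfo copyTo = typeof(Array).GetMethod(
                    "CopyTo", new[] { typeof(Array), typeof(int) });

                // InternalCall in the output means "implemented inside the
                // runtime"; there is no IL body for a disassembler to show.
                Console.WriteLine(blockCopy.GetMethodImplementationFlags());
                Console.WriteLine(copyTo.GetMethodImplementationFlags());
            }
        }

    For the native implementations behind internalcall methods, an IL disassembler won't help; the Shared Source CLI ("Rotor") sources are the closest publicly available approximation.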

    Read the article

  • Create a dynamic project template in VS 2010?

    - by jonhobbs
    This might sound a bit of an odd question, but I know what I want to achieve, just don't know if it's possible. Firstly, I'd like to be able to create a Visual Studio project that the 2 developers who work with me can use as a basis for all new websites. I want to drop all the common files that we use in there, like jQuery, CMS files etc., so that every time they start a new project they don't have to worry about all of that stuff. I guess to do this I just set up a project and use "File > Export Template"? Now, here's the tricky bit... When you open up one of the default templates in VS it asks you a few questions, such as whether you want to use a master page or code-behind. What I would like to do is set up something similar, so that when you use the project template it asks you what version of jQuery you want to use so that it can import the right file, or, for example, it might ask you if you want to include certain user controls that the CMS contains. If you tick the box then the folder with the necessary user controls would be put in your new project for you. I know MS can do this, but can a user like me include functionality like that in my own project template? Hope that makes sense.
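
    For the parameterised part, Visual Studio lets a template reference a wizard assembly: you implement Microsoft.VisualStudio.TemplateWizard.IWizard, register the class in the .vstemplate via a <WizardExtension> element, and prompt the user in RunStarted. A rough sketch of the idea (the dialog is omitted and the $jqueryversion$ token is invented for illustration):

        using System.Collections.Generic;
        using EnvDTE;
        using Microsoft.VisualStudio.TemplateWizard;

        public class SiteTemplateWizard : IWizard
        {
            public void RunStarted(object automationObject,
                Dictionary<string, string> replacementsDictionary,
                WizardRunKind runKind, object[] customParams)
            {
                // A real implementation would show a dialog here and record
                // the answers as replacement tokens usable in template files.
                replacementsDictionary["$jqueryversion$"] = "1.4.2";
            }

            // Returning false leaves optional items (e.g. CMS user controls)
            // out of the generated project.
            public bool ShouldAddProjectItem(string filePath) { return true; }

            public void ProjectFinishedGenerating(Project project) { }
            public void ProjectItemFinishedGenerating(ProjectItem projectItem) { }
            public void BeforeOpeningFile(ProjectItem projectItem) { }
            public void RunFinished() { }
        }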

    Read the article

  • svcutil, XmlSerializer and xsd:list

    - by Dmitry Ornatsky
    I'm using svcutil to generate classes from service metadata. This XML schema:

        <xsd:complexType name="FindRequest">
          ...
          <xsd:attribute name="Significance" type="Significance" use="optional" />
        </xsd:complexType>
        <xsd:simpleType name="Significance">
          <xsd:list>
            <xsd:simpleType>
              <xsd:restriction base="xsd:int">
                <xsd:enumeration value="1" />
                <xsd:enumeration value="2" />
                <xsd:enumeration value="3" />
              </xsd:restriction>
            </xsd:simpleType>
          </xsd:list>
        </xsd:simpleType>

    produces the following code:

        public partial class FindRequest
        {
            ...
            private int significanceField;
            private bool significanceFieldSpecified;

            [System.Xml.Serialization.XmlAttributeAttribute()]
            public int Significance
            {
                get { return this.significanceField; }
                set { this.significanceField = value; }
            }

            [System.Xml.Serialization.XmlIgnoreAttribute()]
            public bool SignificanceSpecified
            {
                get { return this.significanceFieldSpecified; }
                set { this.significanceFieldSpecified = value; }
            }
        }

    My questions are: Is it possible to make XmlSerializer understand this type of list:

        <FindRequest Significance="1 2 3"/>

    for example by using some kind of flags-style enum:

        public enum EmployeeStatus
        {
            [XmlEnum(Name = "1")] One = 1,
            [XmlEnum(Name = "2")] Two = 2,
            [XmlEnum(Name = "3")] Three = 4
        }

    If the answer is yes, is it possible to make svcutil/xsd.exe generate classes that are serialized that way without changing the schema?
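
    svcutil won't generate it, but one hand-written workaround for the xsd:list attribute (replacing the generated int property rather than adding beside it) is to let XmlSerializer handle the raw space-separated string and expose a typed view next to it. A sketch; the property names are invented:

        using System;
        using System.Xml.Serialization;

        public enum Significance { One = 1, Two = 2, Three = 4 }

        public partial class FindRequest
        {
            // XmlSerializer reads and writes the attribute as plain text...
            [XmlAttribute("Significance")]
            public string SignificanceRaw { get; set; }

            // ...and this non-serialized view parses "1 2 3" into enum values.
            [XmlIgnore]
            public Significance[] SignificanceValues
            {
                get
                {
                    if (string.IsNullOrEmpty(SignificanceRaw))
                        return new Significance[0];
                    string[] parts = SignificanceRaw.Split(
                        new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    Significance[] values = new Significance[parts.Length];
                    for (int i = 0; i < parts.Length; i++)
                        values[i] = (Significance)int.Parse(parts[i]);
                    return values;
                }
            }
        }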

    Read the article

  • WCF Service instead of ASMX Web Service?

    - by wchrisjohnson
    I'm writing a SOAP server that will act as an endpoint for an external client. The external client expects SOAP 1.1. I'll be taking embedded business objects in the SOAP messages and passing them to an internal application, getting responses back, and responding with SOAP messages to the external client. I did the traditional ASMX-based web services several years ago. Now I've been exploring WCF services and wondering the best approach to take.

    1) Should WCF be considered a superset of ASMX web services?
    2) Is there any reason to still write new web services using ASMX instead of WCF?
    3) Does WCF provide better facilities for working with SOAP messages, as opposed to SOAP Extensions?
    4) Can I restrict communication to SOAP 1.1 using WCF, the way I can with a web.config change in ASMX?
    5) Does WCF have an easy way to log or review the requests that hit the service without resorting to something like SOAP extensions?

    Sorry my questions are not very specific; still trying to get a handle on what I need to know... Using VS2008, Windows Server 2008. Chris
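
    On points 4 and 5: in WCF the SOAP version is a property of the binding, and basicHttpBinding speaks SOAP 1.1, so choosing it effectively gives the ASMX-style behaviour; message logging is likewise built in. A hedged config sketch (service and contract names are placeholders):

        <system.serviceModel>
          <services>
            <service name="MyNamespace.MyService">
              <!-- basicHttpBinding uses SOAP 1.1 envelopes. -->
              <endpoint address="" binding="basicHttpBinding"
                        contract="MyNamespace.IMyService" />
            </service>
          </services>
          <diagnostics>
            <!-- Captures requests without writing any SOAP-extension code. -->
            <messageLogging logEntireMessage="true"
                            logMessagesAtServiceLevel="true"
                            logMalformedMessages="true" />
          </diagnostics>
        </system.serviceModel>

    (For the log to actually be written anywhere, a System.ServiceModel.MessageLogging trace source and listener also have to be configured under <system.diagnostics>.)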

    Read the article

  • python NameError: name '<anything>' is not defined (but it is!)

    - by BenjaminGolder
    Note: Solved. It turned out that I was importing a previous version of the same module. It is easy to find similar topics on Stack Overflow where someone ran into a NameError, but most of those questions deal with specific modules and the solution is often to update the module. In my case, I am trying to import a function from a module that I wrote myself. The module is named InfraPy, and it is definitely on sys.path. One particular function (called listToText) in InfraPy raises a NameError, but only when I try to import it into another script. Inside InfraPy, under if __name__=='__main__':, the listToText function works just fine. From InfraPy I can import other functions with no problems. Including from InfraPy import * in my script does not raise any errors until I try to use the listToText function. How can this occur? How can importing one particular function raise a NameError, while importing all the other functions in the same module works fine? Using python 2.6 on MacOSX 10.6; I also encountered the same error running the script on Windows 7, using IronPython 2.6 for .NET 4.0. Thanks. If there are other details you think would be helpful in solving this, I'd be happy to provide them. As requested, here is the function definition inside of InfraPy:

        def listToText(inputList, folder=None, outputName='list.txt'):
            '''
            Creates a text file from a list (with each list item on a separate
            line). May be placed in any given folder, but will otherwise be
            created in the working directory of the python interpreter.
            '''
            fname = outputName
            if folder != None:
                fname = folder+'/'+fname
            f = open(fname, 'w')
            for file in inputList:
                f.write(file+'\n')
            f.close()

    This function is defined above and outside of if __name__=='__main__':. I've tried moving InfraPy around in relation to the script. The most baffling situation is that when InfraPy is in the same folder as the script, and I import using from InfraPy import listToText, I receive this error: NameError: name 'listToText' is not defined. Again, the other functions import fine; they are all defined outside of if __name__=='__main__': in InfraPy.
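
    For anyone hitting the same wall, the quickest check for the stale-copy problem that turned out to be the culprit here is to ask the imported module where it came from (python 2.x syntax to match the question):

        import InfraPy

        print InfraPy.__file__                # which file actually got imported?
        print 'listToText' in dir(InfraPy)    # is the function there at all?

    If __file__ points at an old copy (or a leftover InfraPy.pyc) earlier on sys.path, the NameError is coming from that copy, not from the file being edited.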

    Read the article

  • Problem with EDM in ASP.NET MVC

    - by Mannsi
    Hello, I have a question that is pretty similar to this question: http://stackoverflow.com/questions/899734/strongly-typed-asp-net-mvc-with-entity-framework but the solutions for that question don't work for me. Let me start by saying that I don't know a great deal about the subject I am asking about. I have the following code:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(PaymentInformation paymentInformationToEdit, int laborUnionID)
        {
            var originalPaymentInformation =
                (from PIs in _db.PaymentInformation
                 where PIs.employeeNumber == paymentInformationToEdit.employeeNumber
                 select PIs).First();

            var laborUnion =
                (from LUs in _db.LaborUnion
                 where LUs.laborUnionID == laborUnionID
                 select LUs).First();

            paymentInformationToEdit.laborUnion = laborUnion;

            _db.ApplyPropertyChanges(originalPaymentInformation.EntityKey.EntitySetName,
                                     paymentInformationToEdit);
            _db.SaveChanges();
        }

    I get an error on the ApplyPropertyChanges call saying 'The existing object in the ObjectContext is in the Added state. Changes can only be applied when the existing object is in an unchanged or modified state'. I don't know how to change the state to either, or even whether I am doing something fundamentally wrong. Please advise. EDIT: I hope this is the way to go here on stackoverflow. I haven't gotten an answer that solved my problem, but Gregoire below posted a possible solution that I didn't understand. I hope this edit bumps my question so somebody will see it and help me. Sorry if this is not the way to go.
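
    For reference, the usual suggestion for this error pattern (untested against this exact model, so treat it as a sketch) is to skip ApplyPropertyChanges and instead copy the posted values onto the entity that is already being tracked, so a second instance never enters the context:

        // Load the tracked original, then overwrite its values with the
        // posted ones; SaveChanges picks the modifications up automatically.
        var original = (from PIs in _db.PaymentInformation
                        where PIs.employeeNumber == paymentInformationToEdit.employeeNumber
                        select PIs).First();

        original.laborUnion = laborUnion;
        // ...copy the remaining edited scalar properties from
        // paymentInformationToEdit onto original here...
        _db.SaveChanges();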

    Read the article

  • Autorotation and multiple view controllers

    - by alku83
    I'm creating an iPad application which should only work in portrait and portrait-upside-down modes. For performance reasons, in my applicationDidFinishLaunching method I'm creating several view controllers and adding their views to my main window as subviews. I then hide the ones I don't want to see straight away. There is no tab bar or navigation controller. My problem is that only the first view controller seems to receive the rotation calls. I have verified this by swapping the order in which I add the subviews to the main window, and with NSLogs. Is there some way I can force all the controllers to receive the calls? Some of my views are designed to lie over the top of another view, but this behind view will not always be the same one - so it seems to make sense to have the overlay view in a separate view controller. Am I doing something fundamentally wrong, and that's why it's not behaving as I would expect? EDIT: The accepted answer for this question seems to describe the exact problem I'm facing: http://stackoverflow.com/questions/548142/uiviewcontroller-rotate-methods

    Read the article

  • How do I send an XML document to an ASP.NET MVC page for manipulation

    - by Decker
    I have some hierarchical data stored as multiple XML files on the server, according to a vendor's schema. In my ASP.NET MVC (2!) application, I'd like the user to choose one of these hierarchies (i.e. files -- I provide a list in my controller's Index action). When the user selects one to "edit", my Edit action should return a page that presents the XML hierarchy (it's a representation of a folder tree). So my thought is that the view would return HTML containing a jQuery on-load Ajax call back to the server for the XML data -- at which point I would present the tree using one of the many jQuery tree controls. On the client side I'd like the user to manipulate the tree, and when done, I'd like to post back the new hierarchy, where I would replace the original XML file that represents that hierarchy. So my questions are:

    What form should I use to send the data down? XML or JSON? If I send down XML then I would have to not only read the XML -- which jQuery can do -- but also be able to modify that XML and send it back. Can I use jQuery to modify this XML DOM? And will all the namespace declarations be preserved?

    What form should I send the data back? If I originally sent the client the hierarchy as JSON (using JsonResult), then presumably I would have a hierarchy of javascript objects. What options would I have to post that back? Would I have to recreate the XML representation on the client and post that back? Or should I serialize back to JSON, post that to the server, and then have the server do the work of recreating the XML according to the schema? Thanks for any advice.
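
    If the JSON route wins, the download half of the round trip is small in MVC 2; a sketch (the App_Data path is an assumption, and the POST half would parse the posted JSON and rebuild the XML the same way in reverse, keeping the schema and its namespaces a purely server-side concern):

        using System.Linq;
        using System.Web.Mvc;
        using System.Xml.Linq;

        public class HierarchyController : Controller
        {
            // GET: serve the chosen hierarchy as JSON for a jQuery tree control.
            public ActionResult Tree(string name)
            {
                string path = Server.MapPath("~/App_Data/" + name + ".xml");
                XDocument doc = XDocument.Load(path);
                return Json(ToNode(doc.Root), JsonRequestBehavior.AllowGet);
            }

            // Recursively project an element into an object JsonResult can emit.
            private static object ToNode(XElement e)
            {
                return new
                {
                    name = e.Name.LocalName,
                    children = e.Elements().Select(ToNode).ToArray()
                };
            }
        }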

    Read the article

  • How to call a stored proc from the ASP.NET MVC stack via the ORM & return the results as JSON?

    - by melaos
    Hi guys, I'm a total newbie with ASP.NET MVC and here's my jam: I have a 3-level list box setup where a selection in box A shows the options in box B, and a selection in box B shows the options in box C. I'm trying to do the whole thing in ASP.NET MVC, and what I see is that the NerdDinner tutorial uses the ORM method. So I created a dbml mapped to the database and dragged the stored procs inside. I create a DataContext object, but I don't quite know how to take the result from the stored proc, which should be multiple rows of data, and turn it into JSON, so I can keep all the JSON data inside the HTML page and, using jQuery, make the selection process faster. I don't expect the data inside the three boxes to change very often, so I think this method should be quite viable. Questions: So how do I get the stored proc part to return the data as JSON? I've noticed in some tutorials online that the JSON-returning part is in the controller and not in the model. Why is that?
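
    For the first question, a hedged sketch of the usual shape: the dragged-in stored proc becomes a method on the generated DataContext, and a controller action shapes the rows and returns a JsonResult (MyDataContext and GetChildOptions stand in for whatever the dbml actually generated):

        // Controller action: call the proc wrapper, shape the rows, emit JSON.
        public JsonResult Options(int parentId)
        {
            using (var db = new MyDataContext())
            {
                var rows = db.GetChildOptions(parentId)         // proc wrapper
                             .Select(r => new { r.Id, r.Name }) // just what the page needs
                             .ToList();
                return Json(rows, JsonRequestBehavior.AllowGet);
            }
        }

    As for why tutorials put the JSON step in the controller: JSON is a wire format for the response, not part of the model, so the model hands back objects and the controller (via JsonResult) decides how they are rendered.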

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

    1. Boot the workstation from the NIC.
    2. Workstation sends a DHCP request.
    3. DHCP server responds with an IP address and the location of the PXE server.
    4. Workstation downloads WinPE image file from PXE server via TFTP.
    5. Workstation stores WinPE image file in memory and executes it.

    Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages.

    This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP ProLiant DL360 G6, DL380 G5 and DL380 G6. They're running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP ProLiant servers using the SmartStart CD provided. SmartStart does three useful things for us:

    1. Sets up RAID with HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP ProLiant Integrated Management Log Viewer, etc).

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following:

    1. As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access Option ROM Configuration for Arrays (ORCA).
    2. Installation of SNMP and the HP tools will have to be done once the Windows installation is complete, using the ProLiant Support Pack.

    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12: I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I'm personally not too bothered about configuring BIOS settings as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE, as I'm happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. E.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), then use the tool on other servers, passing it the input file with the captured configuration. Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. 2x physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I've had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

    Read the article

  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend and they'd like to be able to use Excel as the front end. (The UI will basically be userforms in Excel.) They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database as I don't think it is fit for that purpose, and am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.] To summarise, I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions:

    1. How easy is it to link to Access from Excel using ADO/DAO? Is it quite limited in terms of functionality, or can I get creative?
    2. Do I pay a performance penalty (vs. using forms in Access as the UI)?
    3. Assuming that the database will always be updated using ADO/DAO commands from within Excel VBA, does that mean I can have multiple Excel users using that one single Access database and not run into any concurrency issues etc.?
    4. Any other things I should be aware of?

    I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I've never really done the Excel/Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex
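
    On the "how easy" part, the plumbing is a few lines of VBA from Excel; a minimal late-bound ADO sketch (the path and table name are invented):

        ' Query an Access database from Excel via ADO (late binding, so no
        ' library reference needed) and dump the results onto a sheet.
        Sub QueryAccess()
            Dim cn As Object, rs As Object
            Set cn = CreateObject("ADODB.Connection")
            cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                    "Data Source=C:\data\app.mdb"          ' hypothetical path
            Set rs = cn.Execute("SELECT * FROM Customers") ' hypothetical table
            Sheet1.Range("A2").CopyFromRecordset rs
            rs.Close
            cn.Close
        End Sub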

    Read the article

  • x86_64 printf segfault after brk call

    - by gmb11
    While I was trying to use brk (int 0x80 with 45 in %rax) to implement a simple memory manager program in assembly and print the blocks in order, I kept getting segfaults. After a while I could only reproduce the error, but have no idea why it is happening:

        .section .data
        helloworld:
            .ascii "hello world"

        .section .text
        .globl _start
        _start:
            push %rbp
            mov %rsp, %rbp

            movq $45, %rax
            movq $0, %rbx      # brk(0) should just return the current break of the program
            int $0x80

            #incq %rax         # segfault
            #addq $1, %rax     # segfault
            movq $0, %rax      # works fine?
            #addq $1, %rax     # segfault again?

            movq $helloworld, %rdi
            call printf

            movq $1, %rax      # exit
            int $0x80

    In the example here, if the commented lines are uncommented, I get a segfault, but some instructions (like the movq $0, %rax) work just fine. In my other program, the first couple of printf calls work, but the third crashes... Looking at other questions, I read that printf sometimes allocates memory, and that brk shouldn't be used because in this case it corrupts the heap or something... I'm very confused; does anyone know something about this? EDIT: I've just found out that for printf to work you need %rax=0.
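
    That last observation matches the x86-64 calling convention: for a varargs call like printf, %al must hold the number of vector registers used for arguments, so leftover garbage in %rax (here, the return value of the brk syscall) can send printf down a code path that touches SSE state that was never set up. A sketch of the conventional call:

        # Zero %rax before a varargs call: %al = count of vector-register args.
        movq  $helloworld, %rdi    # 1st integer argument: the format string
        xorl  %eax, %eax           # no floating-point/vector arguments
        call  printf

    (The string also needs a terminating NUL for printf; .asciz "hello world" instead of .ascii would provide one.)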

    Read the article

  • autocommit and @Transactional and Cascading with spring, jpa and hibernate

    - by subes
    Hi, what I would like to accomplish is the following:

    - have autocommit enabled, so by default all queries get committed
    - if there is a @Transactional annotation on a method, it encloses all queries in a single transaction, thus overriding the autocommit
    - if a @Transactional method calls other @Transactional annotated methods, the outermost annotation should override the inner annotations and create a larger transaction, so annotations also override each other

    I am currently still learning about spring-orm, couldn't find documentation about this, and don't have a test project for it yet. So my questions are: What is the default behaviour of transactions in spring? If the default differs from my requirement, is there a way to configure my desired behaviour? Or is there a totally different best practice for transactions?

    --EDIT-- I have the following test setup:

        @javax.persistence.Entity
        public class Entity {
            @Id
            @GeneratedValue
            private Integer id;

            private String name;

            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        @Repository
        public class Dao {
            @PersistenceContext
            private EntityManager em;

            public void insert(Entity ent) {
                em.persist(ent);
            }

            @SuppressWarnings("unchecked")
            public List<Entity> selectAll() {
                List<Entity> ents = em.createQuery(
                    "select e from " + Entity.class.getName() + " e").getResultList();
                return ents;
            }
        }

    If I have it like this, even with autocommit enabled in hibernate, the insert method does nothing. I have to add @Transactional to the insert method, or to the method calling insert, for it to work... Is there a way to make @Transactional completely optional?
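
    Not an authoritative answer, but the behaviour described (insert silently doing nothing) is consistent with the defaults: JPA writes need a transaction, and @Transactional's default propagation REQUIRED already gives the outermost-method-wins behaviour asked for, since nested @Transactional calls join the open transaction instead of starting their own. A sketch of the usual arrangement, with the boundary on a service method:

        import java.util.List;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class EntityService {

            @Autowired
            private Dao dao;

            // REQUIRED (the default): a transaction opens here; any
            // @Transactional methods called from this one join it.
            @Transactional
            public void insertAll(List<Entity> ents) {
                for (Entity e : ents) {
                    dao.insert(e);   // runs inside the same transaction
                }
            }
        }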

    Read the article

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists, else it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article) then you have to deal with multiple contexts and the complexity around detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests; this maintains the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model Session variable or, more importantly, how large the Session variable is. So two questions I suppose: firstly, should I look for a better solution to handle the shared-context-across-posts issue (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!

    Read the article

  • How do I process a nested list?

    - by ddbeck
    Suppose I have a bulleted list like this:

        * list item 1
        * list item 2 (a parent)
        ** list item 3 (a child of list item 2)
        ** list item 4 (a child of list item 2 as well)
        *** list item 5 (a child of list item 4 and a grand-child of list item 2)
        * list item 6

    I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing each item and a list of its children (and so forth):

        [('list item 1',),
         ('list item 2', [('list item 3',),
                          ('list item 4', [('list item 5',)])]),
         ('list item 6',)]

    I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:

    1. What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.
    2. I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this, so that I can learn more about solving these kinds of problems in general?
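
    For what it's worth, this particular shape doesn't need Pyparsing or explicit recursion; a stack of "last node seen at each depth" does it in one pass. A sketch (it returns uniform (item, children) pairs, so a leaf is just a tuple with an empty child list rather than a one-tuple):

        import re

        def parse_outline(lines):
            """Turn '*'-prefixed outline lines into (item, children) tuples."""
            root = ('ROOT', [])
            stack = [(0, root)]                  # (depth, node) pairs
            for line in lines:
                m = re.match(r'(\*+)\s*(.*)', line)
                depth, text = len(m.group(1)), m.group(2)
                node = (text, [])
                while stack[-1][0] >= depth:     # climb back up to the parent
                    stack.pop()
                stack[-1][1][1].append(node)     # append to parent's children
                stack.append((depth, node))
            return root[1]

    As for terminology: the general tool is "recursive descent parsing", and this specific trick is stack-based parsing of indentation (the same idea handles Python's own indentation-delimited blocks).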

    Read the article

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "Use jQuery" seems to be the answer when somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time and can help you out a lot, especially when you are doing a lot of fancy JavaScript on your site. However, in instances like this, and in many other instances, it seems like it's just jumping around the problem instead of answering the question. I also feel like this builds too much dependency on libraries. I've seen way too many developers who simply rely too much on libraries, and if they encounter a situation where they don't have the library, they are completely unable to function. I feel like there are already enough developers who don't know JavaScript, without just telling everybody not to learn JavaScript and to use jQuery. So, just to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript, when most of the functionality of jQuery isn't being used? Should developers be fluent in bare JavaScript so they don't get too dependent on libraries? [Additional related conversation topic] Does the existence of jQuery give too much slack to the browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies between implementations, what pressure is there on browser makers to ensure that their JavaScript engine works as it should? I feel like this extrapolates the same problem discussed in SO Podcast #36, of "be conservative in what you send, liberal in what you accept": by being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use and extending the problem.

    Read the article

  • SQL Server 2008: CASE vs IF-ELSE-IF vs GOTO

    - by Saharsh Shah
    I have some rules in my application and I have written the business logic of those rules in my procedure. At the time of creating the procedure I came to know that a CASE statement won't work in my scenario. So I have tried two ways to perform the same operations (using IF-ELSE-IF or GOTO), shown below.

    Method 1, using IF-ELSE-IF conditions:

        DECLARE @V_RuleId SMALLINT;

        IF (@V_RuleId = 1)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 2)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 3)
        BEGIN
            /* My business logic */
        END
        /* ... */
        ELSE IF (@V_RuleId = 19)
        BEGIN
            /* My business logic */
        END
        ELSE IF (@V_RuleId = 20)
        BEGIN
            /* My business logic */
        END

    Method 2, using a GOTO statement:

        DECLARE @V_RuleId SMALLINT, @V_Temp VARCHAR(100);
        SET @V_Temp = 'GOTO RULE' + CONVERT(VARCHAR, @V_RuleId);
        EXECUTE sp_executesql @V_Temp;

        RULE1:
        BEGIN
            /* My business logic */
        END
        RULE2:
        BEGIN
            /* My business logic */
        END
        RULE3:
        BEGIN
            /* My business logic */
        END
        /* ... */
        RULE19:
        BEGIN
            /* My business logic */
        END
        RULE20:
        BEGIN
            /* My business logic */
        END

    Today I have 20 rules; the number could increase in future. If I were able to use a CASE statement I wouldn't have any performance concerns, but I can't, so I am worried about the performance of my procedure. Also note that this procedure will be executed very frequently by the application. My questions are: Is there any way to use a CASE statement in my procedure? If not, which method is best to use in my procedure to improve the performance of my code? Thanks in advance...
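
    One pattern worth knowing here (a sketch, not a drop-in: it assumes the logic can be split into one procedure per rule) is that T-SQL's EXECUTE accepts a procedure name held in a variable, which turns the 20-branch ladder into a single dispatch:

        -- Dispatch to dbo.Rule1 ... dbo.Rule20 by name; unlike the GOTO
        -- attempt, this is supported, and each procedure gets its own
        -- cached plan.
        DECLARE @ProcName SYSNAME;
        SET @ProcName = N'dbo.Rule' + CONVERT(NVARCHAR(10), @V_RuleId);
        EXECUTE @ProcName;

    The GOTO-through-sp_executesql version can't work as written, by the way: the dynamic batch is a separate scope, so a GOTO inside it cannot jump to labels in the calling procedure.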

    Read the article

  • How to avoid loading a LINQ to SQL object twice when editing it on a website.

    - by emzero
    Hi guys, I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and couldn't find a good alternative to. In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here's a simplified example from ScottGu's NerdDinner site:

        //
        // GET: /Dinners/Edit/5

        [Authorize]
        public ActionResult Edit(int id)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            return View(new DinnerFormViewModel(dinner));
        }

        //
        // POST: /Dinners/Edit/5

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            UpdateModel(dinner);
            dinnerRepository.Save();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects or something like that, I don't like getting it twice when it should be retrieved only once, modified and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice (in the example above it won't harm, but there could be cases where getting an object or set of objects is heavy stuff)? If so, what alternative do you think is best for avoiding double-loading the object? Thank you so much, greetings!
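
    For completeness, LINQ to SQL does offer a no-reload path, with a caveat: a sketch of an Attach-based edit, which works when the table has a ROWVERSION/timestamp column (otherwise LINQ to SQL has no original values to build the UPDATE from). Here db stands for the underlying DataContext rather than the repository:

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(Dinner dinner)
        {
            // Attach the posted instance as modified; no initial SELECT.
            db.Dinners.Attach(dinner, true);   // true = treat as modified
            db.SubmitChanges();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    The two GetDinner calls in the pattern above really are two database hits, so the trade-off is real; the initial load is usually kept because it also preserves server-side values the form never displayed.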

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

    Read the article

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    Hi, I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated Windows authentication which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (i.e. an end user can't read the credit card table). There are advantages in that even if someone compromises the webserver, they won't be able to glean any additional information from the database. Also, the webserver isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions). So now I'm looking at the new web app, which will be on the public internet. One suggestion is to have an Active Directory server and create Windows accounts in AD for each user of the site. These users will then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP.NET already provides the AD membership provider and role provider, so this should be fairly simple to implement. There are a number of questions around this - scalability, reliability, etc... - and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons why to do it / not to do it. Any input appreciated. Regards, Basiclife
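
    For reference, the wiring the built-in provider expects is small; a hedged web.config sketch (the server name and container are invented, and the provider type would normally be assembly-qualified):

        <connectionStrings>
          <add name="AdService"
               connectionString="LDAP://ad.example.com/CN=Users,DC=example,DC=com" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="AdMembership">
            <providers>
              <add name="AdMembership"
                   type="System.Web.Security.ActiveDirectoryMembershipProvider"
                   connectionStringName="AdService"
                   attributeMapUsername="sAMAccountName" />
            </providers>
          </membership>
        </system.web>

    The role side is the usual catch: there is no Active Directory role provider in the box for this scenario, so the group-to-permission mapping tends to be custom code or WindowsTokenRoleProvider, which is worth prototyping before committing to the design.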

    Read the article

  • How to conduct a good programming interview?

    - by luckyluke
    I do interviews from time to time to recruit good people, and I really think I am NOT doing the job correctly. I work in a company where we have to do a lot of DB programming, .NET programming and Java programming, so we need people who are open-minded and not focused on one particular technology. After all, a language is a notation; you have to understand what is going on under the hood. I ask people about their projects, ask them some coding questions (believe me, a SQL question involving a CROSS JOIN is hard), let them write some code, ask them about OO design, ask them how they update their knowledge and stay up to date, and whether they have FUN when they code (at least sometimes). Hell, I even give them a coding exercise to solve at home (3 hours max) to see how they think and code. And yet my hit rate when hiring junior members (those who survive the initial 3 months) is only about 33%. So my question is: how do YOU conduct good interviews, because I think my hit rate is too low? Do you have any best practices (the rate should be at least 60-70%)? P.S. And I have noticed that the best programmers are lazy but motivated; just being lazy is not enough :) But the people who write the best code are attentive to details :)

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole thing, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NOCOUNT ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VARCHAR(MAX);
        DECLARE @decryptedCardData VARCHAR(MAX);
        DECLARE @tmpTable AS TABLE
        (
            CardId uniqueidentifier,
            DecryptedCard NVARCHAR(MAX)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            SELECT cardId FROM CreditCards
            WHERE companyId = @companyId AND Active = 1
            ORDER BY addedBy DESC

        OPEN creditCards
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                WHERE cardid = @cardId
                ORDER BY valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                PRINT 'CreditCardData'
                PRINT @tmpdecryptedCardData
                SET @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                PRINT '@decryptedCardData'
                PRINT @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            INSERT INTO @tmpTable (CardId, DecryptedCard)
            VALUES (@cardId, @decryptedCardData)

            SET @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        SELECT CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
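
    For what it's worth, the cursor-in-a-cursor shape is what pins this to one scheduler: SQL Server parallelises set-based plans, not procedural loops. A hedged sketch of a set-based rewrite using the FOR XML PATH('') concatenation trick (same tables and keys, but untested against the real schema):

        -- One statement: per-card concatenation of decrypted fragments in
        -- valueOrder, which the optimizer is free to parallelise.
        SELECT cc.cardId,
               (SELECT CONVERT(NVARCHAR(MAX),
                        DecryptByCert(Cert_Id('Oh-Nay-Nay'),
                                      ccd.EncryptedCard, @DecryptionKey))
                FROM CreditCardData ccd
                WHERE ccd.cardid = cc.cardId
                ORDER BY ccd.valueOrder
                FOR XML PATH('')) AS DecryptedCard
        FROM CreditCards cc
        WHERE cc.companyId = @companyId
          AND cc.Active = 1
        ORDER BY cc.addedBy DESC;

    (One caveat to verify: FOR XML PATH('') entitises characters like < and &, which matters if the decrypted fragments can contain them.)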

    Read the article

  • How to emulate "-lib foo.jar" from _within_ build.xml

    - by Thorbjørn Ravn Andersen
    By specifying "-lib foo.jar" to ant I get the behaviour that the classes in foo.jar is added to the ant classloader and are available for various tasks taking a class name argument. I'd like to be able to specify the same behaviour but only from inside build.xml (so we can do this on a vanilla ant). For taskdefs we have functioning code looking like: <taskdef resource="net/sf/antcontrib/antlib.xml" description="for/foreach tasks"> <classpath> <pathelement location="${active.workspace}/ant-contrib-1.X/lib/ant-contrib.jar" /> </classpath> </taskdef> where the definition is completely provided from the ant-contrib.jar listed. What is the equivalent mechanism for the "global" ant classpath? (I have thought out that this is the way to get <javac> use ecj-3.5.jar to compile with on a JRE - http://stackoverflow.com/questions/2364006/specifying-the-eclipse-compiler-completely-from-within-build-xml - in a way compatible with ant 1.7. Better suggestions are welcome :) EDIT: It appears that the about-to-be-released version 1.0 of ant4eclipse includes ecj. This does not answer the question, but may solve my basic problem.

    Read the article

  • Internet Radio Station for University

    - by ryan
    I am trying to help my university's student radio station rethink the way they stream music, but I have some questions regarding the use of Ubuntu to stream. Currently, the radio station uses two Windows machines: one is used to stream the radio station and serve the website, and the other is used by rotating DJs to select songs and create playlists. The computer used by DJs feeds mono into the sound card of the server, and the server streams the feed online.

    - Ideally I would like to maintain a two-computer setup: one computer as server, and another that is used to select and play music by rotating DJs.
    - I would like to use Ubuntu for the server.
    - I would like to use Windows for the other machine.
    - The server should be able to stream song information.

    First, is there a way to somehow get the song information from an analog feed? Second, what is the best streaming server for radio? I have encountered SHOUTcast, Icecast, and Darwin, but I don't know where to begin in attempting to gauge them. Finally, any tips or pointers about small internet radio station management/setup would be appreciated, as this is my first radio station and I am eager to hear of past experiences.
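
    On the streaming-server question, Icecast is a common fit for this kind of setup because the DJ machine's source client can push song metadata along with the audio; a fragment of the stock icecast.xml showing the mount point a source client would connect to (the password is a placeholder):

        <!-- One mount point; an encoder/source client on the dj machine
             connects here and can push song titles as metadata. -->
        <mount>
          <mount-name>/stream</mount-name>
          <password>hackme</password>
        </mount>

    Note the analog-feed caveat stands: song information has to come from the source client (or be pushed via Icecast's admin interface); it can't be recovered from the audio signal itself.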

    Read the article

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does a dual-core processor handle opcode memory collisions? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision? I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

    Read the article
