Search Results

  • Good Replacement for User Control?

    - by David Lively
    I found user controls to be incredibly useful when working with ASP.NET WebForms. By encapsulating the code required for displaying a control together with its markup, creating reusable components was very straightforward and very, very useful. While MVC provides convenient separation of concerns, it seems to break encapsulation (i.e., you can add a control without adding or using its supporting code, leading to runtime errors). Having to modify a controller every time I add a control to a view seems to me to integrate concerns, not separate them. I'd rather break the purist MVC ideology than give up the benefits of reusable, packaged controls.

    I need to be able to include components similar to WebForms user controls throughout a site, but not for the entire site, and not at a level that belongs in a master page. These components should have their own code, not just markup (to interact with the business layer), and it would be great if the page controller didn't need to know about the control. Since MVC user controls don't have code-behind, I can't see a good way to do this. I've searched previous SO questions and have yet to find a good answer.

    Options so far (in an attempt to avoid turning the comments section into a discussion):

    - RenderAction: lets the view call another controller, which is responsible for interacting with the BLL and for whatever data its corresponding view needs. The calling view needs to be aware of the sub-controller. This seems to provide a nice way to encapsulate partial views and controls without having to modify the calling controller.
    - RenderPartial: the calling controller is still responsible for executing whatever code is associated with the partial view, and for making sure the model passed to it contains the data it expects. Effectively, modifying the partial view potentially means modifying the calling controller - annoying, especially if it is used in multiple places.
    - Portable Areas: place each control in its own project/area?
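
    For illustration, a minimal sketch (not from the original post) of what the RenderAction option can look like in ASP.NET MVC 2; the widget controller, NewsService and view names are hypothetical stand-ins:

        // A self-contained widget: the page controller never needs to know it exists.
        public class NewsWidgetController : Controller
        {
            [ChildActionOnly] // reachable only via Html.Action/Html.RenderAction, not as a full page
            public ActionResult Latest(int count)
            {
                // The child action talks to the business layer itself.
                var items = new NewsService().GetLatest(count); // NewsService = stand-in for the BLL
                return PartialView(items);                      // renders the widget's own partial view
            }
        }

        <%-- In any view, without touching that view's controller: --%>
        <% Html.RenderAction("Latest", "NewsWidget", new { count = 3 }); %>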

  • How can I timeout Client-scoped variables in Coldfusion?

    - by Joshua Carmody
    I apologize if this is a "duh" question. It seems like the answer should be easily googleable, but I haven't found it yet. I am working on a large Coldfusion application that stores a large amount of session/user data in the Client scope (ie <cfset Client.UserName = "JoshuaC"> ). I did not write this application, and I don't have the luxury of significantly refactoring it. I've been given the task of setting the Client variables to time out after 72 hours. I'm not entirely sure how to do this. If I had written the application, I would have stored the variables in the Session scope, and then changed the sessiontimeout attribute of the CFAPPLICATION tag. As it is though, I'm not sure if that timeout affects the Client variables, or what their level of persistence is. The way the application works now, the Client variables never time out, and only clearing the user's cookies, or visiting a logout page which sets all the Client-scoped application variables to "", will clear the values. Of course, I could create some kind of timestamp variable like Client.LastAccessDateTime, and put something in the Application.cfm to clear the client variables if that datetime is more than 72 hours prior to Now(). But there's got to be a better way, right?

  • Creating a HTTP handler for IIS that transparently forwards request to different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed: IIS7 on port 80, Subversion over Apache on port 81, and TeamCity over Apache on port 82. Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible.

    However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering: would it be possible for me to create an HTTP handler, and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester? In other words, I would like IIS to handle transparent forwarding to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and I need to relocate my working copy. That's not what I want.

    If anyone thinks this can be done, does anyone have any links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create an HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for all sub-directories, since paths are created from time to time.
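
    For what it's worth, the skeleton of such a handler is fairly small; a hedged sketch for simple GET-style requests (headers, authentication and error handling are omitted, and Subversion's WebDAV verbs would need more work):

        using System.Net;
        using System.Web;

        // Hypothetical pass-through handler; not production-ready.
        public class ProxyHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Rebuild the incoming URL against the internal port.
                string target = "http://localhost:81" + context.Request.Url.PathAndQuery;

                var upstream = (HttpWebRequest)WebRequest.Create(target);
                upstream.Method = context.Request.HttpMethod;

                using (var response = (HttpWebResponse)upstream.GetResponse())
                using (var body = response.GetResponseStream())
                {
                    context.Response.StatusCode = (int)response.StatusCode;
                    context.Response.ContentType = response.ContentType;

                    // Copy the upstream body back to the original caller.
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                        context.Response.OutputStream.Write(buffer, 0, read);
                }
            }
        }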

  • Is there a better way to write this Frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one-to-many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users by the number of desired skills they possess, so a user who has only 1 of 3 desired skills appears further down the list than a user who has 3 of 3. I start with my comma-separated list of skill IDs that are being searched for:

        List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList();

    I then filter out only the types of users that are searchable:

        List<User> users = (from u in db.Users
                            where u.Verified == true
                                  && u.Level > 0
                                  && u.Type == 1
                                  && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null)
                            select u).ToList();

    and then comes the crazy part:

        var fUsers = from u in users
                     select new
                     {
                         u.Id,
                         u.FirstName,
                         u.LastName,
                         u.UserName,
                         UserPhone = u.UserDetail.Phone,
                         UserSkills = (from uskills in u.UserSkills
                                       join skillsJoin in configSkills
                                           on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills
                                       from skillsJoin in tempSkills.DefaultIfEmpty()
                                       where uskills.UserId == u.Id
                                       select new
                                       {
                                           SkillId = uskills.SkillId,
                                           SkillName = skillsJoin.Name,
                                           SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId)
                                       }),
                         UserSkillsFound = (from uskills in u.UserSkills
                                            where uskills.UserId == u.Id
                                                  && searchedSkillsRaw.Contains(uskills.SkillId)
                                            select uskills.UserId).Count()
                     } into userResults
                     where userResults.UserSkillsFound > 0
                     orderby userResults.UserSkillsFound descending
                     select userResults;

    and this works! But it seems super bloated and inefficient to me, especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r
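
    One hedged idea (untested, and it drops the configSkills join that supplies the skill names): since users is already in memory, the matched skills can be materialised once per user and that same list reused for both the filter and the ordering, which removes the second sub-query:

        var fUsers = users
            .Select(u => new
            {
                u.Id,
                u.FirstName,
                u.LastName,
                u.UserName,
                UserPhone = u.UserDetail.Phone,
                // compute the intersection once...
                MatchedSkillIds = u.UserSkills
                                   .Select(us => us.SkillId)
                                   .Where(id => searchedSkillsRaw.Contains(id))
                                   .ToList()
            })
            .Where(x => x.MatchedSkillIds.Count > 0)
            .OrderByDescending(x => x.MatchedSkillIds.Count)   // ...and reuse its count for the ordering
            .ToList();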

  • An algo for generating code callgraphs

    - by Shrey
    I am working on a project that requires generating some metrics for code (it can be C/C++/Java/Python). One of the metrics is a call graph built after parsing the submitted code (the programs are expected to be small - probably under 1,000 lines). I am looking for a way to create a program (in C or Python) that takes a source file (C/C++/Python/Java) as input and produces a textual output containing the approximate calling sequence as well as the tokens in the file. I have looked at other tools that do the same thing - splint, pylint, codeviz, etc. So I have two ways of solving my problem:

    1. Read and understand the algorithms these tools use (tokenization, graph generation, etc.), or
    2. Start from a basic algorithm (very high-level steps) and build each piece the way I want it.

    I know reinventing the wheel is not a good idea, but I would still like to give option (2) a shot. The only issue is that I am currently drawing a blank. My question: does anyone have any know-how on creating call graphs? Any hints as to what I should do? Any top-level steps I can follow? Thanks a lot.

  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways: By using a LinqToSql data layer and passing IQueryable<T>s through by repositories and to my app layer. Then after my app layer passing IList<T>s but where certain elements in the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes even the delegate contents rely on IQueryable<T> sources and the DataContext are injected. This works for me so far. What is blindingly difficult is proving that this design actually works. Ie. If i defeat the 'lazy' part somewhere and my execution happens early then the whole thing is a waste of time. I'd like to be able to TDD this somehow. I don't know a lot about delegates or thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both methods of deferring (IQueryable<T>'s SQL and the delegates) the loading so that i can have tests that prove that both functions are working at different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, i'd like to see tests fail when i break the design at a given level (separate from the live implementation). Is this possible?

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / Interested contact [email protected] ! As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products utilising exciting cutting edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP) as well as developing new stand-alone web products that conform to W3C standards. Key Responsibilities and Objectives: Develop ASP.NET MVC websites utilising Frameworks and enterprise search technology. Develop and expand content management and delivery solutions. Help maintain and extend existing products. Formulate ideas and visions for new products and services. Be a proactive part of the development team and provide support and assistance to others when required. Qualification/Experience Required: The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT related course plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills: Essential: ASP.NET MVC with C# (Visual Studio), Castle, nHibernate, XHTML and JavaScript. Experience of Test Driven Development (TDD) using tools such as NUnit. Preferable: Experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortioseSVN), JQuery. Learn: Fluent NHibernate, S#arp Architecture, Spark (View engine), Behaviour Driven Design (BDD) using MSpec. Furthermore, you will possess good working knowledge of W3C web standards, web usability, web accessibility and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

  • OpenMP: Get total number of running threads

    - by Konrad Rudolph
    I need to know the total number of threads that my application has spawned via OpenMP. Unfortunately, the omp_get_num_threads() function does not work here, since it only yields the number of threads in the current team. However, my code runs recursively (divide and conquer, basically) and I want to spawn new threads as long as there are still idle processors, but no more. Is there a way to get around the limitations of omp_get_num_threads and get the total number of running threads? If more detail is required, consider the following pseudo-code that models my workflow quite closely:

        function divide_and_conquer(Job job, int total_num_threads):
            if job.is_leaf():
                # Recurrence base case.
                job.process()
                return

            left, right = job.divide()
            current_num_threads = omp_get_num_threads()

            if current_num_threads < total_num_threads:    # (1)
                #pragma omp parallel num_threads(2)
                #pragma omp section
                divide_and_conquer(left, total_num_threads)
                #pragma omp section
                divide_and_conquer(right, total_num_threads)
            else:
                divide_and_conquer(left, total_num_threads)
                divide_and_conquer(right, total_num_threads)

            job = merge(left, right)

    If I call this code with a total_num_threads value of 4, the conditional annotated with (1) will always evaluate to true (because each thread team will contain at most two threads), and thus the code will always spawn two new threads, no matter how many threads are already running at a higher level. I am searching for a platform-independent way of determining the total number of threads that are currently running in my application.
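
    A hedged sketch of one possible workaround: keep a process-wide count of busy threads and consult it before spawning, since OpenMP itself only reports per-team numbers. The Job helpers below are stand-ins for the real job API, nested parallelism must be enabled, and the count is only approximate under heavy contention:

        #include <omp.h>

        typedef struct Job Job;                     /* opaque; stand-in for the poster's Job type */
        int  job_is_leaf(Job *);
        void job_process(Job *);
        void job_divide(Job *, Job **left, Job **right);
        void job_merge(Job *, Job *left, Job *right);

        static int g_busy_threads = 1;              /* the initial thread counts as busy */

        static int reserve_thread(int limit)        /* try to claim one extra thread */
        {
            int ok = 0;
            #pragma omp critical(thread_budget)
            {
                if (g_busy_threads < limit) { ++g_busy_threads; ok = 1; }
            }
            return ok;
        }

        static void release_thread(void)
        {
            #pragma omp critical(thread_budget)
            --g_busy_threads;
        }

        void divide_and_conquer(Job *job, int total_num_threads)
        {
            Job *left, *right;

            if (job_is_leaf(job)) { job_process(job); return; }
            job_divide(job, &left, &right);

            if (reserve_thread(total_num_threads)) {
                #pragma omp parallel sections num_threads(2)
                {
                    #pragma omp section
                    divide_and_conquer(left, total_num_threads);
                    #pragma omp section
                    divide_and_conquer(right, total_num_threads);
                }
                release_thread();                   /* the extra worker is idle again */
            } else {
                divide_and_conquer(left, total_num_threads);
                divide_and_conquer(right, total_num_threads);
            }

            job_merge(job, left, right);
        }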

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build still runs to completion when the JSP compilation has errors. This is part of my build.xml:

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result">
            <jvmarg value="-Djava.compiler=NONE"/>
            <arg value="-extend"/>
            <arg value="com.orionserver.http.OrionHttpJspPage"/>
            <arg value="-batchMask"/>
            <arg value="*.jsp"/>
            <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <echo level="info">Result Property: ${result}</echo>

    I have tried setting the property failonerror="true" but that does not change anything. I receive the following output:

        [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war...
        [java] Setting up temp area...
        [java] Expanding archive in temp area...
        [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol
        [java] symbol  : variable reqvst
        [java] location: class _web_2d_inf._jsp._password
        [java] out.print(reqvst.getAttribute("test"));
        [java]           ^
        [java] 1 error
        [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ...
        [java] Removing temp area...
        [echo] Result Property: 0
        ...(more commands)
        BUILD SUCCESSFUL

    In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant <java> task page, I am confused by: "By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a property and have it assigned to the result code (barring immutability, of course). When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an error and would mean the build exits."
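
    Since ojspc itself appears to exit with 0 even on failures, one hedged workaround is to capture its console output and fail the build when it looks like a compile error; a sketch (the property names are arbitrary, and matching on the word "error" is admittedly crude):

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar"
              resultproperty="result" outputproperty="ojspc.output">
            <!-- same <jvmarg>/<arg> elements as above -->
        </java>

        <fail message="JSP compilation failed - see ojspc output">
            <condition>
                <contains string="${ojspc.output}" substring="error" casesensitive="false"/>
            </condition>
        </fail>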

  • Save and load a Dictionary to/from an encrypted, unreadable, or binary format?

    - by Prix
    I have a dictionary like the following:

        public Dictionary<int, SpawnList> spawnEntities = new Dictionary<int, SpawnList>();

    The class being used is as follows:

        public class SpawnList
        {
            public int NpcID { get; set; }
            public string Name { get; set; }
            public int Level { get; set; }
            public int TitleID { get; set; }
            public int StaticID { get; set; }
            public entityType Status { get; set; }
            public int TypeA { get; set; }
            public int TypeB { get; set; }
            public int TypeC { get; set; }
            public int ZoneID { get; set; }
            public int Heading { get; set; }
            public float PosX { get; set; }
            public float PosY { get; set; }
            public float PosZ { get; set; }
        }

        /// <summary>Entity type enum.</summary>
        public enum entityType
        {
            Ally, Enemy, SummonPet, NPC, Object, Monster, Gatherable, Unknown
        }

    How could I save this Dictionary to a binary or encrypted format so I could later load it back into my application? My limitation is .NET 3.5 - I can't use anything higher.
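
    One hedged option on .NET 3.5 is BinaryFormatter, which produces a compact, non-human-readable binary file (note this is obfuscation rather than real encryption - wrap the FileStream in a CryptoStream if actual encryption is required). A sketch, assuming SpawnList is marked [Serializable]:

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class SpawnStore
        {
            public static void Save(string path, Dictionary<int, SpawnList> data)
            {
                using (var fs = new FileStream(path, FileMode.Create))
                {
                    new BinaryFormatter().Serialize(fs, data);   // writes the whole dictionary graph
                }
            }

            public static Dictionary<int, SpawnList> Load(string path)
            {
                using (var fs = new FileStream(path, FileMode.Open))
                {
                    return (Dictionary<int, SpawnList>)new BinaryFormatter().Deserialize(fs);
                }
            }
        }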

  • Syncing project versions across servers with Java

    - by Alexadar
    I have a problem: I need to sync versions of a project. Changes are made to the main project, which resides on a central ColdFusion server, and there are about 30 remote ColdFusion servers that have to be synced with the latest version on the central server. The new sync application is to be written in Java. I have a direct link to each remote location, and mostly Windows OS and Windows tools are used.

    My idea: the synchronization is performed at the level of folders, and I would like to avoid SVN and similar tools. The plan is to compare folders on the local server (the comparison is between the old and new versions), with no communication with the remote servers, compute the difference between the versions, and send that difference to the remote machines. We will install Tomcat on every remote server, and a "sync application" deployed on Tomcat will apply the differences. Each remote server will then report back whether the sync succeeded.

    Any kind of suggestion, or a completely new approach to this, is more than welcome. Thanks, best regards.

  • Async tasks in ASP.NET: HttpContext.Current.Items is empty - how do I handle this?

    - by GuruC
    We are running a very large web application in ASP.NET MVC on .NET 4.0. Recently we had an audit done, and the performance team reported a lot of null reference exceptions, so I started investigating from the dumps and the event viewer. My understanding so far: we are using async tasks in our controllers, and we rely on the HttpContext.Current.Items hashtable to store a lot of application-level values.

        Task<Articles>.Factory.StartNew(() =>
        {
            System.Web.HttpContext.Current = ControllerContext.HttpContext.ApplicationInstance.Context;
            var service = new ArticlesService(page);
            return service.GetArticles();
        }).ContinueWith(t => SetResult(t, "articles"));

    So we are copying the context object onto the new thread that is spawned from the task factory, and context.Items is used again on that thread wherever necessary. For example:

        public class SomeClass
        {
            internal static int StreamID
            {
                get
                {
                    if (HttpContext.Current != null)
                    {
                        return (int)HttpContext.Current.Items["StreamID"];
                    }
                    else
                    {
                        return DEFAULT_STREAM_ID;
                    }
                }
            }
        }

    This runs fine as long as the number of parallel requests is moderate. My questions are:

    1. When the load increases and there are too many parallel requests, I notice that HttpContext.Current.Items is empty. I cannot figure out why, and this causes all the null reference exceptions.
    2. How do we make sure it is populated? Is there any workaround?

    NOTE: I read through StackOverflow and people ask questions like "HttpContext.Current is null" - but in my case it is not null, it is empty. I also read an article where the author says that sometimes the request object is terminated, which can cause problems because Dispose has already been called on its objects. I am copying the context object - it's just a shallow copy, not a deep copy.
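
    A common hedged workaround is to stop flowing HttpContext.Current into the worker at all and instead capture just the values the task needs while still on the request thread, for example:

        // On the request thread, where HttpContext.Current is reliable, capture what the task needs...
        int streamId = (int)HttpContext.Current.Items["StreamID"];

        Task<Articles>.Factory.StartNew(() =>
        {
            // ...and use only the captured copies here; no HttpContext access on the worker thread.
            var service = new ArticlesService(page, streamId);   // hypothetical overload taking the id
            return service.GetArticles();
        }).ContinueWith(t => SetResult(t, "articles"));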

  • Another developer revoked and re-created my client's iOS Distribution Certificate - does this mean I can never update my client's existing app?

    - by Schnapple
    Here is the story so far: A client hired us to do an iPhone app for them. This client had never done an iPhone app before and as part of the arrangement we handled all aspects for them, including app store submission, and we handle some level of future development (new features, bug/security fixes, etc.) We created a Distribution certificate and key pair on the client's behalf We developed the app, published it to the App Store without incident Some time later the client hired a second developer to do a different app for them This second developer, it appears, has revoked the existing Distribution certificate and created a new one with a new key pair on their system This second developer shared the new Distribution certificate and key pair with us for future reference. Due to user error, this new certificate and key pair has now been imported onto the Macintosh where the original certificate and key pair for the original app we developed were created and the originals were not backed up. So we have App #1 on the App Store with Distribution certificate/key pair #1 App #2 either on the App Store or soon to be using Distribution certificate/key pair #2 Distribution certificate/key pair #1 appears to be lost now So my question is: if we ever need to update App #1, will we be able to, using Distribution certificate/key pair #2? Or will we have to upload it as a new app?

  • How can I build a wrapper to wait for listening on a port?

    - by BillyBBone
    Hi, I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question -- if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails. The purpose of this is knowing if a nightly build works (at least up to a point) or not. I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port, just intercepting the request and using that as an indication that my test passed. I think it would be preferable to intercept the open socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depths as far as how exactly to do this. Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level? Thanks in advance, - B

  • When should EntityManagerFactory instance be created/opened ?

    - by masato-san
    OK, I have read a bunch of articles/examples on how to write an EntityManagerFactory singleton. One of the easiest for me to understand was: http://javanotepad.blogspot.com/2007/05/jpa-entitymanagerfactory-in-web.html

    I learned that the EntityManagerFactory (EMF) should only be created once, preferably in application scope, and that the EMF should be closed once it's used (?). So I wrote an EMF helper class for business methods to use:

        public class EmProvider {

            private static final String DB_PU = "KogaAlphaPU";
            public static final boolean DEBUG = true;

            private static final EmProvider singleton = new EmProvider();
            private EntityManagerFactory emf;

            private EmProvider() {}

            public static EmProvider getInstance() { return singleton; }

            public EntityManagerFactory getEntityManagerFactory() {
                if (emf == null) {
                    emf = Persistence.createEntityManagerFactory(DB_PU);
                }
                if (DEBUG) {
                    System.out.println("factory created on: " + new Date());
                }
                return emf;
            }

            public void closeEmf() {
                if (emf.isOpen() || emf != null) {
                    emf.close();
                }
                emf = null;
                if (DEBUG) {
                    System.out.println("EMF closed at: " + new Date());
                }
            }
        } //end class

    And my method using EmProvider:

        public String foo() {
            EntityManager em = null;
            List<Object[]> out = null;
            try {
                em = EmProvider.getInstance().getEntityManagerFactory().createEntityManager();
                Query query = em.createNativeQuery(JPQL_JOIN); // just some random query
                out = query.getResultList();
            } catch (Exception e) {
                // handle error....
            } finally {
                if (em != null) {
                    em.close(); // make sure to close EntityManager
                }
            }
            // ...
        }

    I made sure to close the EntityManager (em) within the method, as suggested. But when should the EntityManagerFactory be closed? And why does the EMF have to be a singleton so badly? I read about concurrency issues, but as I am not an experienced multi-threaded programmer, I can't really get clear on this idea.
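
    In a web application, a common hedged approach is to build the factory once at startup and close it when the application is undeployed, e.g. from a ServletContextListener (a sketch; the listener class name is arbitrary and it still has to be registered in web.xml):

        public class EmfLifecycleListener implements javax.servlet.ServletContextListener {

            public void contextInitialized(javax.servlet.ServletContextEvent sce) {
                // touch the singleton so the factory is created once, up front
                EmProvider.getInstance().getEntityManagerFactory();
            }

            public void contextDestroyed(javax.servlet.ServletContextEvent sce) {
                // one close for the whole application lifetime
                EmProvider.getInstance().closeEmf();
            }
        }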

  • Is it rude to add "TODO: wtf?" in source code?

    - by mafutrct
    I encountered something ... well, you know TDWTF... something like that in an international project I'm working on. The code was written by a team mate. For a second I was tempted to add // TODO: wtf? to the infringing code but restrained myself. The project is indeed on a professional level, but for internal conversation, more colloquial language is used - but still no "bad" words as in "wtf". Usually, I'd surely not add such a comment, but I believe there are a few factors that allow consideration still: 1. It is not visible except as a comment in the source code (of course). 2. It is internal to our team - other developers may happen see it but it is not their code. 3. Comments in source code are usually accepted to be more colloquial, since it is "kept between us developers". Would you totally advise to never add such a comment? Or do you regard it as an edge case? Did you possibly add something similar yourself?

  • Accessing program information that gdb sees in C++

    - by anon
    I have a program written in C++, on Linux, compiled with -g. When I run it under gdb, I can:

    1. set breakpoints,
    2. print out variables at those breakpoints,
    3. see the stack frame, and
    4. given a variable that's a structure, print out parts of the structure (i.e. how ddd displays information).

    Now, given that my program is compiled with "-g" - is there any way that I can access this power within my program itself? I.e., is there some std::vector<string> getStackFrame(); function I can call to get the current stack frame at the current point of execution? Given a pointer to an object and its type, can I do something like std::vector<string> getClassMembers(class_name);? I realize the default answer is "no, C++ doesn't support that level of introspection" - however, recall that I'm on Linux, my program is compiled with "-g", and gdb can do it, so clearly the information is there. The question is: is there some API for accessing it?

    EDIT: PS Naysayers, I'd love to see a reason for closing this question.
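
    Not the full answer, but on Linux/glibc part of this is directly available in-process: backtrace()/backtrace_symbols() return the raw call stack (link with -rdynamic so function names show up). A minimal sketch:

        #include <execinfo.h>
        #include <cstdlib>
        #include <string>
        #include <vector>

        std::vector<std::string> getStackFrame()
        {
            void *addrs[64];
            int depth = backtrace(addrs, 64);                  // raw return addresses
            char **symbols = backtrace_symbols(addrs, depth);  // "binary(function+0x...) [addr]" strings
            std::vector<std::string> frames(symbols, symbols + depth);
            free(symbols);                                     // backtrace_symbols mallocs the array
            return frames;
        }

    Names come out mangled (abi::__cxa_demangle can fix that), and inspecting an object's members needs the DWARF debug info itself, e.g. via a library such as libdwarf - that is the part gdb really does for you.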

  • regexp target last main li in list

    - by veilig
    I need to target the starting tag of the last top-level LI in a list that may or may not contain sublists in various positions - without using CSS or JavaScript. Is there a simple/elegant regexp that can help with this? I'm no guru with them, but it appears that the need for greedy/non-greedy selectors when I'm selecting all the middle text - (.*) vs (.+) - changes as nested lists are added and moved around in the list, and this is throwing me off.

        $pattern = '/^(<ul>.*)<li>(.+<\/li><\/ul>)$/';
        $replacement = '$1<li id="lastLi">$3';

    Perhaps there is an easier approach? Converting to XML to target the LI and then converting back? I.e.:

    Single element:

        <ul>
            <li>TARGET</li>
        </ul>

    Multiple elements:

        <ul>
            <li>foo</li>
            <li>TARGET</li>
        </ul>

    Nested list before the end:

        <ul>
            <li>
                foo
                <ul>
                    <li>bar</li>
                </ul>
            </li>
            <li>TARGET</li>
        </ul>

    Nested list at the end:

        <ul>
            <li>foo</li>
            <li>
                TARGET
                <ul>
                    <li>bar</li>
                </ul>
            </li>
        </ul>
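
    If the regexp keeps fighting back, a hedged alternative is to let a DOM parser find the element; a sketch with PHP's DOMDocument, assuming $html holds just the list fragment:

        <?php
        $doc = new DOMDocument();
        $doc->loadHTML($html);   // libxml wraps the fragment in html/body for us

        $topUl  = $doc->getElementsByTagName('ul')->item(0);  // outermost list comes first in document order
        $lastLi = null;
        foreach ($topUl->childNodes as $node) {
            if ($node->nodeName === 'li') {   // direct children only, so nested <li>s are ignored
                $lastLi = $node;
            }
        }
        if ($lastLi !== null) {
            $lastLi->setAttribute('id', 'lastLi');
        }

        echo $doc->saveHTML($topUl);   // PHP >= 5.3.6; otherwise call saveHTML() and trim the wrapper
        ?>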

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a 2nd-level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {

            @Id
            @GeneratedValue
            private long id;

            @Column(length = 5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms. After the cache, they take an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done that will increase performance?
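
    One thing worth checking (hedged, since it depends on the rest of the configuration): the second-level cache only serves lookups by id, so a Criteria list() keeps hitting the database unless the query cache is also enabled and the query is marked cacheable, roughly:

        <!-- hibernate.cfg.xml -->
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    and per query:

        List<MyEntity> all = session.createCriteria(MyEntity.class)
                                    .setCacheable(true)   // stores the result ids in the query cache
                                    .list();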

  • c# - what approach can I use to extend a group of classes that include implemented methods? (see des

    - by Greg
    Hi, I want to create an extendible package I am writing that has Topology, Node & Relationship classes. The idea is these base classes would have the various methods in them necessary to base graph traversal methods etc. I would then like to be able to reuse this by extending the package. For example the base requirements might see Relationship with a parentNode & childNode. Topology would have a List of Nodes and List of Relationships. Topology would have methods like FindChildren(int depth). Then the usage would be to extend these such that additional attributes for Node and Relationships could be added etc. QUESTION - What would be the best approach to package & expose the base level classes/methods? (it's kind of like a custom collection but with multiple facets). Would the following concepts come into play: Interfaces - would this be a good idea to have ITopology, INode etc, or is this not required as the user would extend these classes anyway? Abstract Classes - would the base classes be abstract classes Custom Generic Collection - would some approach using this concept assist (but how would this work if there are the 3 different classes) thanks

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK and there is also an index on Status, LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously pushing the lease time out by 5 minutes and setting a new lease ticket.

    So the problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible.

    So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    Deadlock graph:

    Table schema (simplified):
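
    For comparison, a hedged alternative dequeue pattern that avoids holding a range lock across the select-then-update window: claim the row in a single atomic UPDATE and let concurrent readers skip already-locked rows with READPAST. A sketch against the same table (@Status assumed to be declared elsewhere, as above):

        begin tran

        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        update top (1) QueuedImportJobs with (rowlock, updlock, readpast)
        set    LeaseTimeout = DATEADD(mi, 5, @CurrentTS),
               LeaseTicket  = newid()
        output inserted.*                          -- returns the claimed row to the caller
        where  Status = @Status
          and  (LeaseTimeout is null or @CurrentTS > LeaseTimeout)

        commit tran

    Note that TOP (1) in an UPDATE is not ordered, so this does not guarantee strict Id order; if FIFO matters, the same hints can be applied to a CTE over SELECT TOP 1 ... ORDER BY Id and the CTE updated instead.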

  • C++ Suppress Automatic Initialization and Destruction

    - by Travis G
    How does one suppress the automatic initialization and destruction of a type? While it is wonderful that T buffer[100] automatically initializes all the elements of buffer, and destroys them when they fall out of scope, this is not the behavior I want.

        #include <iostream>

        static int created = 0, destroyed = 0;

        struct S
        {
            S() { ++created; }
            ~S() { ++destroyed; }
        };

        template <typename T, size_t KCount>
        class Array
        {
        private:
            T m_buffer[KCount];

        public:
            Array()
            {
                // some way to suppress the automatic initialization of m_buffer
            }

            ~Array()
            {
                // some way to suppress the automatic destruction of m_buffer
            }
        };

        int main()
        {
            {
                Array<S, 100> arr;
            }

            std::cout << "Created:\t" << created << std::endl;
            std::cout << "Destroyed:\t" << destroyed << std::endl;
            return 0;
        }

    The output of this program is:

        Created:    100
        Destroyed:  100

    I would like it to be:

        Created:    0
        Destroyed:  0

    My only idea is to make m_buffer some trivially constructed and destructed type like char and then rely on operator[] to wrap the pointer math for me, although this seems like a horribly hacked solution. Another solution would be to use malloc and free, but that gives a level of indirection that I do not want.
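
    For reference, a hedged sketch of the usual approach: raw, suitably aligned storage plus placement new, so elements are constructed and destroyed only on request (this uses C++11 alignas; pre-C++11, std::tr1::aligned_storage or a union trick serves the same purpose):

        #include <cstddef>
        #include <new>          // placement new

        template <typename T, std::size_t KCount>
        class Array
        {
        private:
            // Raw bytes with T's alignment - no constructors or destructors run automatically.
            alignas(T) unsigned char m_buffer[KCount * sizeof(T)];

        public:
            Array() {}          // nothing constructed
            ~Array() {}         // nothing destroyed; the caller must destroy what it constructed

            T* construct(std::size_t i)     // explicitly build element i in place
            {
                return new (m_buffer + i * sizeof(T)) T();
            }

            void destroy(std::size_t i)     // explicitly tear it down again
            {
                (*this)[i].~T();
            }

            T& operator[](std::size_t i)
            {
                return *reinterpret_cast<T*>(m_buffer + i * sizeof(T));
            }
        };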

  • ZeroMQ REQ/REP on ipc:// and concurrency

    - by Metiu
    I implemented a JSON-RPC server using a REQ/REP 0MQ ipc:// socket and I'm experiencing strange behavior which I suspect is due to the fact that the ipc:// underlying unix socket is not a real socket, but rather a single pipe. From the documentation, one has to enforce strict zmq_send()/zmq_recv() alternation, otherwise the out-of-order zmq_send() will return an error. However, I expected the enforcement to be per-client, not per-socket. Of course with a Unix socket there is just one pipeline from multiple clients to the server, so the server won't know who it is talking with. Two clients could zmq_send() simultaneously and the server would see this as an alternation violation. The sequence could be: ClientA: zmq_send() ClientB: zmq_send() : will it block until the other send/receive completes? will it return -1? (I suspect it will with ipc:// due to inherent low-level problems, but with TCP it could distinguish the two clients) ClientA: zmq_recv() ClientB: zmq_recv() so what about tcp:// sockets? Will it work concurrently? Should I use some other locking mechanism to work around this?

  • AJAX or a server side framework?

    - by Romansky
    I am working with a friend on building a web site; in general this web site will be a custom web app along with a very custom social-network type of thing. Currently I have a mock-up site that uses simple PHP with AJAX, JSON and jQuery, and I love how it works - I love the way it all fits together. But for the mock-up I did not implement any of the social network design patterns such as login, ratings, groups, etc.

    This brought me to a higher-level decision: do I want to develop all this functionality by hand or use some kind of framework? I spent an entire day researching, and it would seem that using Drupal and similar frameworks will make the social network part easy (overlooking the customization requirement for now), but will make client-side web app development less so. I found other frameworks that are more developer-friendly (customizable), such as Zend and Symfony, but these seem to take a lot of the power from the client and implement it on the server side; to me this seems a waste (and an unjustified performance bottleneck). Finally I found the Aptana Jaxer framework, which seems to think the same way I do. That said, it seems a bit under-developed; I didn't find modules for a social network and the community around it seems thin (searching for Jaxer on StackOverflow returns few results). So other than making server-side DB communication a bit simpler, it does not help me greatly.

    My requirement is a good facility for developing web apps that also provides, out of the box, all the user-centric logic usually needed for social networks. What would you recommend?

    EDIT: OK, let's fine-tune this question. After considering this a bit further: is there a good downloadable source of a social network site in PHP that I can build my web app around? (I really like using jQuery, AJAX, JSON, etc.)

  • DNS protocol message example

    - by virtual-lab
    Hello there, I am trying to figure out how to send DNS messages from an application socket adapter to a DNSBL. I spent the last two days understanding the basics, including experimenting with Wireshark to capture an example of the messages exchanged. Now I would like to query the DNS without using the dig or host commands (I'm using Ubuntu). How can I perform this at a low level, without those tools wrapping the request in a proper DNS message for me? How should the message be sent - as hex or as a string? Thanks in advance for any help. Regards, Alessandro Ilardo

    Comment added: I am investigating JDev and Oracle SOA. The platform provides a Socket Adapter which simply applies a transformation (XSLT) and sends the message straight to the socket. How the payload parameters (e.g. the host I'm looking up) are wrapped within the message is left to the developer. So basically I have an idea of how the whole DNS message is structured, but rather than put everything into JDev straight away, I'd like to run some tests on my own just to make sure I have a valid message format. I am not using any specific language (I don't even understand why they moved my question from Server Fault), and I don't want to use any tools which would hide part of the message, such as the header. I know they work well, by the way. I guess this has something to do with packet injection. Someone suggested I use telnet, but I've only used that for SMTP or HTTP, and I haven't got a clue how it would work for a DNS request. Does it make more sense now?
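
    For a language-agnostic sanity check, here is a hedged Python sketch that builds a minimal DNS query by hand (the message is raw binary on the wire, not hex text) and sends it over UDP; the resolver address is just an example:

        import socket
        import struct

        def build_query(name, qtype=1, qclass=1):      # qtype 1 = A record, qclass 1 = IN
            header = struct.pack('>HHHHHH',
                                 0x1234,       # arbitrary transaction ID
                                 0x0100,       # flags: standard query, recursion desired
                                 1, 0, 0, 0)   # QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
            qname = b''
            for label in name.split('.'):
                qname += struct.pack('B', len(label)) + label.encode('ascii')
            qname += b'\x00'                           # root label terminates the name
            question = qname + struct.pack('>HH', qtype, qclass)
            return header + question

        query = build_query('example.org')
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(query, ('8.8.8.8', 53))            # any reachable resolver/DNSBL works here
        reply, _ = sock.recvfrom(512)
        print('%d bytes received, transaction id matches: %s'
              % (len(reply), reply[:2] == query[:2]))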
