Search Results


  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer--suggestions of all kinds are welcome.) My container is a std::multiset that holds Event structs: typedef std::multiset< Event, std::less< Event > > EventPQ; with the Event structs sorted by their double time members: struct Event { public: explicit Event(double t) : time(t), eventID(), hostID(), s() {} Event(double t, int eid, int hid, int stype) : time(t), eventID( eid ), hostID( hid ), s(stype) {} bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); } double time; ... }; The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious: double t = 0.0; double nextTimeStep = t + EPID_DELTA_T; EventPQ::iterator eventIter = currentEvents.begin(); while ( t < EPID_SIM_LENGTH ) { // Add some events to currentEvents while ( ( *eventIter ).time < nextTimeStep ) { Event thisEvent = *eventIter; t = thisEvent.time; executeEvent( thisEvent ); eventCtr++; currentEvents.erase( eventIter ); eventIter = currentEvents.begin(); } t = nextTimeStep; nextTimeStep += EPID_DELTA_T; } void Simulation::addEvent( double et, int eid, int hid, int s ) { assert( currentEvents.find( Event(et) ) == currentEvents.end() ); Event thisEvent( et, eid, hid, s ); currentEvents.insert( thisEvent ); }
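    A hedged hypothesis, since the inserted times are confirmed legal: if any Event ever carries a NaN time (for example from a 0.0/0.0 or log(0) upstream), operator< stops being a strict weak ordering -- NaN compares false against everything in both directions -- and the multiset's internal tree silently corrupts from that point on, which matches an infrequent, late-onset failure. A minimal sketch of the failure mode:

        #include <iostream>
        #include <limits>
        #include <set>

        struct Event {
            explicit Event(double t) : time(t) {}
            bool operator<(const Event& rhs) const { return time < rhs.time; }
            double time;
        };

        int main() {
            std::multiset<Event> events;
            events.insert(Event(2.0));
            // NaN is "equivalent" to every element under this comparator,
            // so later inserts can be routed to the wrong subtree.
            events.insert(Event(std::numeric_limits<double>::quiet_NaN()));
            events.insert(Event(1.0));
            for (std::multiset<Event>::iterator it = events.begin(); it != events.end(); ++it)
                std::cout << it->time << '\n';   // order no longer guaranteed ascending
        }

    An assert(et == et) in addEvent (false only for NaN) would confirm or rule this out cheaply. Separately, note that the outer while loop dereferences eventIter without checking currentEvents.empty(), which is undefined behaviour on an empty container.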

  • Mac Safari randomly recreating cookie when I refresh my login screen. Very bizarre

    - by mcintyre321
    We have found an issue in our app where Safari on the Mac randomly recreates a login cookie from a logged off session. I have a fiddler archive with this behaviour here. Note that some stuff has been removed from this to make it easier to get, but nothing which sets a cookie or anything has been taken out - only repetitions of requests 3-8. I'll talk you through the running order Request 1: user logs out via call to /logout.aspx - Set-Cookie returned setting cookie expiry date to 1999 Requests 2-8: user refreshes login page sending calls to root or /res/en-US/s.js - no cookie is sent to server or received back, and access is denied. I have cut out a lot of requests of this nature from the log as they are boring Request 9: request for /res/en-US/s.js - Hv3 authentication cookie has mysteriously reappeared! Wat. There was NO set-cookie! WTFF! Request 10+ : now the cookie has reappeared, the site logs the user in AGAIN The cookie, when examined in Safari looks like <dict> <key>Created</key> <real>259603523.26834899</real> <key>Domain</key> <string>.mysite.dev</string> <key>Expires</key> <date>2010-03-24T16:05:22Z</date> <key>HttpOnly</key> <string>TRUE</string> <key>Name</key> <string>.Hv3</string> <key>Path</key> <string>/</string> </dict> One thing to note is that in Safari, the cookie domain is .mysite.dev not mysite.dev (which is the cookie domain specified in web.config) - however, given that access is denied in requests 2-8, it looks like the cookie has expired OK. If you look in the list of cookies in the browser during 2-8, the .Hv3 cookie is not there. Is this our bug or Safari's? What can I do to stop it happening?
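    One hedged mitigation for the domain mismatch noted above (the stored cookie is scoped to .mysite.dev while web.config specifies mysite.dev): have logout expire the cookie under both domain spellings, so a stray dot-prefixed copy cannot survive to be resurrected. A sketch in ASP.NET; the cookie name matches the dump above, everything else is an assumption:

        // Hypothetical logout helper: kill ".Hv3" for both domain variants.
        void ExpireAuthCookie(HttpResponse response)
        {
            string[] domains = { "mysite.dev", ".mysite.dev" };
            foreach (string domain in domains)
            {
                HttpCookie dead = new HttpCookie(".Hv3", String.Empty);
                dead.Domain = domain;
                dead.Path = "/";
                dead.HttpOnly = true;
                dead.Expires = new DateTime(1999, 1, 1);  // match the existing 1999 expiry
                response.Cookies.Add(dead);
            }
        }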

  • Best Practices for Working with Multiple Monitors in Visual Studio 2010

    - by Clever Human
    Now that Visual Studio 2010 has support for multiple monitors, I am curious how other people have their environments arranged. I have yet to come up with an arrangement that I am really satisfied with. The current best I have come up with for my 2-monitor system is to have all code windows detached. Then, on my primary monitor, I am able to have two code windows side by side (using the Windows 7 keyboard shortcuts WinKey+LeftArrow and WinKey+RightArrow). On my secondary monitor I put the rest of the IDE, with all of the tool windows that are normally on the bottom (error list, find window, call stack, etc.) docked where the code windows normally go. I've also tried having all those things detached and having almost nothing in the IDE proper. The problems with this layout are:
    - Newly opened code windows always open in the IDE, not on top of one of the detached windows.
    - Detached code windows do not remember their exact placement from session to session (they are slightly off, forcing me to use the WinKey + arrow key shortcut again and again for each window).
    - There seems to be no way to make the code panes aware that they are on top of one another (i.e., multiple tabs). The CTRL+TAB shortcut always displays on top of the IDE proper.
    - The code panes are always "on top" of (children of) the IDE, so clicking on any code pane brings the IDE to the foreground, even when I care only about that code pane and not the IDE.
    - Other more minor issues...
    What would go a long way toward making this better is having the code panes detach such that they are tab strips that can have other code panes docked within them. The new multi-monitor support in VS2010 is good, but it still seems really lacking. Can these issues be solved with an add-in? If so, is anyone aware of one? Is there a better way to work with the IDE on multiple monitors than what I am doing? NOTE: While this question is subjective (there is certainly no "this is the best way and that's final" answer), I'd really like to know possibly better methods of working with the IDE than what I have come up with. The intent is not to start a "mine's best" flame war.

  • AJAX: Problems returning multiple variables

    - by fwaokda
    First off, sorry if I'm missing something simple; I just started working with AJAX today. I have an issue where I'm trying to get information from my database, but different records have different amounts of values. For instance, each record has a "features" column. In the features column I store a string. (ex: Feature1~Feature2~Feature3~Feature4... ) When I'm building the object I take apart the string and store all the features into an array. Some objects can have 1 feature, others can have up to whatever. So... how do I return these values back to my ajax function from my php page? Below is the ajax function I was trying, and I'll provide a link with my php file. [ next.php : http://pastebin.com/SY74jV7X ]

        $("a#next").click(function() {
            $.ajax({
                type : 'POST',
                url : 'next.php',
                dataType : 'json',
                data : { nextID : $("a#next").attr("rel") },
                success : function ( data ) {
                    var lastID = $("a#next").attr("rel");
                    var originID = $("a#next").attr("rev");
                    if(lastID == 1) {
                        lastID = originID;
                    } else {
                        lastID--;
                    }
                    $("img#spotlight").attr("src",data.spotlightimage);
                    $("div#showcase h1").text(data.title);
                    $("div#showcase h2").text(data.subtitle);
                    $("div#showcase p").text(data.description);
                    $("a#next").attr("rel", lastID);
                    for(var i=0; i < data.size; i++) {
                        $("ul#features").append("<li>").text(data.feature+i).append("</li>");
                    }
                    /* for(var j=1; j < data.picsize; j++) {
                        $("div.thumbnails ul").append("<li>").text(data.image+j).append("</li>");
                    } */
                },
                error : function ( XMLHttpRequest, textStatus, errorThrown) {
                    $("div#showcase h1").text("An error has occured: " + errorThrown);
                }
            });
        });
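    A hedged answer to the variable-length problem: have next.php return the features as a real JSON array (e.g. 'features' => explode('~', $row['features']) inside the json_encode'd response; the key name is an assumption, since the pastebin isn't inline), then iterate that array client-side. Note also that .append("<li>").text(...) sets the text of the <ul> itself, not the new <li>; building the element first avoids that:

        // Sketch: assumes the PHP side does
        //   echo json_encode(array('features' => explode('~', $features), ...));
        success : function ( data ) {
            $("ul#features").empty();
            $.each(data.features, function (i, feature) {
                $("ul#features").append($("<li>").text(feature));
            });
        }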

  • organizing external libraries and include files

    - by stijn
    Over the years my projects use more and more external libraries, and the way I did it starts feeling more and more awkward (although, that has to be said, it does work flawlessly). I use VS on Windows, CMake on others, and CodeComposer for targeting DSPs on Windows. Except for the DSPs, both 32bit and 64bit platforms are used. Here's a sample of what I am doing now; note that as shown, the different external libraries themselves are not always organized in the same way. Some have different lib/include/src folders, others have a single src folder. Some came ready-to-use with static and/or shared libraries, others were built:

        /path/to/projects
            /projectA
            /projectB
        /path/to/apis
            /apiA
                /src
                /include
                /lib
            /apiB
                /include
                /i386/lib
                /amd64/lib
        /path/to/otherapis
            /apiC
                /src
        /path/to/sharedlibs
            /apiA_x86.lib        -->some libs were built in all possible configurations
            /apiA_x86d.lib
            /apiA_x64.lib
            /apiA_x64d.lib
            /apiA_static_x86.lib
            /apiB.lib            -->other libs have just one import library
        /path/to/dlls            -->most of this directory also gets distributed to clients
            /apiA_x86.dll            and it's in the PATH
            /apiB.dll

    Each time I add an external library, I roughly use this process:
    1. build it, if needed, for different configurations (release/debug/platform)
    2. copy its static and/or import libraries to 'sharedlibs'
    3. copy its shared libraries to 'dlls'
    4. add an environment variable, eg 'API_A_DIR', that points to the root for ApiA, like '/path/to/apis/apiA'
    5. create a VS property sheet and a CMake file to state the include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib
    6. add the property sheet/cmake file to the project needing the library
    It's especially steps 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and I would like to see how others deal with it. I was thinking of getting rid of the per-library environment variables, using just one 'API_INCLUDE_DIR' and populating it with the include files in an organized way:

        /path/to/api/include
            /apiA
            /apiB
            /apiC

    This way I do not need the include path in the property sheets nor the environment variables. For libs that are only used on Windows I don't even need a property sheet at all, as I can use #pragmas to instruct the linker what library to link to. Also in the code it will be more clear what gets included, and there is no need for wrappers to include files that have the same name but come from different libraries:

        #include <apiA/header.h>
        #include <apiB/header.h>
        #include <apiC_version1/header.h>

    The drawback is of course that I have to copy include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it?
    ** actually once libraries are built, the only thing I need from them is the include files and the libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore, so it can be deleted.
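    For what it's worth, the single-include-root idea maps onto one short CMake fragment (a sketch; API_INCLUDE_DIR and the library names are assumptions, not a prescription):

        # One shared header root instead of one environment variable per API.
        include_directories($ENV{API_INCLUDE_DIR})   # contains apiA/, apiB/, apiC_version1/ ...
        link_directories(/path/to/sharedlibs)
        add_executable(projectA main.cpp)
        target_link_libraries(projectA apiA apiB)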

  • Printing a variable only when it changes?

    - by user1781639
    First off, my question was a little vague or confusing since I'm not really sure how to phrase my question to be specific. I'm trying to query a database of stockists for a Knitting company (school project using PHP), but I'm looking to print the city as a heading instead of with each stockist's information. Here is what I have at the moment:

        $sql = "SELECT * FROM mc16korustockists where locale = 'south'";
        $result = pg_exec($sql);
        $nrows = pg_numrows($result);
        print $nrows;
        $items = pg_fetch_all($result);
        print_r($items);
        for ($i=0; $i<$nrows; $i++) {
            print "<h2>";
            print $items[$i]['city'];
            print "</h2>";
            print $items[$i]['name'];
            print $items[$i]['address'];
            print $items[$i]['city'];
            print $items[$i]['phone'];
            print "<br />";
            print "<br />";
        }

    I'm querying the database for all of the data in it, the rows being ref, name, address, city and phone, and executing it. Querying the number of rows and then using that to determine how many iterations for the loop to run is all fine, but what I'd like to have is for the h2 heading to appear above the for ($i=0;) line. Trying that just breaks my page, so it might be out of the question. I figure I'd have to count the number of entries in 'city' until it detects a change, then change the heading to that name, I think? That or make a heap of queries and set a variable for each name, but at that point I might as well do it manually (and I highly doubt it would be best practice). Oh, and I'd welcome any critiques of my PHP since I'm just starting out. Thanks, and if you need any more information, just ask! P.S. Our class is learning with PostgreSQL instead of MySQL, as you can see in the tags.
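    The standard pattern here is a control break (hedged: one way among several): ORDER BY city in the query, then track the last city seen and emit a new <h2> only when it changes. A sketch:

        $sql = "SELECT * FROM mc16korustockists WHERE locale = 'south' ORDER BY city";
        $result = pg_exec($sql);
        $items = pg_fetch_all($result);
        $lastCity = null;
        foreach ($items as $row) {
            if ($row['city'] !== $lastCity) {    // city changed: start a new section
                print "<h2>" . $row['city'] . "</h2>";
                $lastCity = $row['city'];
            }
            print $row['name'] . " " . $row['address'] . " " . $row['phone'];
            print "<br /><br />";
        }

    The ORDER BY matters: the break logic only works if rows for the same city arrive together.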

  • Realtime MySQL search results on an advanced search page

    - by Andrew Heath
    I'm a hobbyist, and started learning PHP last September solely to build a hobby website that I had always wished and dreamed another more competent person might make. I enjoy programming, but I have little free time and enjoy a wide range of other interests and activities. I feel learning PHP alone can probably allow me to create 98% of the desired features for my site, but that last 2% is awfully appealing: The most powerful tool of the site is an advanced search page that picks through a 1000+ record game scenario database. Users can data-mine to tremendous depths - this advanced page has upwards of 50 different potential variables. It's designed to allow the hardcore user to search on almost any possible combination of data in our database and it works well. Those who aren't interested in wading through the sea of options may use the Basic Search, which is comprised of the most popular parts of the Advanced search. Because the advanced search is so comprehensive, and because the database is rather small (less than 1,200 potential hits maximum), with each variable you choose to include the likelihood of getting any qualifying results at all drops dramatically. In my fantasy land where I can wield AJAX as if it were Excalibur, my users would have a realtime Total Results counter in the corner of their screen as they used this page, which would automatically update its query structure and report how many results will be displayed with the addition of each variable. In this way it would be effortless to know just how many variables are enough, and when you've gone and added one that zeroes out the results set. A somewhat similar implementation, at least visually, would be the Subtotal sidebar when building a new custom computer on IBuyPower.com For those of you actually still reading this, my question is really rather simple: Given the time & ability constraints outlined above, would I be able to learn just enough AJAX (or whatever) needed to pull this one feature off without too much trouble? would I be able to more or less drop-in a pre-written code snippet and tweak to fit? or should I consider opening my code up to a trusted & capable individual in the future for this implementation? (assuming I can find one...) Thank you.
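    On the drop-in question: this is close to drop-in territory, and well within reach without deep AJAX knowledge. A hedged sketch, where count.php is an assumed endpoint that runs the same WHERE clause with SELECT COUNT(*) and echoes a bare number, and #advanced-search (the form) and #result-counter are assumed element ids:

        // Refresh the running total whenever any search control changes.
        $("#advanced-search :input").change(function () {
            $.post("count.php", $("#advanced-search").serialize(), function (total) {
                $("#result-counter").text(total + " matching scenarios");
            });
        });

    With at most ~1,200 rows, the COUNT(*) query should be cheap enough to fire on every change without debouncing.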

  • How do I make dependency generation work for C? (Also: decode this sed/make statement!)

    - by Derek
    Hi all. I have a make build system that I am trying to decipher that someone else wrote. I am getting an error when I run it on a redhat system, but not when I run it on my solaris system. The versions of gmake are the same major revision (one off on minor revision). This is for building a C project, and the make system has a global Makefile.global that is inherited by each directory's local Makefile The Makefile.global has all the targets in it, starting with all: $(LIB) $(BIN) where LIB builds libs and BIN builds binaries. jumping down the targets I have $(LIB) : $(GEN_LIB) $(GEN_LIB) : $(GEN_DEPS) $(GEN_OBJS) $(AR) $(ARFLAGS) $(GEN_LIB) $(GEN_OBJS) $(GEN_DEPS) : @set -e; rm -f $@; \ $(CC) $(CDEP_FLAG) $(CFLAGS) $(INCDIRS) `basename $@ | sed 's/\.d/\.c/' | sed 's,^,$(HOME_SRC)/,'` | sed 's,\(.*\)\.o: ,$(GEN_OBJDIR)/\1.o $@ :,g' > [email protected] ; \ cat [email protected] > $@ ; \ cat [email protected] | cut -d: -f2 | grep '\.h' | sed 's,\.h,.h :,g' >> $@ ; \ rm [email protected] $(GEN_OBJS) : $(CC) $(CFLAGS) $(INCDIRS) -c $(*F).c -lmpi -o $@ I think these are all the relevant targets I need to include to answer my question. Definitions of those variables: CC = icc CDEP_FLAG = -M CFLAGS = various compiler flags ifdef type flags INCDIRS = include directory where all .h files are GEN_OBJDIR = /lib/objs HOME_SRC = . GEN_LIB = lib/$(LIB) GEN_DEPDIR=/lib/deps GEN_DEPS = $(addprefix $(GEN_DEPDIR)/,$(addsuffix .d,$(basename $(OBJS)))) I think this has everything covered you need. Basically self explanatory from the names. Now as best I can tell, this is generating in /lib/deps a .d file that has the object and source dependencies in it. In other words, for the utilities.a library, I will get a utils.o and utils.c dependency stack, all in the file utils.d There is some syntax error that is being generated in that file I think, because I get the following error: ../lib/deps/util.d:25: *** target pattern contains no '%'. Stop. gmake[2]: *** [all] Error 2 gmake[1]: *** [all] Error 2 gmake: *** [all] Error 2 I am not sure if my error is in the dependency generation, or some further down part, like the object generation target? If you need further info, let me know, I will add to post
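    A hedged diagnosis: "target pattern contains no '%'" is what gmake reports when a rule line contains two colons, because it then parses the line as a static pattern rule. The final sed ('s,\.h,.h :,g') appends " :" to every .h occurrence on a line, so any prerequisite line that carries two headers ends up with two colons, and whether that happens depends on how the compiler wraps its -M output, which can plausibly differ between the Red Hat and Solaris toolchains. Checking line 25 of ../lib/deps/util.d for a doubled colon would confirm this. If icc honours the gcc-style -MM/-MP/-MT flags (an assumption worth verifying), the whole sed pipeline can be replaced, since -MP emits exactly the phony header targets the last sed is trying to fabricate:

        # Sketch of a pattern rule replacing the $(GEN_DEPS) recipe;
        # the command line must be indented with a TAB character.
        $(GEN_DEPDIR)/%.d : $(HOME_SRC)/%.c
                $(CC) $(CFLAGS) $(INCDIRS) -MM -MP -MT '$(GEN_OBJDIR)/$*.o $@' $< > $@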

  • backbone.js - Having multiple instances of the same view

    - by TrueWheel
    I am having problems having multiple instances of the same view in different div elements. When I try to initialize them, only the second of the two elements appears, no matter what order I put them in. Here is the code for my view:

        var BodyShapeView = Backbone.View.extend({
            thingiview: null,
            scene: null,
            renderer: null,
            model: null,
            mouseX: 0,
            mouseY: 0,

            events:{
                'click button#front' : 'front',
                'click button#diag' : 'diag',
                'click button#in' : 'zoomIn',
                'click button#out' : 'zoomOut',
                'click button#on' : 'rotateOn',
                'click button#off' : 'rotateOff',
                'click button#wireframeOn' : 'wireOn',
                'click button#wireframeOff' : 'wireOff',
                'click button#distance' : 'dijkstra'
            },

            initialize: function(name){
                _.bindAll(this, 'render', 'animate');
                scene = new THREE.Scene();
                camera = new THREE.PerspectiveCamera( 15, 400 / 700, 1, 4000 );
                camera.position.z = 3;
                scene.add( camera );
                camera.position.y = -5;
                var ambient = new THREE.AmbientLight( 0x202020 );
                scene.add( ambient );
                var directionalLight = new THREE.DirectionalLight( 0xffffff, 0.75 );
                directionalLight.position.set( 0, 0, 1 );
                scene.add( directionalLight );
                var pointLight = new THREE.PointLight( 0xffffff, 5, 29 );
                pointLight.position.set( 0, -25, 10 );
                scene.add( pointLight );
                var loader = new THREE.OBJLoader();
                loader.load( "img/originalMeanModel.obj", function ( object ) {
                    object.children[0].geometry.computeFaceNormals();
                    var geometry = object.children[0].geometry;
                    console.log(geometry);
                    THREE.GeometryUtils.center(geometry);
                    geometry.dynamic = true;
                    var material = new THREE.MeshLambertMaterial({color: 0xffffff, shading: THREE.FlatShading, vertexColors: THREE.VertexColors });
                    mesh = new THREE.Mesh(geometry, material);
                    model = mesh;
                    // model = object;
                    scene.add( model );
                } );
                // RENDERER
                renderer = new THREE.WebGLRenderer();
                renderer.setSize( 400, 700 );
                $(this.el).find('.obj').append( renderer.domElement );
                this.animate();
            },

    Here is how I create the instances:

        var morphableBody = new BodyShapeView({ el: $("#morphable-body") });
        var bodyShapeView = new BodyShapeView({ el: $("#mean-body") });

    Any help would be really appreciated. Thanks in advance.
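    A hedged diagnosis from the fragment shown: inside initialize, scene, camera, renderer, mesh and model are all assigned without var (and without this.), so they become globals shared by every instance; the second view's initialize overwrites the first view's objects, which matches "only the second element appears". Scoping them to the instance is the usual fix:

        initialize: function () {
            _.bindAll(this, 'render', 'animate');
            this.scene = new THREE.Scene();          // per-instance, not global
            this.camera = new THREE.PerspectiveCamera(15, 400 / 700, 1, 4000);
            // ... lights etc. added to this.scene ...
            this.renderer = new THREE.WebGLRenderer();
            this.renderer.setSize(400, 700);
            this.$('.obj').append(this.renderer.domElement);  // find .obj inside this view's el
            this.animate();
        }

    The loader callback would likewise need to close over the instance (e.g. var self = this;) so the loaded mesh lands on the right view's scene.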

  • Simplifying const Overloading?

    - by templatetypedef
    Hello all- I've been teaching a C++ programming class for many years now, and one of the trickiest things to explain to students is const overloading. I commonly use the example of a vector-like class and its operator[] function:

        template <typename T>
        class Vector {
        public:
            T& operator[] (size_t index);
            const T& operator[] (size_t index) const;
        };

    I have little to no trouble explaining why it is that two versions of the operator[] function are needed, but in trying to explain how to unify the two implementations together I often find myself wasting a lot of time with language arcana. The problem is that the only good, reliable way that I know how to implement one of these functions in terms of the other is with the const_cast/static_cast trick:

        template <typename T>
        const T& Vector<T>::operator[] (size_t index) const {
            /* ... your implementation here ... */
        }

        template <typename T>
        T& Vector<T>::operator[] (size_t index) {
            return const_cast<T&>(static_cast<const Vector&>(*this)[index]);
        }

    The problem with this setup is that it's extremely tricky to explain and not at all intuitively obvious. When you explain it as "cast to const, then call the const version, then strip off constness" it's a little easier to understand, but the actual syntax is frightening. Explaining what const_cast is, why it's appropriate here, and why it's almost universally inappropriate elsewhere usually takes me five to ten minutes of lecture time, and making sense of this whole expression often requires more effort than the difference between const T* and T* const. I feel that students need to know about const-overloading and how to do it without needlessly duplicating the code in the two functions, but this trick seems a bit excessive in an introductory C++ programming course. My question is this - is there a simpler way to implement const-overloaded functions in terms of one another? Or is there a simpler way of explaining this existing trick to students? Thanks so much!
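    One alternative that students may find easier to digest (a sketch, not the canonical idiom): delegate both operators to a private static member template, letting the compiler deduce the const-ness of *this instead of casting it in and back out. This is plain C++03, with data_ assumed as the underlying storage:

        #include <cstddef>

        template <typename T>
        class Vector {
        public:
            T& operator[](std::size_t i)             { return get<T&>(*this, i); }
            const T& operator[](std::size_t i) const { return get<const T&>(*this, i); }

        private:
            // Self deduces to Vector or const Vector; Ref is spelled out at the call site.
            // Bounds checking, logging, etc. live here exactly once.
            template <typename Ref, typename Self>
            static Ref get(Self& self, std::size_t i) {
                return self.data_[i];
            }

            T* data_;
        };

    The trade-off: no const_cast to explain, but member templates arrive earlier in the course instead.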

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a csv file. This calls a service that reads the information from the csv, puts it into DynamicEntity objects and calls the CRM service to Create/Update entities in CRM. When this service creates/updates an entity, this kicks off other plugins to apply certain business rules. These rules can also Create or Update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting Garbage Collection code in the business rules and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the Web Service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil
    -- EDIT
    Okay, well the handle count goes up when I call the Service.Create(DynamicEntity) method. I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or something CRM takes care of (or doesn't take care of, but I can't do anything about)?
    -- Another Edit
    Right, this is how it works.
    1) We have CRM and its related services
    2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on csv info passed into it
    3) We have a website that allows a user to upload a csv, and calls service no 2 above to Create/Update entities in CRM
    4) We have plugins fired by CRM which use service 1 above to create/update entities
    So the user uploads a csv to the website (3), and this fires a service (2). When service 2 creates an entity using service 1, the plugins (4) fire. The plugins also use service 1 to Create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the method/class/services finish, the handle count remains the same, so when the whole process occurs again the handle count increases again.
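    A hedged first step, since the handle growth tracks Service.Create calls: the CRM web service proxy derives from SoapHttpClientProtocol, which is IDisposable, so a proxy created per call (or per plugin invocation) and never disposed will accumulate handles in exactly this pattern. Wrapping each proxy in using is cheap to try (names assumed):

        // Sketch: create, use, and dispose the CRM proxy per unit of work.
        using (CrmService service = new CrmService())
        {
            service.Url = crmServiceUrl;  // assumed config value
            service.Credentials = System.Net.CredentialCache.DefaultCredentials;
            service.Create(dynamicEntity);
        }  // Dispose() releases the underlying connection handles here

    If the proxies are already cached and reused, the suspect shifts to whatever the plugins allocate per call.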

  • Linux termios VTIME not working?

    - by San Jacinto
    We've been bashing our heads against this one all morning. We've got some serial lines set up between an embedded Linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) message reads instead of one message read per actual message sent. Here is the code that opens the serial port. InterCharTime is set to 4.

        void COMBaseClass::OpenPort()
        {
            cerr<< "openning port"<< port <<"\n";
            struct termios newtio;
            this->fd = -1;
            int fdTemp;

            fdTemp = open( port, O_RDWR | O_NOCTTY);
            if (fdTemp < 0)
            {
                portOpen = 0;
                cerr<<"problem openning "<< port <<". Retrying"<<endl;
                usleep(1000000);
                return;
            }

            newtio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD ;//| StopBits;
            newtio.c_iflag = IGNPAR;
            newtio.c_oflag = 0;

            /* set input mode (non-canonical, no echo,...) */
            newtio.c_lflag = 0;
            newtio.c_cc[VTIME] = InterCharTime; /* inter-character timer in .1 secs */
            newtio.c_cc[VMIN] = readBufferSize; /* blocking read until 1 char received */

            tcflush(fdTemp, TCIFLUSH);
            tcsetattr(fdTemp,TCSANOW,&newtio);
            this->fd = fdTemp;
            portOpen = 1;
        }

    The other end is configured similarly for communication, and has one small section of particular interest:

        while (1)
        {
            sprintf(out, "\r\nHello world %lu", ++ulCount);
            puts(out);
            WritePort((BYTE *)out, strlen(out)+1);
            sleep(2);
        } //while

    Now, when I run a read thread on the receiving machine, "hello world" is usually broken up over a couple of messages. Here is some sample output:

        1: Hello
        2: world 1
        3: Hello
        4: world 2
        5: Hello
        6: world 3

    where a number followed by a colon is one message received. Can you see any error we are making? Thank you. Edit: For clarity, please see section 3.2 of this resource: http://www.faqs.org/docs/Linux-HOWTO/Serial-Programming-HOWTO.html. To my understanding, with a VTIME of a couple of seconds (meaning vtime is set anywhere between 10 and 50, trial-and-error) and a VMIN of 1, there should be no reason for the message to be broken up over two separate messages.
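    Worth stating the hedged fundamental first: a serial line is a byte stream with no message boundaries, so no VMIN/VTIME combination can guarantee one read() per WritePort(); VTIME only bounds inter-byte gaps, and the driver may still deliver a message in two chunks. Since the sender already transmits strlen(out)+1 bytes, the trailing NUL is a ready-made frame delimiter; accumulating bytes until it arrives is the minimal fix:

        /* Sketch (needs <unistd.h>): reassemble NUL-terminated messages from
           the byte stream. handle_message() is an assumed callback; overflow
           checks omitted for brevity. */
        char chunk[256];
        char msg[1024];
        size_t used = 0;
        ssize_t n;

        while ((n = read(fd, chunk, sizeof(chunk))) > 0) {
            for (ssize_t i = 0; i < n; ++i) {
                msg[used++] = chunk[i];
                if (chunk[i] == '\0') {   /* complete message */
                    handle_message(msg);
                    used = 0;
                }
            }
        }

    Two smaller observations: newtio is never zeroed (memset it before filling fields, or start from tcgetattr), and VMIN is set to readBufferSize rather than 1, so a read may also block until the buffer fills instead of returning per message.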

  • WPF PathGeometry/RotateTransform optimization

    - by devinb
    I am having performance issues when rendering/rotating WPF triangles If I had a WPF triangle being displayed and it will be rotated to some degree around a centrepoint, I can do it one of two ways: Programatically determine the points and their offset in the backend, use XAML to simply place them on the canvas where they belong, it would look like this: <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding CalculatedPointA, Mode=OneWay}"> <LineSegment Point="{Binding CalculatedPointB, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointC, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> </Path> Generate the 'same' triangle every time, and then use a RenderTransform (Rotate) to put it where it belongs. In this case, the rotation calculations are being obfuscated, because I don't have any access to how they are being done. <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding TriPointA, Mode=OneWay}"> <LineSegment Point="{Binding TriPointB, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointC, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> <Path.RenderTransform> <RotateTransform CenterX="{Binding Centre.X, Mode=OneWay}" CenterY="{Binding Centre.Y, Mode=OneWay}" Angle="{Binding Orientation, Mode=OneWay}" /> </Path.RenderTransform> </Path> My question is which one is faster? I know I should test it myself but how do I measure the render time of objects with such granularity. I would need to be able to time how long the actual rendering time is for the form, but since I'm not the one that's kicking off the redraw, I don't know how to capture the start time.
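    Two hedged notes. On the speed question: RenderTransform is generally the cheaper option, because it is applied at render time (post-layout, and typically hardware-accelerated) without touching the geometry, whereas rebinding the three points rebuilds the PathGeometry on every change; across many triangles that difference adds up. On the measurement question: CompositionTarget.Rendering fires once per frame, so timing from the change to the next tick approximates the render latency:

        // Sketch (C#): measure time from "now" until WPF produces the next frame.
        var sw = System.Diagnostics.Stopwatch.StartNew();
        EventHandler handler = null;
        handler = (s, e) =>
        {
            System.Windows.Media.CompositionTarget.Rendering -= handler;  // one-shot
            System.Diagnostics.Debug.WriteLine("frame after " + sw.ElapsedMilliseconds + " ms");
        };
        System.Windows.Media.CompositionTarget.Rendering += handler;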

  • Maximum nametable char count exceeded

    - by doc
    I'm having issues with the maximum nametable char count quota, I followed a couple of answers here and it solved the problem for a while, but now I'm having the same issue. My Server side config is as follows: <system.serviceModel> <bindings> <netTcpBinding> <binding name="GenericBinding" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None" /> </binding> </netTcpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="false" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="1000000" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="REMWCF.RemWCFSvc"> <endpoint address="" binding="netTcpBinding" contract="REMWCF.IRemWCFSvc" bindingConfiguration="GenericBinding" /> <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:9081/RemWCFSvc" /> </baseAddresses> </host> </service> </services> </system.serviceModel> I also have the same tcp binding on the devenv configuration. Have I reached the limit of contracts supported? Is there a way to turn off that quota? EDIT Error Message: Error: Cannot obtain Metadata from net.tcp://localhost:9081/RemWCFSvc/mex If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.WS-Metadata Exchange Error URI: net.tcp://localhost:9081/RemWCFSvc/mex Metadata contains a reference that cannot be resolved: 'net.tcp://localhost:9081/RemWCFSvc/mex'. There is an error in the XML document. The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader. I'm getting that error when trying to run the WCF (which is hosted in a windows service app).
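    The quota being tripped here is on the metadata client (WCF Test Client / Add Service Reference), not on the service, which is why raising readerQuotas in the service binding doesn't help. A hedged fix, following the pattern MSDN describes for custom metadata bindings (worth verifying against your exact setup): give the tool's config, e.g. devenv.exe.config, a customBinding with enlarged quotas, exposed through a client endpoint whose name matches the URI scheme "net.tcp":

        <system.serviceModel>
          <bindings>
            <customBinding>
              <binding name="largeMex">
                <textMessageEncoding>
                  <readerQuotas maxNameTableCharCount="2147483647" />
                </textMessageEncoding>
                <tcpTransport />
              </binding>
            </customBinding>
          </bindings>
          <client>
            <endpoint name="net.tcp" binding="customBinding"
                      bindingConfiguration="largeMex" contract="IMetadataExchange" />
          </client>
        </system.serviceModel>

    The same block in svcutil.exe.config covers command-line proxy generation.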

  • How can I force the server socket to re-accept a request from a client?

    - by Roman
    For those who do not want to read a long question, here is a short version: A server has an open socket for a client. The server gets a request to open a socket from the same client-IP and client-port. I want to force the server not to refuse such a request, but to close the old socket and open a new one. How can I do it?
    And here is the long (original) question: I have the following situation. There is an established connection between a server and a client. Then external software (Bonjour) tells my client that it does not see the server in the local network. Well, the client does nothing about that, for the following reasons:
    1. If Bonjour does not see the server, it does not necessarily mean that the client cannot see the server.
    2. Even if the client trusts Bonjour and closes the socket, it does not improve the situation ("to have no open socket" is worse than "to have a potentially bad socket").
    So, the client does nothing if the server becomes invisible to Bonjour. But then the server reappears in Bonjour, and Bonjour notifies the client about that. In this situation the following cases are possible:
    1. The server reappears on a new IP address. So, the client needs to open a new socket to be able to communicate with the server.
    2. The server reappears on the old IP address. In this case we have two subcases:
    2.1. The server was restarted (switched off and then switched on). So, it does not remember the old socket (which is still used by the client). So, the client needs to close the old socket and open a new one (on the same server-IP address and the same server-port).
    2.2. We had a temporary network problem and the server was running the whole time. So, the old socket is still available for use. In this case the client does not really need to close the old socket and reopen a new one.
    But to simplify my life I decided to close and reopen the socket on the client side in any case (in spite of the fact that it is not really needed in the last described situation). But I can have problems with that solution. If I close the socket on the client side and then try to reopen a socket from the same client-IP and client-port, the server will not accept the call for a new socket. The server will think that such a socket already exists. Can I write the server in such a way that it does not refuse such calls? For example, if it (the server) sees that a client sends a request for a socket from the same client-IP and client-port, it (the server) closes the available socket associated with this client-IP and client-port, and then it opens a new one.
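    A hedged note plus a sketch. At the TCP level, a listening server does not refuse a connect() because of the client's address; what usually bites here is the client reusing a port whose previous connection the server's stack still considers established, which has to time out or be reset below the application layer. At the application level, though, the requested behaviour is straightforward: key each accepted connection by client address and close the stale socket when the same key reconnects (Java sketch; error handling omitted):

        Map<String, Socket> clients = new HashMap<String, Socket>();

        while (true) {
            Socket fresh = serverSocket.accept();
            String key = fresh.getInetAddress().getHostAddress()
                         + ":" + fresh.getPort();
            Socket stale = clients.put(key, fresh);
            if (stale != null && !stale.isClosed()) {
                stale.close();  // drop the old session in favour of the new one
            }
        }

    Keying on the address alone (without the port) gives the "one session per client machine" variant instead.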

  • DataGridView validating old value instead of new value

    - by Scott Chamberlain
    I have a DataGridView that is bound to a DataTable. It has a column that is a double, and the values need to be between 0 and 1. Here is my code:

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == dtxtPercentageOfUsersAllowed.Index)
            {
                double percentage;
                if(dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value.GetType() == typeof(double))
                    percentage = (double)dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value;
                else if (!double.TryParse(dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value.ToString(), out percentage))
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                    return;
                }
                if (percentage < 0 || percentage > 1)
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                }
            }
        }

    However, my issue is that when dgvImpRDP_InfinityRDPLogin_CellValidating fires, dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value contains the old value from before the edit, not the new value. For example, let's say the old value was .1 and I enter 3. The above code runs when you exit the cell, and dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value will be .1 for that run; the code validates and writes 3 to the DataTable. I click on the cell a second time, try to leave, and this time it behaves like it should: it raises the error icon for the cell and prevents me from leaving. I try to enter a correct value (say .7), but the Value will still be 3, and there is now no way out of the cell because it is locked due to the error, and my validation code will never see the new value. Any recommendations would be greatly appreciated.
    EDIT -- New version of the code based off of Stuart's suggestion and mimicking the style the MSDN article uses. Still behaves the same.

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == dtxtPercentageOfUsersAllowed.Index)
            {
                dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = String.Empty;
                double percentage;
                if (!double.TryParse(dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].FormattedValue.ToString(), out percentage) || percentage < 0 || percentage > 1)
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                    return;
                }
            }
        }
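    The hedged root cause: during CellValidating, the cell's Value (and the cell's own FormattedValue, which is derived from Value, as in the edited version above) still holds the pre-edit contents; the value being committed travels on the event args. Switching the checks to e.FormattedValue, per the documented CellValidating pattern, should break the lock-in loop:

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex != dtxtPercentageOfUsersAllowed.Index) return;

            dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = String.Empty;
            double percentage;
            // e.FormattedValue carries the proposed value; the cell still holds the old one.
            if (!double.TryParse(e.FormattedValue.ToString(), out percentage)
                || percentage < 0 || percentage > 1)
            {
                e.Cancel = true;
                dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText =
                    "The value must be between 0 and 1";
            }
        }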

  • Facebook Connect from Localhost, doing some weird stuff

    - by Brett
    So maybe the documentation is out of date, or I am just off here. But I have done a slew of FB iframe apps (connect), but I am starting my first FB Connect site. Running it from localhost, and the Connect URL is http:// my_external_IP_address. When I click on the FB login button on my site, it pops up, says waiting for facebook, and it returns my site in that box, with the URL up top with the http:// mysite/?session={session key, user_id, etc.} The user_id is infact my FB id. And so it thinks I am logged in. If I close the popup, I'm not logged in. I'm not sure why the pop up isn't doing the normal fb connect dialog. I'm following these steps. (I added spaces to the http:// as to not be detected as 'spam') html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" right after <body> <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"> At the end, before the body close tag: script type="text/javascript"> FB.init("fbkey", "http://127.0.0.1/xd_receiver.htm"); I have tried using xd_receiver.htm, /xd_receiver.htm (and other combos), and that brings up a blank page. using the http://127.0.0.1 at least does something. In my config file, which is called before all of those, it checks for a PHP session key to see if they are logged in, if that doesn't exist it looks for a cookie, and if that doesn't exist it does this: require_once('includes/facebook.php'); $facebook = new Facebook($fbkey, $fbsec); $user_id = $facebook->get_loggedin_user(); if($user_id > 0){ $user = $ac->getUserFromFB($user_id); $_SESSION['user_id'] = $user['user_id']; } The user_id is always empty when I echo it out to the screen to test. The session event never occurs as well. So I don't know what it is doing in the popup, but I think Facebook thinks it is logging me in. Not sure. Pretty stumped on this one. Any help would be appreciated. Thanks!

  • Techniques for querying a set of objects in-memory in a Java application

    - by Edd Grant
    Hi All, We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results, I need to be able to further filter the resulting Java objects based on certain criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c). The criteria used to filter the set of objects each time are partially user-configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base, probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships. Off the top of my head I can think of 3 ways of doing this:
    1. For each search, persist the initial result set objects in our database, then use Hibernate to re-query them using the finer-grained criteria.
    2. Use an in-memory database (such as hsqldb?) to query and refine the initial result set.
    3. Write some custom code which iterates the initial result set and pulls out the desired records.
    Option 1 seems to involve a lot of toing and froing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets, to ensure that different searches don't interfere with each other.
    Option 2 seems like a good idea in principle, as it would allow me to do the finer query in memory, and it would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too, but might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets).
    Option 3 could be very performant, but is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive.
    I don't have time to prototype all 3 ideas, so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (in-memory database), so would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
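    Regarding option 3's testing burden, there is a hedged middle ground that keeps the custom code small: represent each user-selected criterion as a predicate object and filter the result set in one pass. Only the tiny filter loop needs exhaustive testing; each criterion is trivial on its own. A sketch (pre-Java-8 style, matching the era):

        interface Criterion<T> {
            boolean matches(T candidate);
        }

        static <T> List<T> filter(Collection<T> results, List<Criterion<T>> criteria) {
            List<T> out = new ArrayList<T>();
            outer:
            for (T item : results) {
                for (Criterion<T> c : criteria) {
                    if (!c.matches(item)) continue outer;  // one failed criterion: skip item
                }
                out.add(item);
            }
            return out;
        }

    At <= 10,000 objects and ~2000 searches a day, a linear scan like this is likely cheap compared to either database round trip.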

  • index seems to be out of sync for jquery .keydown and .click methods

    - by Growler
    I'd like to navigate images using both the keyboard and the mouse (clicking left and right arrow images). I'm using jQuery to do this, but the shared imgIndex seems to be out of sync between the .keydown function and the .click function... Whenever the .keydown function decrements (--) or increments (++) imgIndex, isn't that changed index also used in the click function? So shouldn't they always be on the same index?
    The keydown function:

        <script type="text/javascript">
        var imgArray = [<?php echo implode(',', getImages($site)) ?>];
        $(document).ready(function() {
            var img = document.getElementById("showimg");
            img.src = imgArray[<?php echo $imgid ?>];
            var imgIndex = <?php echo $imgid ?>;
            alert(imgIndex);
            $(document).keydown(function (e) {
                var key = e.which;
                var rightarrow = 39;
                var leftarrow = 37;
                var random = 82;
                if (key == rightarrow) {
                    imgIndex++;
                    if (imgIndex > imgArray.length-1) {
                        imgIndex = 0;
                    }
                    img.src = imgArray[imgIndex];
                }
                if (key == leftarrow) {
                    if (imgIndex == 0) {
                        imgIndex = imgArray.length;
                    }
                    img.src = imgArray[--imgIndex];
                }
            });

    The click functions, connected to left and right clickable images:

            $("#next").click(function() {
                imgIndex++;
                if (imgIndex > imgArray.length-1) {
                    imgIndex = 0;
                }
                img.src = imgArray[imgIndex];
            });
            $("#prev").click(function() {
                if (imgIndex == 0) {
                    imgIndex = imgArray.length;
                }
                img.src = imgArray[--imgIndex];
            });
        });

    Just so you have some visibility into the getImages PHP function:

        <?php
        function getImages($siteParam) {
            include 'dbconnect.php';
            if ($siteParam == 'artwork') {
                $table = "artwork";
            } else {
                $table = "comics";
            }
            $catResult = $mysqli->query("SELECT id, title, path, thumb, views, catidFK FROM $table");
            $img = array();
            while($row = $catResult->fetch_assoc()) {
                $img[] = "'" . $row['path'] . "'";
            }
            return $img;
        }
        ?>

    Much appreciated! Snapshot of where the script is on "view image.php"
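    A hedged suggestion rather than a definite diagnosis: with the wrap-around arithmetic duplicated across four handlers, a one-off slip in any copy will put the two input paths out of step. Routing both through a single step function in the same closure removes the possibility entirely:

        function showImage(delta) {
            // Wraps correctly in both directions.
            imgIndex = (imgIndex + delta + imgArray.length) % imgArray.length;
            img.src = imgArray[imgIndex];
        }

        $(document).keydown(function (e) {
            if (e.which == 39) showImage(1);    // right arrow
            if (e.which == 37) showImage(-1);   // left arrow
        });
        $("#next").click(function () { showImage(1); });
        $("#prev").click(function () { showImage(-1); });

    One thing to verify while debugging: that this script block isn't included twice on the page, since two copies would each bind their own handlers around separate imgIndex variables.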

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? I mostly use C; however, this question is language-agnostic. It's also subjective, so I'll tag it as such. Many individual projects set their own coding standards, a guide to adjust your coding style. Many enforce an 80-column limit on code, i.e., don't force someone stuck with a dumb 80 x 25 terminal to have your lines wrapped in their editor of choice, and don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines. My question is: in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH? I help to manage a few open source projects, and I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e., a --help /h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when it's easier to read on one line. I can also see the case for adhering to an 80-column limit if you're writing code that will be used on microcontrollers that have to be serviced in the field with a god-knows-what terminal emulator. Beyond that, what are your thoughts? Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80-column limit is still a good idea. I don't want a guide to my own "c-style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time. Edit 2: question |= COMMUNITY_WIKI

  • Graph Tour with Uniform Cost Search in Java

    - by user324817
    Hi. I'm new to this site, so hopefully you guys don't mind helping a nub. Anyway, I've been asked to write code to find the shortest cost of a graph tour on a particular graph, whose details are read in from file. The graph is shown below: http://img339.imageshack.us/img339/8907/graphr.jpg This is for an Artificial Intelligence class, so I'm expected to use a decent enough search method (brute force has been allowed, but not for full marks). I've been reading, and I think that what I'm looking for is an A* search with constant heuristic value, which I believe is a uniform cost search. I'm having trouble wrapping my head around how to apply this in Java. Basically, here's what I have: Vertex class - ArrayList<Edge> adjacencies; String name; int costToThis; Edge class - final Vertex target; public final int weight; Now at the moment, I'm struggling to work out how to apply the uniform cost notion to my desired goal path. Basically I have to start on a particular node, visit all other nodes, and end on that same node, with the lowest cost. As I understand it, I could use a PriorityQueue to store all of my travelled paths, but I can't wrap my head around how I show the goal state as the starting node with all other nodes visited. Here's what I have so far, which is pretty far off the mark: public static void visitNode(Vertex vertex) { ArrayList<Edge> firstEdges = vertex.getAdjacencies(); for(Edge e : firstEdges) { e.target.costToThis = e.weight + vertex.costToThis; queue.add(e.target); } Vertex next = queue.remove(); visitNode(next); } Initially this takes the starting node, then recursively visits the first node in the PriorityQueue (the path with the next lowest cost). My problem is basically, how do I stop my program from following a path specified in the queue if that path is at the goal state? The queue currently stores Vertex objects, but in my mind this isn't going to work as I can't store whether other vertices have been visited inside a Vertex object. Help is much appreciated! Josh
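    A hedged framing that may help: make the search node a (vertex, visited set, cost) triple instead of a bare Vertex, so the cost stops living inside the graph (costToThis on shared Vertex objects gets clobbered between competing paths) and the goal test becomes explicit: back at the start with every node visited. A sketch using the Vertex/Edge fields above:

        import java.util.*;

        class SearchState implements Comparable<SearchState> {
            final Vertex at;
            final Set<Vertex> visited;  // per-state copy, not stored on the graph
            final int cost;

            SearchState(Vertex at, Set<Vertex> visited, int cost) {
                this.at = at; this.visited = visited; this.cost = cost;
            }
            public int compareTo(SearchState o) { return cost - o.cost; }
        }

        static int uniformCostTour(Vertex start, int vertexCount) {
            PriorityQueue<SearchState> frontier = new PriorityQueue<SearchState>();
            frontier.add(new SearchState(start, new HashSet<Vertex>(Arrays.asList(start)), 0));
            while (!frontier.isEmpty()) {
                SearchState s = frontier.poll();
                if (s.at == start && s.visited.size() == vertexCount && s.cost > 0) {
                    return s.cost;  // first complete tour popped is the cheapest
                }
                for (Edge e : s.at.adjacencies) {
                    Set<Vertex> v = new HashSet<Vertex>(s.visited);
                    v.add(e.target);
                    frontier.add(new SearchState(e.target, v, s.cost + e.weight));
                }
            }
            return -1;  // frontier exhausted without a tour
        }

    No closed set is kept, for brevity; that is fine for a small assignment graph where a tour exists, and uniform cost search stays optimal because the edge weights are non-negative.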

  • ListBox content does not resize when window is made smaller

    - by DamonGant
    I'm using .NET 4.0 (not .NET 4.0 CP) and have run into this kinda unique issue. I created a ListBox to display bound elements, first off here is (a part) of my XAML. <Grid Grid.Row="2" Background="#EEEEEE"> <Border Margin="6,10,10,10" BorderBrush="#666666" BorderThickness="1"> <ListBox ItemsSource="{Binding}" Name="appList" BorderThickness="0" HorizontalContentAlignment="Stretch" HorizontalAlignment="Stretch"> <ItemsControl.ItemTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="80" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Border Grid.Column="0" Margin="5" BorderThickness="3" CornerRadius="2" BorderBrush="Black" HorizontalAlignment="Left" VerticalAlignment="Top" x:Name="ItemBorder"> <Image Width="64" Height="64" Source="{Binding Path=IconUri}" Stretch="UniformToFill" /> </Border> <StackPanel Margin="0,5,5,5" Grid.Column="1" Orientation="Vertical" HorizontalAlignment="Stretch"> <TextBlock FontSize="18" Text="{Binding Path=DisplayName}" /> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="60"/> </Grid.ColumnDefinitions> <ProgressBar Grid.Column="0" Height="24" HorizontalAlignment="Stretch" IsIndeterminate="{Binding Path=IsDiscovering}" Value="{Binding Path=PercentageDownloaded}" /> <TextBlock Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Center"><TextBlock x:Name="percentageDownloaded" /><TextBlock x:Name="percentageMeter">%</TextBlock></TextBlock> </Grid> </StackPanel> </Grid> <DataTemplate.Triggers> <DataTrigger Binding="{Binding Path=IsDiscovering}"> <DataTrigger.Value>True</DataTrigger.Value> <Setter TargetName="percentageDownloaded" Property="Text" Value="N/A" /> <Setter TargetName="percentageMeter" Property="Visibility" Value="Collapsed" /> </DataTrigger> <DataTrigger Binding="{Binding Path=IsDiscovering}"> <DataTrigger.Value>False</DataTrigger.Value> <Setter TargetName="percentageDownloaded" Property="Text" Value="{Binding Path=PercentageDownloaded}" /> <Setter TargetName="percentageMeter" Property="Visibility" Value="Visible" /> </DataTrigger> </DataTemplate.Triggers> </DataTemplate> </ItemsControl.ItemTemplate> </ListBox> </Border> </Grid> Sizing the window up stretches the ListBox content just fine, but when I size it down, it retains it's width and spawns vertical scrollbars.
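    A hedged one-attribute fix that often covers this: a ListBox hands its items unconstrained width through its internal ScrollViewer, so when the window shrinks, the items keep their measured width and a horizontal scrollbar appears instead of the items compressing. Disabling horizontal scrolling forces the items to measure against the actual viewport width:

        <ListBox ItemsSource="{Binding}" Name="appList" BorderThickness="0"
                 HorizontalContentAlignment="Stretch"
                 ScrollViewer.HorizontalScrollBarVisibility="Disabled">

    The item template's star-sized Grid column should then absorb the size change, letting the progress bar shrink with the window.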

  • BroadcastReceiver doesn't show SMS not sent or not delivered

    - by user1657111
    While using broadcastreciver for checking sms status, it shows the toast when sms is sent but shows nothing when sms is not sent or delivered (im testing it by putting an abrupt number). the code im using is the one ive seen the most on every site of checking sms delivery status. But my code is only showing the status when sms is sent successfully. Can any one get a hint of what am i doing wrong ? I hav this method in doInBackground() and so obviously im using AsyncTask. Thanks guys public void send_SMS(String list, String msg, AtaskClass task) { String SENT = "SMS_SENT"; String DELIVERED = "SMS_DELIVERED"; SmsManager sms = SmsManager.getDefault(); PendingIntent sentPI = PendingIntent.getBroadcast(this, 0, new Intent(SENT), 0); PendingIntent deliveredPI = PendingIntent.getBroadcast(this, 0, new Intent(DELIVERED), 0); //---when the SMS has been sent--- registerReceiver(new BroadcastReceiver(){ @Override public void onReceive(Context context, Intent arg1) { switch (getResultCode()) { case Activity.RESULT_OK: Toast.makeText(context, "SMS sent", Toast.LENGTH_SHORT).show(); break; case SmsManager.RESULT_ERROR_GENERIC_FAILURE: Toast.makeText(context, "Generic failure", Toast.LENGTH_SHORT).show(); break; case SmsManager.RESULT_ERROR_NO_SERVICE: Toast.makeText(context, "No service", Toast.LENGTH_SHORT).show(); break; case SmsManager.RESULT_ERROR_NULL_PDU: Toast.makeText(context, "Null PDU", Toast.LENGTH_SHORT).show(); break; case SmsManager.RESULT_ERROR_RADIO_OFF: Toast.makeText(context, "Radio off", Toast.LENGTH_SHORT).show(); break; } } }, new IntentFilter(SENT)); //---when the SMS has been delivered--- registerReceiver(new BroadcastReceiver(){ @Override public void onReceive(Context context, Intent arg1) { switch (getResultCode()) { case Activity.RESULT_OK: Toast.makeText(context, "SMS delivered", Toast.LENGTH_SHORT).show(); break; case Activity.RESULT_CANCELED: Toast.makeText(context, "SMS not delivered", Toast.LENGTH_SHORT).show(); break; } } }, new IntentFilter(DELIVERED)); StringTokenizer st = new StringTokenizer(list,","); int count= st.countTokens(); int i =1; count = 1; while(st.hasMoreElements()) { // PendingIntent pi = PendingIntent.getActivity(this,0,new Intent(this, SMS.class),0); String tempMobileNumber = (String)st.nextElement(); //SmsManager sms = SmsManager.getDefault(); sms.sendTextMessage(tempMobileNumber, null, msg , sentPI, deliveredPI); Double cCom = ((double)i/count) * 100; int j = cCom.intValue(); task.doProgress(j); i++; count ++; } // class ends }
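    Two hedged suspects in the code above. First, every message reuses request code 0 in PendingIntent.getBroadcast, so all sends share one cached PendingIntent; giving each send a distinct request code keeps the result callbacks separate. Second, a fresh receiver pair is registered on every call; registering once (for example in onCreate) avoids stacking duplicates. Also note that carriers often simply never issue a delivery report for an invalid number, so silence on the DELIVERED side can be the network's answer rather than a bug. A sketch of the first fix:

        // Unique request code per message so each send gets its own PendingIntent.
        int requestCode = (int) System.currentTimeMillis();
        PendingIntent sentPI = PendingIntent.getBroadcast(
                this, requestCode, new Intent(SENT), 0);
        PendingIntent deliveredPI = PendingIntent.getBroadcast(
                this, requestCode, new Intent(DELIVERED), 0);
        sms.sendTextMessage(tempMobileNumber, null, msg, sentPI, deliveredPI);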

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup:

        "users" : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt.role
        FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers" : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
        FROM link_publishers AS lp
        JOIN link_publisher_groups AS lpg ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?
    -------------------------- EDIT ----------------------------------
    EXPLAIN output above (attached as an image in the original post; open in a new window to see full resolution). The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison for the internals of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
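    A hedged restructuring that directly addresses "how can I only join on the relevant roles": push the name filter inside each role branch, so the small set of matched users probes each link table through its user_id index, instead of materializing every role row for every user first:

        SELECT u.user_id, 'writer' AS role, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT u.user_id, 'editor', le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT u.user_id, 'publisher', lpg.publication_id
        FROM users u
        JOIN link_publishers lp ON lp.user_id = u.user_id
        JOIN link_publisher_groups lpg ON lpg.publisher_group_id = lp.publisher_group_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';

    UNION ALL (rather than UNION) also spares MySQL a deduplication pass, since the role literal already makes the branches disjoint. On MySQL of this era, derived tables such as dt2 are materialized without indexes, which is consistent with the EXPLAIN showing full scans of each UNION branch.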

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students running on top of SQL Server 2005. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area. Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly. Reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment. So a different web server with the same query, parameters and database. The query still ran slow. The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often but the volume of added rows doesn't seem extreme. These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days. The problem has continued to plague us for awhile now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance. We're having a heck of a time trying to fix it since we can't reproduce it in any other environment other than production. I have downloaded the High Availability guide from Red Gate and I read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about impact. I would like to ask - what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?
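    The symptom cluster here (a single query, healthy right after DBCC FREEPROCCACHE, degrading again days later, not reproducible in other environments) is the classic signature of parameter sniffing: a plan compiled for an atypical parameter value gets cached and reused for everyone. Two low-risk things to try on SQL Server 2005, sketched against a hypothetical query shape since the real one isn't shown:

        -- Option 1: pay a small compile cost per execution for an always-fresh plan.
        SELECT t.training_id, t.title
        FROM student_training t          -- illustrative table/parameter names
        WHERE t.student_id = @student_id
        OPTION (RECOMPILE);

        -- Option 2: pin the plan to a representative value instead of the sniffed one.
        SELECT t.training_id, t.title
        FROM student_training t
        WHERE t.student_id = @student_id
        OPTION (OPTIMIZE FOR (@student_id = 12345));  -- 12345: an assumed typical id

    To confirm the diagnosis before changing anything, capture the statement's actual plan while it is slow (sys.dm_exec_query_plan) and compare it to the plan right after a cache flush; a different join order or scan-versus-seek choice on the same statement is the telltale.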
