Search Results

Search found 4065 results on 163 pages for 'psychic debugging'.


  • Strange EListError occurrence (when accessing variable-defined index)

    - by michal
    Hi, I have a TList which stores some objects. Now I have a function which does some operations on that list: function SomeFunct(const AIndex: integer): IInterface begin if (AIndex > -1) and (AIndex < fMgr.Windows.Count) then begin if (fMgr.Windows[AIndex] <> nil) then begin if not Supports(TForm(fMgr.Windows[AIndex]), IMyFormInterface, result) then result:= nil; end; end else result:= nil; end; Now, what is really strange is that accessing fMgr.Windows with any proper index causes an EListError... However, if I hard-code it (for example, replace AIndex with the value 0 or 1) it works fine. I tried debugging it: the function gets called twice, with arguments 0 and 1 (as expected). While AIndex = 0, evaluating fMgr.Windows[AIndex] results in an EListError at $someAddress, while evaluating fMgr.Windows[0] instead returns proper results... What is even stranger, even though there is an EListError, the function returns proper data and doesn't show anything; just info on two EListError memory leaks on shutdown (using FastMM). Any ideas what could be wrong? Thanks in advance, michal

    Read the article

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to MacOSX (specifically, 10.6). The app makes heavy use of the C++ standard library and boost. I recently observed some breakage in the app that I'm having difficulty understanding. Basically, the boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program: #include <locale> int main ( int argc, char *argv [] ) { std::locale::global(std::locale("")); return 0; } This program fails when I run this through g++ and execute the resulting program in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration. My questions are: Is this expected behavior for this code on MacOS 10.6? What would a proper workaround be? I can't really re-write the function because the version of the boost libraries we are using executes this statement internally as part of the filesystem library. For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode. edit The error given by the above code on 10.6.1 is: $ ./locale terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid Abort trap
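    One hedged workaround sketch (not a fix inside boost itself, and only useful if it runs before the library's own locale lookup, or if that version of boost consults the already-installed global locale): catch the failure once at startup and fall back to the classic "C" locale.

        #include <locale>
        #include <stdexcept>

        // Sketch: call once at program startup, before any boost::filesystem use.
        // If the environment names a locale libstdc++ cannot construct (as with
        // LANG=en_US.UTF-8 here), fall back to the classic "C" locale instead of
        // letting the std::runtime_error terminate the process.
        static void installSafeGlobalLocale()
        {
            try {
                std::locale::global(std::locale(""));   // honour LANG/LC_* when possible
            } catch (const std::runtime_error&) {
                std::locale::global(std::locale::classic());
            }
        }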

    Read the article

  • How to transfer value from a DropDownList to a property of a serialized model object

    - by Richard77
    Hello, I'm testing some concepts in ASP.NET MVC multi-step (wizard-style) forms with a small application which allows me to record organizations in a database. To make things easier, I have a class OrganizationFormModelView that contains an object of class Organization and a property called ParentOrgList of SelectList type. The only purpose of the SelectList property is to be used by a DropDownList. I've also serialized OrganizationFormModelView to get the multi-step wizard effect. In my first view (or first step), I use a DropDownList helper to assign a value to one of the Organization's properties, called ParentOrganization, which draws data from the ParentOrgList. ... <% = Html.DropDownList("Organization.ParentOrganization", Model.ParentOrgList)%> ... The first time the page loads, I'm able to make a choice, and my choice is reflected in my model object all along the wizard's steps (seen in Visual Studio in debugging mode). But any time I'm redirected back to the first view (first step), I get the following error message: "The ViewData item with the key 'Organization.ParentOrganization' is of type 'System.String' but needs to be of type 'IEnumerable'." Thanks for helping
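    That exception usually means the SelectList the helper needs was not repopulated on the round trip, so the helper finds only the posted string under that key. A hedged C# sketch of rebuilding ParentOrgList in the POST action before redisplaying the first step (class and property names are taken from the question; BuildParentOrgSelectList is a hypothetical helper that re-queries the parent organizations):

        // Sketch only: the SelectList is not round-tripped by the form post,
        // so it must be rebuilt every time the view is rendered again.
        [HttpPost]
        public ActionResult Step1(OrganizationFormModelView model)
        {
            if (!ModelState.IsValid)
            {
                model.ParentOrgList = BuildParentOrgSelectList(model.Organization.ParentOrganization);
                return View(model);   // redisplay step 1 with a fresh SelectList
            }
            // ... persist the wizard state and move on ...
            return RedirectToAction("Step2");
        }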

    Read the article

  • What is causing my HTTP server to fail with "exit status -1073741819"?

    - by Keeblebrox
    As an exercise I created a small HTTP server that generates random game mechanics, similar to this one. I wrote it on a Windows 7 (32-bit) system and it works flawlessly. However, when I run it on my home machine, Windows 7 (64-bit), it always fails with the same message: exit status -1073741819. I haven't managed to find anything on the web which references that status code, so I don't know how important it is. Here's code for the server, with redundancy abridged: package main import ( "fmt" "math/rand" "time" "net/http" "html/template" ) // Info about a game mechanic type MechanicInfo struct { Name, Desc string } // Print a mechanic as a string func (m MechanicInfo) String() string { return fmt.Sprintf("%s: %s", m.Name, m.Desc) } // A possible game mechanic var ( UnkillableObjects = &MechanicInfo{"Avoiding Unkillable Objects", "There are objects that the player cannot touch. These are different from normal enemies because they cannot be destroyed or moved."} //... Race = &MechanicInfo{"Race", "The player must reach a place before the opponent does. Like \"Timed\" except the enemy as a \"timer\" can be slowed down by the player's actions, or there may be multiple enemies being raced against."} ) // Slice containing all game mechanics var GameMechanics []*MechanicInfo // Pseudorandom number generator var prng *rand.Rand // Get a random mechanic func RandMechanic() *MechanicInfo { i := prng.Intn(len(GameMechanics)) return GameMechanics[i] } // Initialize the package func init() { prng = rand.New(rand.NewSource(time.Now().Unix())) GameMechanics = make([]*MechanicInfo, 34) GameMechanics[0] = UnkillableObjects //... GameMechanics[33] = Race } // serving var index = template.Must(template.ParseFiles( "templates/_base.html", "templates/index.html", )) func randMechHandler(w http.ResponseWriter, req *http.Request) { mechanics := [3]*MechanicInfo{RandMechanic(), RandMechanic(), RandMechanic()} if err := index.Execute(w, mechanics); err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) } } func main() { http.HandleFunc("/", randMechHandler) if err := http.ListenAndServe(":80", nil); err != nil { panic(err) } } In addition, the unabridged code, the _base.html template, and the index.html template. What could be causing this issue? Is there a process for debugging a cryptic exit status like this?
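    One hedged pointer for the debugging question at the end: the exit status Go prints is the Windows process exit code as a signed 32-bit value, so it can be read back as an NTSTATUS code. Viewed as unsigned hex, -1073741819 is 0xC0000005, the access-violation status, which normally points at a memory fault in the process rather than an ordinary Go panic (a panic would usually print a stack trace and exit with status 2). A quick conversion sketch:

        package main

        import "fmt"

        func main() {
            // -1073741819 viewed as an unsigned 32-bit Windows status code
            status := int32(-1073741819)
            fmt.Printf("%#x\n", uint32(status)) // prints 0xc0000005 (access violation)
        }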

    Read the article

  • Getting size of a specific byte array from an array of pointers to bytes

    - by Pat James
    In the following example C code, used in an Arduino project, I am looking for the ability to get the size of a specific byte array within an array of pointers to bytes, for example void setup() { Serial.begin(9600); // for debugging byte zero[] = {8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199}; byte one[] = {8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191}; byte two[] = {29,7,1,8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199, 2, 2, 8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191}; byte* numbers[3] = {zero, one, two }; function(numbers[1], sizeof(numbers[1])/sizeof(byte)); //doesn't work as desired, always passes 2 as the length function(numbers[1], 25); //this works } void loop() { } void function( byte arr[], int len ) { Serial.print("length: "); Serial.println(len); for (int i=0; i<len; i++){ Serial.print("array element "); Serial.print(i); Serial.print(" has value "); Serial.println((int)arr[i]); } } In this code, I understand that sizeof(numbers[1])/sizeof(byte) doesn't work because numbers[1] is a pointer and not the byte array value. Is there a way in this example that I can, at runtime, get at the length of a specific (runtime-determined) byte array within an array of pointers to bytes? Understand that I am limited to developing in C (or assembly) for an Arduino environment. Also open to other suggestions rather than the array of pointers to bytes. The overall objective is to organize lists of bytes which can be retrieved, with length, at runtime.
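    Since the length is only recoverable where each array is defined, one workaround sketch (under the question's own setup(), names unchanged) is to capture the lengths there with sizeof and carry them in a parallel array:

        // Sketch: record each array's length at the point where sizeof still works,
        // then pass the stored length alongside the pointer.
        byte* numbers[3] = { zero, one, two };
        int lengths[3] = {
            sizeof(zero) / sizeof(byte),
            sizeof(one)  / sizeof(byte),
            sizeof(two)  / sizeof(byte)
        };

        function(numbers[1], lengths[1]);   // passes 25 for the "one" array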

    Read the article

  • Why can't I build this Javascript object?

    - by Alex Mcp
    I have an object I'm trying to populate from another object (that is, iterate over a return object to produce an object with only selected values from the original). My code looks like this: var collect = {}; function getHistoricalData(username){ $.getJSON("http://url/" + username + ".json?params", function(data){ for (var i=0; i < data.length; i++) { console.log(i); collect = { i : {text : data[i].text}}; $("#wrap").append("<span>" + data[i].text + "</span><br />"); }; console.log(collect); }); } So I'm using Firebug for debugging, and here's what I know: the JSON object is intact; console.log(i) is showing the numbers 1-20 as expected; and when I log the collect object at the end, its structure is this: var collect = { i : {text : "the last iteration's text"}}; So the incrementer is "applying" to the data[i].text and returning the text value, but it's not doing what I expected, which is to create a new member of the collect object; it's just overwriting collect.i 20 times and leaving me with the last value. Is there a different syntax I need to be using for assigning object members? I tried collect.i.text = and collect[i].text = and the error was that whatever I tried was undefined. I'd love to know what's going on here, so the more in-depth an explanation the better. Thanks!
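    The assignment collect = { i : {text : data[i].text}} replaces the whole object on every pass and uses the literal property name "i" rather than the loop variable, which is why only the last iteration survives. A small sketch of the bracket-notation version (inside the same $.getJSON callback, using the same data variable) that adds one member per iteration instead:

        var collect = {};
        for (var i = 0; i < data.length; i++) {
            // Bracket notation evaluates i, so each pass adds a new member.
            // (collect[i].text = ... failed earlier only because collect[i]
            // did not exist yet; assign the whole object first.)
            collect[i] = { text: data[i].text };
        }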

    Read the article

  • Accessing objects on one nib file from another nib file

    - by ASN
    I have two nib files, Main.nib and Preferences.nib. I have a class CalendarView.m that inherits from NSView. It has a method for drawing the calendar: - (void)drawCalendar; In the Main.nib window I have an NSWindow (my main window) which has an NSView item on it for displaying the calendar. The main window has an NSPopUp button that shows a menu when the application runs. The menu has a 'Preferences' menu item which, on clicking, shows a preferences panel (NSPanel); that panel is in the Preferences.nib file. The panel has a color well item. When the application is executed, clicking on the color well shows a color panel to choose a color, but I am unable to apply that color to my calendar. I have another class, PreferencesWindowController.m, that shows the preferences panel. It has a method changeColor that takes the selected color from the colors panel and makes changes to the user defaults. I have IBOutlet CalendarView *calView as a member in the PreferencesWindowController.h class. In the changeColor method I am writing: calView = [[CalendarView alloc] init]; [calView drawCalendar]; On debugging, the call goes to the drawCalendar method of CalendarView but skips some part of it and goes to the end of the function without redrawing. On restarting the application the color is applied, but I want it to happen while the application is executing, so that there is no need to rerun the application to view changes.

    Read the article

  • Using libcurl to create a valid POST

    - by Haraldo
    static int get( const char * cURL, const char * cParam ) { CURL *handle; CURLcode result; std::string buffer; char errorBuffer[CURL_ERROR_SIZE]; //struct curl_slist *headers = NULL; //headers = curl_slist_append(headers, "Content-Type: Multipart/Related"); //headers = curl_slist_append(headers, "type: text/xml"); // Create our curl handle handle = curl_easy_init(); if( handle ) { curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, errorBuffer); //curl_easy_setopt(handle, CURLOPT_HEADER, 0); //curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers); curl_easy_setopt(handle, CURLOPT_POST, 1); curl_easy_setopt(handle, CURLOPT_POSTFIELDS, cParam); curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, strlen(cParam)); curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1); curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, Request::writer); curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buffer); curl_easy_setopt(handle, CURLOPT_USERAGENT, "libcurl-agent/1.0"); curl_easy_setopt(handle, CURLOPT_URL, cURL); result = curl_easy_perform(handle); curl_easy_cleanup(handle); } if( result == CURLE_OK ) { return atoi( buffer.c_str() ); } return 0; } Hi there, first of all I'm having trouble debugging this in visual studio express 2008 so I'm unsure what buffer.c_str() might actually be returning but I am outputting 1 or 0 to the web page being posted to. Therefore I'm expecting the buffer to be one or the other, however I seem to only be returning 0 or equivalent. Does the code above look like it will return what I expect or should my variable types be different? The conversion using "atoi" may be an issue. Any thought would be much appreciated.
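    One thing worth noting in the code as posted (a sketch of the change only, not a diagnosis of the 1-vs-0 output): result is only assigned inside if( handle ), so the result == CURLE_OK test afterwards can read an uninitialized value. Keeping the check and the conversion next to the perform call avoids that:

        // Sketch: initialize result and only trust the buffer when the transfer succeeded.
        CURLcode result = CURLE_FAILED_INIT;

        if (handle) {
            // ... same curl_easy_setopt() calls as above ...
            result = curl_easy_perform(handle);
            curl_easy_cleanup(handle);

            if (result != CURLE_OK) {
                // errorBuffer was filled via CURLOPT_ERRORBUFFER; log it here
                return 0;
            }
            return atoi(buffer.c_str());
        }
        return 0;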

    Read the article

  • UAC needed for a console application

    - by Daok
    I have a console application that requires some code that needs administrator privileges. I have read that I need to add a manifest file, myprogram.exe.manifest, that looks like this: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="requireAdministrator"> </requestedPrivileges> </security> </trustInfo> </assembly> But it still doesn't raise the UAC prompt (in the console or when debugging in VS). How can I solve this issue? Update: I am able to make it work if I run the solution as Administrator or when I run the /bin/*.exe as Administrator. I am still wondering if it's possible to have something that will pop up when the application starts instead of explicitly right-clicking and choosing Run as Administrator.
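    For reference, a well-formed version of that fragment is sketched below (the requestedExecutionLevel element must be closed, and the uiAccess attribute shown here is an assumption). In Visual Studio 2008 the usual route is to add the manifest to the project and set it as the application manifest so it gets embedded into the .exe, rather than only shipping the .manifest file alongside it.

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <!-- self-closed element; uiAccess="false" is assumed here -->
                <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>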

    Read the article

  • The best way to build iPhone/iPad static libraries?

    - by Derek Clarkson
    Hi everyone, I have a couple of static iPhone/iPad libraries I am working on. The problem I am looking for advice on is the best way to package the libraries. My objective is to make it easy to use the libraries in other projects and include the correct one in a build with minimal problems. To make it more interesting, I currently build 4 versions of each library as follows: armv6/armv7 release (devices), i386 release (simulator), armv6/armv7 debug (devices), i386 debug (simulator). The difference between the release and debug versions is that the debug versions contain a lot of NSLog(...) code which enables people to see what's going on internally as an aid to debugging. Currently when I build the whole project I arrange the libraries into two directories like this: release (lib-device.a, lib-simulator.a) and debug (lib-device.a, lib-simulator.a). This works OK except that when included in projects, both paths are added to the library search path and switching a target from one to the other is a pain; or, alternatively, I end up with two targets. The alternative I am thinking of is to change the directories like this: release (device/lib.a, simulator/lib.a) and debug (device/lib.a, simulator/lib.a). In playing with Xcode it appears that all Xcode uses the library references of a project for is to get the name of the library file, which it then looks up in the library paths. Thus by parameterising the library path with the current build type and target device, I can effectively auto-switch. What do you guys think? Is there a better way to do this? ciao Derek
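    One approach that keeps a single search path per configuration is to merge each device/simulator pair into one fat (universal) static library with lipo, so only release vs. debug remain as separate directories. A sketch, with the question's own directory names assumed:

        # Sketch: combine the armv6/armv7 device build and the i386 simulator build
        # into one universal static library per configuration.
        lipo -create release/lib-device.a release/lib-simulator.a -output release/lib.a
        lipo -create debug/lib-device.a   debug/lib-simulator.a   -output debug/lib.a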

    Read the article

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008 (C#, with SQL Server from Visual Studio 2008). The database has around 20 tables and many fields in each. I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to make the project into software which I can deliver to my professor. That is, he can just double-click the icon and the software simply starts; no Visual Studio 2008 needed to start debugging. The database will be on one powerful computer (dual core, latest everything, Windows XP) and the user will access it from another computer connected over the LAN. I am able to change the connection string to the shared database using Visual Studio 2008 / the debugger whenever the server changes, but how am I supposed to do that when it's shipped as software? There will be many clients. Am I supposed to give the same software to everyone, so they can all connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder which will be shared with read and write access, so it's not guaranteed that only one user will write at a time. Is there any coding needed for this?

    Read the article

  • Passing Object to Service in WCF

    - by hgulyan
    Hi, I have my custom class Customer with its properties. I added the DataContract attribute above the class and DataMember to the properties, and it was working fine, but when I call a service class's function, passing a Customer instance as a parameter, some of my properties get 0 values. While debugging I can see my properties' values, and after it gets to the function, some properties' values are 0. Why could this be? There's no code between these two actions. The DataContract attribute works fine, everything's OK. Any suggestions on this issue? I tried changing ByRef to ByVal, but it doesn't change anything. Why would it pass other values correctly but some of the integer types as just 0? Maybe the answer is simple, but I can't figure it out. Thank you. <DataContract()> Public Class Customer Private Type_of_clientField As Integer = -1 <DataMember(Order:=1)> Public Property type_of_client() As Integer Get Return Type_of_clientField End Get Set(ByVal value As Integer) Type_of_clientField = value End Set End Property End Class <ServiceContract(SessionMode:=SessionMode.Allowed)> <DataContractFormat()> Public Interface CustomerService <OperationContract()> Function addCustomer(ByRef customer As Customer) As Long End Interface The type_of_client property's value is 6 before I call the addCustomer function. After it enters that function the value is 0. UPDATE: The issue is in instance creation. When I create an instance of a class on the client side that is stored on the service side, some of my properties pass 0 or nothing, but when I call a function of a service class that returns a new instance of that class, it works fine. What is the difference? Could this be a serialization issue?

    Read the article

  • Why aren't my objects sorting with sortedArrayUsingDescriptors?

    - by clozach
    I expected the code below to return the objects in imageSet as a sorted array. Instead, there's no difference in the ordering before and after. NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"imageID" ascending:YES]; NSSet *imageSet = collection.images; for (CBImage *image in imageSet) { NSLog(@"imageID in Set: %@",image.imageID); } NSArray *imageArray = [[imageSet allObjects] sortedArrayUsingDescriptors:(descriptor, nil)]; [descriptor release]; for (CBImage *image in imageArray) { NSLog(@"imageID in Array: %@",image.imageID); } Fwiw, CBImage is defined in my core data model. I don't know why sorting on managed objects would work any differently than on "regular" objects, but maybe it matters. As proof that @"imageID" should work as the key for the descriptor, here's what the two log loops above output for one of the sets I'm iterating through: 2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360339 2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360337 2010-05-05 00:49:52.877 Cover Browser[38678:207] imageID in Array: 360338 2010-05-05 00:49:52.878 Cover Browser[38678:207] imageID in Array: 360336 2010-05-05 00:49:52.879 Cover Browser[38678:207] imageID in Array: 360335 ... For extra credit, I'd love to get a general solution to troubleshooting NSSortDescriptor troubles (esp. if it also applies to troubleshooting NSPredicate). The functionality of these things seems totally opaque to me and consequently debugging takes forever.
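    One detail in the call as posted: (descriptor, nil) is a parenthesised C comma expression, not an array, so it evaluates to nil and the receiver is asked to sort with no descriptors, which leaves the order unchanged. A hedged sketch of the intended call, using the question's own variables:

        // sortedArrayUsingDescriptors: expects an NSArray of descriptors
        NSArray *sorted = [[imageSet allObjects]
            sortedArrayUsingDescriptors:[NSArray arrayWithObjects:descriptor, nil]];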

    Read the article

  • Multiplication of 2 positive numbers giving a negative result

    - by krandiash
    My program is an implementation of a bloom filter. However, when I'm storing my hash function results in the bit array, the function (of the form f(i) = (a*i + b) % m where a,b,i,m are all positive integers) is giving me a negative result. The problem seems to be in the calculation of a*i which is coming out to be negative. Ignore the print statements in the code; those were for debugging. Basically, the value of temp in this block of code is coming out to be negative and so I'm getting an ArrayOutOfBoundsException. m is the bit array length, z is the number of hash functions being used, S is the set of values which are members of this bloom filter and H stores the values of a and b for the hash functions f1, f2, ..., fz. public static int[] makeBitArray(int m, int z, ArrayList<Integer> S, int[] H) { int[] C = new int[m]; for (int i = 0; i < z; i++) { for (int q = 0; q < S.size() ; q++) { System.out.println(H[2*i]); int temp = S.get(q)*(H[2*i]); System.out.println(temp); System.out.println(S.get(q)); System.out.println(H[2*i + 1]); System.out.println(m); int t = ((H[2*i]*S.get(q)) + H[2*i + 1])%m; System.out.println(t); C[t] = 1; } } return C; } Any help is appreciated.
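    Since a*i can exceed Integer.MAX_VALUE and wrap negative before % m is applied, one sketch of a fix is to do the hash arithmetic in 64-bit and fold any remaining negative remainder back into range (variable names taken from the question's loop):

        // Sketch: widen to long before multiplying so (a*i + b) cannot wrap,
        // then normalise the remainder into [0, m).
        long product = (long) H[2 * i] * S.get(q) + H[2 * i + 1];
        int t = (int) (product % m);
        if (t < 0) {          // only possible if H or S ever hold negative values
            t += m;
        }
        C[t] = 1;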

    Read the article

  • server/client server connection

    - by user312054
    I have a server-side program that creates a listening server-side socket. The problem is that it seems as if, when the client side sends a connect request, it gets rejected if the server-side socket is listening but connects if the server-side program is not running. I can see the server-side program getting the client request when debugging. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server-side accept code snippet is this: void CSocketListen::OnAccept(int nErrorCode) { CSocket::OnAccept(nErrorCode); CSocketServer* SocketPtr = new CSocketServer(); if (Accept(*SocketPtr)) { // add to list of client sockets connected } else { delete SocketPtr; } The client-side connect code is like this: SOCKET cellModem; sockaddr_in handHeld; handHeld.sin_family = AF_INET; //Address family handHeld.sin_addr.s_addr = inet_addr("127.0.0.1"); handHeld.sin_port = htons((u_short)1113); //port to use cellModem=socket(AF_INET,SOCK_STREAM,0); if(cellModem == INVALID_SOCKET) { // log socket failure return false; } else { // log socket success } if (connect(cellModem,(const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0 ) { // log socket connection success } else { // log socket connection failure closesocket(cellModem); }
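    One hedged observation about the client snippet that matches the symptom described ("connects" when nothing is listening, "rejected" when the server is up): Winsock's connect() returns 0 on success and SOCKET_ERROR on failure, so the != 0 test logs the outcomes the wrong way round. A sketch of the corrected check:

        // connect() returns 0 on success, SOCKET_ERROR (-1) on failure
        if (connect(cellModem, (const struct sockaddr*)&handHeld, sizeof(handHeld)) == 0)
        {
            // log socket connection success
        }
        else
        {
            // log socket connection failure (WSAGetLastError() gives the reason)
            closesocket(cellModem);
        }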

    Read the article

  • WCF Fails when using impersonation over 2 machine boundaries (3 machines)

    - by MrTortoise
    These scenarios work in their individual pieces; it's when I put it all together that it breaks. I have a WCF service using netTCP that uses impersonation to get the caller's ID (role-based security will be used at this level). On top of this is a WCF service using basicHTTP with TransportCredentialOnly, which also uses impersonation. I then have a client front end that connects to the basicHttp service. The aim of the game is to return the client's username from the netTCP service at the bottom, so ultimately I can use role-based security there. Each service is on a different machine, and each service works when you remove any calls they make to other services and run a client for them both locally and remotely, i.e. the problem only manifests when you jump across more than one machine boundary: the setup breaks when I connect each part together, but the parts work fine on their own. I also specify [OperationBehavior(Impersonation = ImpersonationOption.Required)] in the method and have IIS set up to only allow Windows authentication (actually I have anonymous enabled still, but disabling it makes no difference). This impersonation works fine in the scenario where I have a netTCP service on machine A, a client with a basicHttp service on machine B, and a client for the basicHttp service also on machine B... however, if I move that client to any machine C I get the following error: 'The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:10:00''; the inner message is 'An existing connection was forcibly closed by the remote host'. I am beginning to think this is more a network issue than config... but then I'm grasping at straws...
the config files are as follows (heading from the client down to the netTCP layer) <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="basicHttpBindingEndpoint" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:10:00" sendTimeout="00:02:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://panrelease01/WCFTopWindowsTest/Service1.svc" binding="basicHttpBinding" bindingConfiguration="basicHttpBindingEndpoint" contract="ServiceReference1.IService1" name="basicHttpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour" /> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> the service for the client (basicHttp service and the client for the netTCP service) <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBindingEndpoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> <basicHttpBinding> <binding name="basicHttpWindows"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows"></transport> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="net.tcp://5d2x23j.panint.com/netTCPwindows/Service1.svc" binding="netTcpBinding" bindingConfiguration="netTcpBindingEndpoint" contract="ServiceReference1.IService1" name="netTcpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation" allowNtlm="true"/> </clientCredentials> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> 
<serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="WCFTopWindowsTest.Service1" behaviorConfiguration="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="basicHttpWindows" name ="basicHttpBindingEndpoint" contract ="WCFTopWindowsTest.IService1"> </endpoint> </service> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration> then finally the service for the netTCP layer <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <authentication mode="Windows"></authentication> <authorization> <allow roles="*"/> </authorization> <compilation debug="true" targetFramework="4.0" /> <identity impersonate="true" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTCPwindows"> <security mode="Transport"> <transport clientCredentialType="Windows"></transport> </security> </binding> </netTcpBinding> </bindings> <services> <service behaviorConfiguration="netTCPwindows.netTCPwindowsBehaviour" name="netTCPwindows.Service1"> <endpoint address="" bindingConfiguration="netTCPwindows" binding="netTcpBinding" name="netTcpBindingEndpoint" contract="netTCPwindows.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mextcp" binding="mexTcpBinding" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8721/test2" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="netTCPwindows.netTCPwindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="false" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration>
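    One hedged avenue to check, given that each piece works across a single machine boundary: an impersonation-level token obtained on the middle tier cannot normally be used to authenticate to a third machine, so the second hop generally needs delegation (Kerberos) rather than impersonation. The client-credential sketch below shows the setting involved; the domain and SPN configuration that delegation also requires is assumed and not shown.

        <behavior name="DelegationBehaviour">
          <clientCredentials>
            <!-- Impersonation only flows one hop; Delegation lets the middle tier
                 pass the caller's identity on to the netTCP service -->
            <windows allowedImpersonationLevel="Delegation" />
          </clientCredentials>
        </behavior>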

    Read the article

  • Displaying rectangles in the game window with XNA

    - by blerh
    I want to divide my game grid into an array of rectangles. Each rectangle is 40x40 and there are 14 rectangles in every column, with a total of 25 columns. This covers a game area of 560x1000. This is the code I have set up to make the first column of rectangles on the game grid: Rectangle[] gameTiles = new Rectangle[15]; for (int i = 0; i <= 15; i++) { gameTiles[i] = new Rectangle(0, i * 40, 40, 40); } I'm pretty sure this works, but of course I cannot confirm it because rectangles do not render on the screen for me to physically see them. What I would like to do for debugging purposes is to render a border, or fill the rectangle with color so I can see it on the game itself, just to make sure this works. Is there a way to make this happen? Or any relatively simple way I can just make sure that this works? Thank you very much.
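    For debug visualisation, one common sketch is to create a 1x1 white Texture2D once and stretch it over each Rectangle when drawing (spriteBatch, GraphicsDevice and the tint colour below are assumptions from the standard Game template). As an aside, note that the loop i <= 15 runs one index past the end of a 15-element array.

        // In LoadContent(): a reusable 1x1 white texture
        Texture2D pixel = new Texture2D(GraphicsDevice, 1, 1);
        pixel.SetData(new[] { Color.White });

        // In Draw(): stretch the pixel over every tile to make the grid visible
        spriteBatch.Begin();
        foreach (Rectangle tile in gameTiles)
        {
            spriteBatch.Draw(pixel, tile, Color.Red);   // filled tile for debugging
        }
        spriteBatch.End();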

    Read the article

  • Does Java 6 open a default port for JMX remote connections?

    - by Bob Cross
    My specific question has to do with JMX as used in JDK 1.6: if I am running a Java process using JRE 1.6 with com.sun.management.jmxremote in the command line, does Java pick a default port for remote JMX connections? Backstory: I am currently trying to develop a procedure to give to a customer that will enable them to connect to one of our processes via JMX from a remote machine. The goal is to facilitate their remote debugging of a situation occurring on a real-time display console. Because of their service level agreement, they are strongly motivated to capture as much data as possible and, if the situation looks too complicated to fix quickly, to restart the display console and allow it to reconnect to the server-side. I am aware that I could run jconsole on JDK 1.6 processes and jvisualvm on post-JDK 1.6.7 processes given physical access to the console. However, because of the operational requirements and people problems involved, we are strongly motivated to grab the data that we need remotely and get them up and running again. EDIT: I am aware of the command line port property com.sun.management.jmxremote.port=portNum. The question that I am trying to answer is, if you do not set that property at the command line, does Java pick another port for remote monitoring? If so, how could you determine what it might be?
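    To the specific question: with only com.sun.management.jmxremote set, JDK 6 enables the local, same-machine attach connector but does not open a fixed port for remote connections; remote access needs the port given explicitly. A sketch of the usual command line (the jar name, port number, and the authentication/SSL choices are assumptions to adapt to the customer's security policy):

        java -Dcom.sun.management.jmxremote \
             -Dcom.sun.management.jmxremote.port=9010 \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             -jar console.jar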

    Read the article

  • Mixing RewriteRule and ProxyPass in Apache

    - by Taylor L
    I was working on debugging an issue today related to mixing mod_proxy and mod_rewrite together and I ended up having to use balancer://mycluster in the RewriteRule in order to stop receiving a 404 error from Apache. I have two questions: 1) Is there any other way to get the rewritten URL to go through the balancer without adding balancer://mycluster into the RewriteRule? 2) Is there a way to define all the parameters I defined in ProxyPass (stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off) in either the <Proxy> or RewriteRule? I'm concerned the requests that match the new RewriteRule won't load balance in the same fashion as those that go through ProxyPass (like /app1/something.do)? Below are the relevant sections of the httpd.conf. I am using Apache 2.2. <Proxy balancer://mycluster> Order deny,allow Allow from all BalancerMember ajp://my.domain.com:8009 route=node1 BalancerMember ajp://my.domain.com:8009 route=node2 </Proxy> ProxyPass /app1 balancer://mycluster/app1 stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off ProxyPassReverse /app1 ajp://my.domain.com:8009/app1 ... RewriteRule ^/static/cms/image/(.*)\.(.*) balancer://mycluster/app1/$1.$2 [P,L]
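    On question 2, one hedged option in Apache 2.2's mod_proxy is to move the balancer parameters onto the <Proxy> block with ProxySet, so both the ProxyPass line and any [P] RewriteRule targeting balancer://mycluster pick up the same stickysession and lbmethod settings. A sketch based on the configuration in the question:

        <Proxy balancer://mycluster>
            Order deny,allow
            Allow from all
            BalancerMember ajp://my.domain.com:8009 route=node1
            BalancerMember ajp://my.domain.com:8009 route=node2
            ProxySet stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off
        </Proxy>

        ProxyPass /app1 balancer://mycluster/app1
        ProxyPassReverse /app1 ajp://my.domain.com:8009/app1

        RewriteRule ^/static/cms/image/(.*)\.(.*) balancer://mycluster/app1/$1.$2 [P,L]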

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well on huge documents, such as a .pst file (the Outlook mail storage); it can build an index file that includes all the information from the .pst. The only problem is that the document is sometimes too large and contains a great many words. When I search using Lucene, it can only process the front part of the document: if a word occurs in the back part, it can't be found and there are no hits in the result. But when I split the document into several parts in a crude way while debugging, and search every part, it works well. So I want to know how to split the indexed content, and what size the limit for searching should be. Cheers, and waiting for a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Hi there, following what Coady said, I set the length to a maximum of 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's text to a string array to analyze; one document has 79680 words, including spaces and symbols. When I search for a certain word, it just returns a count of 300, but actually there are more than 300 results. For the same reason, when I search for a word in the back part of the document, it also can't be found. //////////////set the length idexwriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as others use. I found this problem when I needed to count every word's hits in a document, and I also found it couldn't find words in the back part of the document. Please help me figure out: is there some searcher length to set somewhere? Have you run into this problem?
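    One point that may explain both symptoms (a hedged sketch, since the indexing code is not shown): maxFieldLength is an indexing-time limit, with a default of 10,000 terms per field, so terms beyond it are never written to the index at all. Raising the limit after the documents were added, or on a searcher-side object, does not bring them back; it has to be set on the IndexWriter before the .pst content is indexed, and the data re-indexed. A Lucene.Net sketch (indexDirectory and analyzer are assumed):

        // Sketch: raise the per-field term limit at indexing time, then re-index.
        IndexWriter writer = new IndexWriter(indexDirectory, analyzer, true);
        writer.SetMaxFieldLength(int.MaxValue);   // default is 10,000 terms per field
        // ... writer.AddDocument(...) for the .pst content ...
        writer.Optimize();
        writer.Close();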

    Read the article

  • What is the likely cause of the Android runtime exception "No suitable Log implementation" related to logging?

    - by M.Bearden
    I am creating an Android app that includes a third party jar. That third party jar utilizes internal logging that is failing to initialize when I run the app, with this error: "org.apache.commons.logging.LogConfigurationException: No suitable Log implementation". The 3rd party jar appears to be using org.apache.commons.logging and to depend on log4j, specifically log4j-1.2.14.jar. I have packaged the log4j jar into the Android app. The third party jar was packaged with a log4j.xml configuration file, which I have tried packaging into the app as an XML resource (and also as a raw resource). The "No suitable Log implementation" error message is not very descriptive, and I have no immediate familiarity with Java logging. So I am looking for likely causes of the problem (what class or configuration resources might I be missing?) or for some debugging technique that will result in a different error message that is more explicit about the problem. I do not have access to source code for the 3rd party jar. Here is the exception stack trace. When I run the app, I get the following exception as soon as one of the third party jar classes attempts to initialize its internal logging. DEBUG/AndroidRuntime(15694): Shutting down VM WARN/dalvikvm(15694): threadid=3: thread exiting with uncaught exception (group=0x4001b180) ERROR/AndroidRuntime(15694): Uncaught handler: thread main exiting due to uncaught exception ERROR/AndroidRuntime(15694): java.lang.ExceptionInInitializerError ERROR/AndroidRuntime(15694): Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log implementation ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:842) ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601) ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333) ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307) ERROR/AndroidRuntime(15694): at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645) ERROR/AndroidRuntime(15694): at org.apache.commons.configuration.ConfigurationFactory.<clinit>(ConfigurationFactory.java:77)

    Read the article

  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name that it gets from the database, to create an instance of said type and add it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed. However, I'm experiencing some issues whereby if the web.config gets updated (and, in some scarier instances if the application pool gets recycled) then it will lose all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread assemblies list. To get around this I added a second check which is a bit dirty but recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If it doesn't, it loads it using Assembly.Load and fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly, for example my actual web project rather than the framework itself - and it then complains that duplicate references have been added! Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to be all in one "hit" so that the site visitors don't see any difference other than a small delay, so an app_offline.htm file is out of the question. Programatically renaming a DLL in the bin and then naming it back does work, but requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!

    Read the article

  • Data Flow Object Graph and An Execution Engine for it

    - by M Dotnet
    I would like to build a custom workflow engine from scratch. The input to this workflow is a data flow diagram which is composed of a series of activities connected together through lines, where each line represents the data flow. Each activity can export multiple outputs. Activities are complex math functions, but the logic is hidden from the user. My workflow engine's job is to execute the given data flow diagram. Each activity within the data flow diagram is a custom activity, and each activity can produce different outputs. How do you suggest I model the data flow diagram object? I need to be able to construct the data flow diagram programmatically (no need for drag and drop) but I need to display the final result graphically (for display and debugging purposes). Are there any libraries out there that I could use? Should I keep the workflow representable as XML? I know that there are many projects out there trying to do essentially the same thing by building such workflow engines, but I need something lightweight and open source. I do not need any state machine execution engine; mine is primarily a sequential workflow with fork and join capabilities. My activities are wrappable as basic C# classes and I do NOT want to use anything as heavy as .NET Workflow Foundation.
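    As a starting point for the object-model question, a minimal C# sketch (all names are illustrative assumptions, not from any library) where each activity exposes named output ports and each connection wires one activity's output to another's input; an execution engine can then topologically sort the nodes and run them in dependency order, and the same graph can be serialized to XML or walked for display:

        using System.Collections.Generic;

        // Minimal illustrative shape for a data-flow graph.
        public abstract class Activity
        {
            public string Name { get; set; }
            // Run the activity: named inputs in, named outputs out.
            public abstract IDictionary<string, object> Execute(IDictionary<string, object> inputs);
        }

        public class Connection
        {
            public Activity Source { get; set; }
            public string SourceOutput { get; set; }   // which output port of Source
            public Activity Target { get; set; }
            public string TargetInput { get; set; }    // which input port of Target
        }

        public class DataFlowDiagram
        {
            public List<Activity> Activities { get; } = new List<Activity>();
            public List<Connection> Connections { get; } = new List<Connection>();
        }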

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to setup Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSH's to the Linux-x86 PC, rsync's the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Liunx-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsync'd back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)

    Read the article

  • Problem with using malloc in linked lists (urgent! help please)

    - by Abhinav
    I've been working on this program for five months now. It's a real-time application for a sensor network. I create several linked lists during the life of the program, and I'm using malloc to create a new node in the list. What happens is that the program suddenly stops, or goes crazy and restarts. I'm using AVR and the microcontroller is an ATmega1281. After a lot of debugging I figured out that malloc is causing the problem. I do not free the memory after exiting the function that creates a new node, so I'm guessing that this is eventually causing the heap memory to overflow or something like that. Now, if I use the free() function to deallocate the memory at the end of the function that uses malloc, the program just gets stuck when control reaches free(). Is this because the memory becomes too fragmented after calling free()? I also create references, for example if 'head' is a new linked list and I create another pointer called current and make it equal to head: table *head; table *current = head; At the end of the function, if I use free(current); current = NULL; then the program gets stuck there. I don't know what to do. What am I doing wrong? Is there a way to increase the size of the heap memory? Please help...
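    Two things in that description are worth separating (sketch below under the question's own type name; the next field is an assumption since the struct isn't shown): current was assigned from head, so it aliases the same node and must not be freed separately, and each node allocated for the list has to be freed exactly once by walking the links. The ATmega1281 also has only 8 KB of SRAM shared between stack and heap, so every malloc return value needs a NULL check.

        #include <stdlib.h>

        /* Sketch: 'table' is assumed to be a singly linked node with a 'next' field. */
        table *new_node(void)
        {
            table *n = (table *)malloc(sizeof(table));
            if (n == NULL) {
                /* heap exhausted: report the error instead of using the pointer */
                return NULL;
            }
            n->next = NULL;
            return n;
        }

        /* Free every node exactly once; aliases like 'current' point into the same
           list and must not be passed to free() a second time. */
        void free_list(table *head)
        {
            while (head != NULL) {
                table *next = head->next;
                free(head);
                head = next;
            }
        }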

    Read the article
