Search Results

Search found 5202 results on 209 pages for 'char'.


  • :contains for multiple words

    - by Emin
    I am using the following jQuery: var etag='kate' if (etag.length > 0) { $('div').each(function () { $(this).find('ul:not(:contains(' + etag + '))').hide(); $(this).find('ul:contains(' + etag + ')').show(); }); } against the following HTML: <div id="2"> <ul> <li>john</li> <li>jack</li> </ul> <ul> <li>kate</li> <li>clair</li> </ul> <ul> <li>hugo</li> <li>desmond</li> </ul> <ul> <li>said</li> <li>jacob</li> </ul> </div> <div id="3"> <ul> <li>jacob</li> <li>me</li> </ul> <ul> <li>desmond</li> <li>george</li> </ul> <ul> <li>allen</li> <li>kate</li> </ul> <ul> <li>salkldf</li> <li>3kl44</li> </ul> </div> Basically, as long as etag is a single word, the code works perfectly and hides those elements that do not contain etag. My problem is that when etag is multiple words (and I don't have control over it; it comes from a database and could be a combination of multiple words separated by a space character), the code does not work. Is there any way to achieve this?

    Read the article

  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3. 268439552 is 256MB. Cygwin's maxium memory size is set to 1024MB so I'm guessing that it has a different maximum memory size for perl? How can I increase the maximum memory size that perl programs can use? update: This is where the error occurs (in Git.pm): while (1) { my $bytesLeft = $size - $bytesRead; last unless $bytesLeft; my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024; my $read = read($in, $blob, $bytesToRead, $bytesRead); //line 898 unless (defined($read)) { $self->_close_cat_blob(); throw Error::Simple("in pipe went bad"); } $bytesRead += $read; } I've added a print before line 898 to print out $bytesToRead and $bytesRead and the result was 1024 for $bytesToRead, and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and it has already read 128MB. Perl's 'read' function must be out of memory and is trying to request for double it's memory size...is there a way to specify how much memory to request? or is that implementation dependent? UPDATE2: While testing memory allocation in cygwin: This C program's output was 1536MB int main() { unsigned int bit=0x40000000, sum=0; char *x; while (bit > 4096) { x = malloc(bit); if (x) sum += bit; bit >>= 1; } printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0); return 0; } While this perl program crashed if the file size is greater than 384MB (but succeeded if the file size was less). open(F, "<400") or die("can't read\n"); $size = -s "400"; $read = read(F, $s, $size); The error is similar Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

    Read the article

  • CUDA, more threads for same work = Longer run time despite better occupancy, Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem: #include <stdio.h> #include <stdlib.h> #include <cuda_runtime.h> __global__ void less_threads(float * d_out) { int num_inliers; for (int j=0;j<800;++j) { //Do 12 computations num_inliers += threadIdx.x*1; num_inliers += threadIdx.x*2; num_inliers += threadIdx.x*3; num_inliers += threadIdx.x*4; num_inliers += threadIdx.x*5; num_inliers += threadIdx.x*6; num_inliers += threadIdx.x*7; num_inliers += threadIdx.x*8; num_inliers += threadIdx.x*9; num_inliers += threadIdx.x*10; num_inliers += threadIdx.x*11; num_inliers += threadIdx.x*12; } if (threadIdx.x == -1) d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers; } __global__ void more_threads(float *d_out) { int num_inliers; for (int j=0;j<800;++j) { // Do 4 computations num_inliers += threadIdx.x*1; num_inliers += threadIdx.x*2; num_inliers += threadIdx.x*3; num_inliers += threadIdx.x*4; } if (threadIdx.x == -1) d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers; } int main(int argc, char* argv[]) { float *d_out = NULL; cudaMalloc((void**)&d_out,sizeof(float)*25000); more_threads<<<780,128>>>(d_out); less_threads<<<780,32>>>(d_out); return 0; } Note that both kernels should do the same amount of work in total (the if threadIdx.x == -1 check is a trick to stop the compiler optimising everything out and leaving an empty kernel). The work should be the same, as more_threads is using 4 times as many threads but with each thread doing 4 times less work. The significant results from the profiler are as follows: more_threads: GPU runtime = 1474 us, reg per thread = 6, occupancy=1, branch=83746, divergent_branch = 26, instructions = 584065, gst request=1084552 less_threads: GPU runtime = 921 us, reg per thread = 14, occupancy=0.25, branch=20956, divergent_branch = 26, instructions = 312663, gst request=677381 As I said previously, the run time of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here? Thanks
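
    One way to sanity-check the profiler's runtime figures is to time the two launches directly with CUDA events. The harness below is only an illustrative sketch (it has its own main, and the wrapper names launch_more/launch_less are invented); it assumes it sits in the same .cu file as the two kernels above.

    ```cpp
    // Sketch: timing the two launches from the question with CUDA events.
    // Assumes more_threads/less_threads are defined above in the same .cu file.
    #include <cstdio>
    #include <cuda_runtime.h>

    static float time_launch(void (*launch)(float*), float* d_out) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        launch(d_out);                      // fire the kernel on the default stream
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);         // wait until the kernel has finished
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return ms;
    }

    static void launch_more(float* d_out) { more_threads<<<780, 128>>>(d_out); }
    static void launch_less(float* d_out) { less_threads<<<780, 32>>>(d_out); }

    int main() {
        float* d_out = NULL;
        cudaMalloc((void**)&d_out, sizeof(float) * 25000);
        std::printf("more_threads: %.3f ms\n", time_launch(launch_more, d_out));
        std::printf("less_threads: %.3f ms\n", time_launch(launch_less, d_out));
        cudaFree(d_out);
        return 0;
    }
    ```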

    Read the article

  • What is `objc_msgSend_fixup`, exactly?

    - by Luis Antonio Botelho O. Leite
    I'm messing around with the Objective-C runtime, trying to compile objective-c code without linking it against libobjc, and I'm having some segmentation fault problems with a program, so I generated an assembly file from it. I think it's not necessary to show the whole assembly file. At some point of my main function, I've got the following line (which, by the way, is the line after which I get the seg fault): callq *l_objc_msgSend_fixup_alloc and here is the definition for l_objc_msgSend_fixup_alloc: .hidden l_objc_msgSend_fixup_alloc # @"\01l_objc_msgSend_fixup_alloc" .type l_objc_msgSend_fixup_alloc,@object .section "__DATA, __objc_msgrefs, coalesced","aw",@progbits .weak l_objc_msgSend_fixup_alloc .align 16 l_objc_msgSend_fixup_alloc: .quad objc_msgSend_fixup .quad L_OBJC_METH_VAR_NAME_ .size l_objc_msgSend_fixup_alloc, 16 I've reimplemented objc_msgSend_fixup as a function (id objc_msgSend_fixup(id self, SEL op, ...)) which returns nil (just to see what happens), but this function isn't even being called (the program crashes before calling it). So, my question is, what is callq *l_objc_msgSend_fixup_alloc supposed to do and what is objc_msgSend_fixup (after l_objc_msgSend_fixup_alloc:) supposed to be (a function or an object)? Edit To better explain, I'm not linking my source file against the objc library. What I'm trying to do is implement some parts of the libray, just to see how it works. Here is an approach of what I've done: #include <stdio.h> #include <objc/runtime.h> @interface MyClass { } +(id) alloc; @end @implementation MyClass +(id) alloc { // alloc the object return nil; } @end id objc_msgSend_fixup(id self, SEL op, ...) { printf("Calling objc_msgSend_fixup()...\n"); // looks for the method implementation for SEL in self's vtable return nil; // Since this is just a test, this function doesn't need to do that } int main(int argc, char *argv[]) { MyClass *m; m = [MyClass alloc]; // At this point, according to the assembly code generated // objc_msgSend_fixup should be called. So, the program should, at least, print // "Calling objc_msgSend_fixup()..." on the screen, but it crashes before // objc_msgSend_fixup() is called... return 0; } If the runtime needs to access the object's vtable to find the correct method to call, what is the function which actually does this? I think it is objc_msgSend_fixup, in this case. So, when objc_msgSend_fixup is called, it receives an object as one of its parameters, and, if this object hasn't been initialized, the function fails. So, I've implemented my own version of objc_msgSend_fixup. According to the assembly source above, it should be called. It doesn't matter if the function is actually looking for the implementation of the selector passed as parameter. I just want objc_msgSend_lookup to be called. But, it's not being called, that is, the function that looks for the object's data is not even being called, instead of being called and cause a fault (because it returns a nil (which, by the way, doesn't matter)). The program seg fails before objc_msgSend_lookup is called...

    Read the article

  • Creating keystore for jarsigner programmatically

    - by skayred
    I'm trying to generate keystore with certificate to use it with JarSigner. Here is my code: System.out.println("Keystore generation..."); Security.addProvider(new BouncyCastleProvider()); String domainName = "example.org"; KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA"); SecureRandom random = SecureRandom.getInstance("SHA1PRNG", "SUN"); keyGen.initialize(1024, random); KeyPair pair = keyGen.generateKeyPair(); X509V3CertificateGenerator v3CertGen = new X509V3CertificateGenerator(); int serial = new SecureRandom().nextInt(); v3CertGen.setSerialNumber(BigInteger.valueOf(serial < 0 ? -1 * serial : serial)); v3CertGen.setIssuerDN(new X509Principal("CN=" + domainName + ", OU=None, O=None L=None, C=None")); v3CertGen.setNotBefore(new Date(System.currentTimeMillis() - 1000L * 60 * 60 * 24 * 30)); v3CertGen.setNotAfter(new Date(System.currentTimeMillis() + (1000L * 60 * 60 * 24 * 365*10))); v3CertGen.setSubjectDN(new X509Principal("CN=" + domainName + ", OU=None, O=None L=None, C=None")); v3CertGen.setPublicKey(pair.getPublic()); v3CertGen.setSignatureAlgorithm("MD5WithRSAEncryption"); X509Certificate PKCertificate = v3CertGen.generateX509Certificate(pair.getPrivate()); FileOutputStream fos = new FileOutputStream("/Users/dmitrysavchenko/testCert.cert"); fos.write(PKCertificate.getEncoded()); fos.close(); KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); char[] password = "123".toCharArray(); ks.load(null, password); ks.setCertificateEntry("hive", PKCertificate); fos = new FileOutputStream("/Users/dmitrysavchenko/hive-keystore.pkcs12"); ks.store(fos, password); fos.close(); It works, but when I'm trying to sign my JAR with this keystore, I get the following error: jarsigner: Certificate chain not found for: hive. hive must reference a valid KeyStore key entry containing a private key and corresponding public key certificate chain. I've discovered that there must be a private key, but I don't know how to add it to certificate. Can you help me?

    Read the article

  • compiling a program to run in DOS mode

    - by dygi
    I wrote a simple program to run in DOS mode. Everything works under the emulated console in Win XP / Vista / Seven, but not in DOS. The error says: this program cannot be run in DOS mode. I wonder whether that is a problem with compiler flags or something bigger. For programming I use Code::Blocks v8.02 with these compilation settings: -Wall -W -pedantic -pedantic-errors in Project \ Build options \ Compiler settings. I've tried a clean DOS mode, booting from CD, and also setting up DOS in a virtual machine. The same error appears. Should I turn on some more compiler flags? Some specific 386 / 486 optimizations? UPDATE OK, I've downloaded, installed and configured DJGPP, and even resolved some problems with libs and includes. I still have two questions. 1) I can't compile code which calls _strdate and _strtime. I've double-checked the includes; MSDN says it needs time.h, but the error still says: _strdate was not declared in this scope. I even tried to add std::_strdate, but then I have 4, not 2, errors saying the same. 2) The second piece of code is about gotoxy, and it looks like this: #include <windows.h> void gotoxy(int x, int y) { COORD position; position.X = x; position.Y = y; SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), position); } The error says there is no windows.h, so I've put it in place, but then there are many more errors saying something is missing from windows.h. I suppose it won't work because this function is strictly for Windows, right? Is there any way to write a similar gotoxy for DOS? UPDATE2 1) Solved using time() instead of _strdate() and _strtime(); here's the code: time_t rawtime; struct tm * timeinfo; char buffer [20]; time ( &rawtime ); timeinfo = localtime ( &rawtime ); strftime (buffer,80,"%Y.%m.%d %H:%M:%S\0",timeinfo); string myTime(buffer); It now compiles under DJGPP. UPDATE3 Still needed to solve the code using gotoxy - I replaced it with some other code that compiles (under DJGPP). Thank you all for the help. I just learnt some new things about compiling (flags, older toolchains like DJGPP and OpenWatcom) and refreshed my memory setting up DOS to work :--)
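
    For the gotoxy part, one DOS-friendly approach is sketched below. It assumes DJGPP, whose <conio.h> already provides a Borland-style gotoxy(); the non-DJGPP fallback uses ANSI escape codes, which under plain DOS require ANSI.SYS to be loaded. The goto_xy wrapper name is made up for the example.

    ```cpp
    // Sketch: a gotoxy that works for a DOS build (DJGPP) and falls back to
    // ANSI escapes elsewhere.  Assumes DJGPP's <conio.h>; the escape-code
    // branch needs ANSI.SYS under plain DOS.
    #include <cstdio>
    #ifdef __DJGPP__
    #include <conio.h>                     // Borland-style gotoxy(), 1-based coords
    void goto_xy(int x, int y) { gotoxy(x, y); }
    #else
    void goto_xy(int x, int y) {
        std::printf("\x1b[%d;%dH", y, x);  // ESC[row;colH
        std::fflush(stdout);
    }
    #endif

    int main() {
        goto_xy(10, 5);
        std::printf("hello at column 10, row 5\n");
        return 0;
    }
    ```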

    Read the article

  • Writing Device Drivers for Microcontrollers, where to define IO Port pins?

    - by volting
    I always seem to encounter this dilemma when writing low-level code for MCUs. I never know where to declare pin definitions so as to make the code as reusable as possible. In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC. I'm unsure how/where I should declare the pin definitions for the CS (chip select) and LDAC (data latch) lines of the DAC. At the moment they're declared in the header file for the driver. I've done a lot of research trying to figure out the best approach but haven't really found anything. I basically want to know what the best practices are... if there are some books worth reading or online information, examples, etc., any recommendations would be welcome. Here is just a snippet of the driver so you get the idea: /** @brief This function is used to write a 16bit data word to DAC B -12 data bit plus 4 configuration bits @param dac_data A 12bit word @param ip_buf_unbuf_select Input Buffered/unbuffered select bit. Buffered = 1; Unbuffered = 0 @param gain_select Output Gain Selection bit. 1 = 1x (VOUT = VREF * D/4096). 0 =2x (VOUT = 2 * VREF * D/4096) */ void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select) { unsigned char low_byte=0, high_byte=0; CS = 0; /**Select the chip*/ high_byte |= ((0x01 << 7) | (0x01 << 4)); /**Set bit to select DAC A and Set SHDN bit high for DAC A active operation*/ if(ip_buf_unbuf_select) high_byte |= (0x01 << 6); if(gain_select) high_byte |= (0x01 << 5); high_byte |= ((dac_data >> 8) & 0x0F); low_byte |= dac_data; SPI_master_byte(high_byte); SPI_master_byte(low_byte); CS = 1; LDAC = 0; /**Latch the Data*/ LDAC = 1; }
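
    One common answer to the "where do the pins live" question is: not in the driver at all. The sketch below (hypothetical names throughout) injects the CS/LDAC operations from board-specific code, so the driver source never changes between boards; the printf stubs stand in for the real SFR writes so the example compiles on a desktop compiler.

    ```cpp
    // Sketch: decouple the DAC driver from board-specific pins by passing
    // in a small structure of pin-control callbacks.
    #include <cstdio>

    typedef struct {
        void (*set_cs)(int level);    /* drive the chip-select line */
        void (*set_ldac)(int level);  /* drive the data-latch line  */
    } dac_pins_t;

    /* Driver code: only ever talks to the injected pins. */
    static void dac_write_word(const dac_pins_t* pins, unsigned short word) {
        pins->set_cs(0);
        std::printf("  shifting out 0x%04X over SPI\n", word);  /* SPI stand-in */
        pins->set_cs(1);
        pins->set_ldac(0);            /* latch the data */
        pins->set_ldac(1);
    }

    /* Board-specific part: the only place that knows the real pins. */
    static void board_cs(int level)   { std::printf("CS   <- %d\n", level); }
    static void board_ldac(int level) { std::printf("LDAC <- %d\n", level); }

    int main() {
        dac_pins_t pins = { board_cs, board_ldac };
        dac_write_word(&pins, 0x7123);
        return 0;
    }
    ```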

    Read the article

  • Getting RSSIValue from IOBluetoothHostController

    - by Tanner Ezell
    I'm trying to write a simple application that gathers the RSSIValue and displays it via NSLog, my code is as follows: #import <Foundation/Foundation.h> #import <Cocoa/Cocoa.h> #import <IOBluetooth/objc/IOBluetoothDeviceInquiry.h> #import <IOBluetooth/objc/IOBluetoothDevice.h> #import <IOBluetooth/objc/IOBluetoothHostController.h> #import <IOBluetooth/IOBluetoothUtilities.h> @interface getRSSI: NSObject {} -(void) readRSSIForDeviceComplete:(id)controller device:(IOBluetoothDevice*)device info:(BluetoothHCIRSSIInfo*)info error:(IOReturn)error; @end @implementation getRSSI - (void) readRSSIForDeviceComplete:(id)controller device:(IOBluetoothDevice*)device info:(BluetoothHCIRSSIInfo*)info error:(IOReturn)error { if (error != kIOReturnSuccess) { NSLog(@"readRSSIForDeviceComplete return error"); CFRunLoopStop(CFRunLoopGetCurrent()); } if (info->handle == kBluetoothConnectionHandleNone) { NSLog(@"readRSSIForDeviceComplete no handle"); CFRunLoopStop(CFRunLoopGetCurrent()); } NSLog(@"RSSI = %i dBm ", info->RSSIValue); [NSThread sleepUntilDate: [NSDate dateWithTimeIntervalSinceNow: 5]]; [device closeConnection]; [device openConnection]; [controller readRSSIForDevice:device]; } @end int main (int argc, const char * argv[]) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSLog(@"start"); IOBluetoothHostController *hci = [IOBluetoothHostController defaultController]; NSString *addrStr = @"xx:xx:xx:xx:xx:xx"; BluetoothDeviceAddress addr; IOBluetoothNSStringToDeviceAddress(addrStr, &addr); IOBluetoothDevice *device = [[IOBluetoothDevice alloc] init]; device = [IOBluetoothDevice withAddress:&addr]; [device retain]; [device openConnection]; getRSSI *rssi = [[getRSSI alloc] init]; [hci setDelegate:rssi]; [hci readRSSIForDevice:device]; CFRunLoopRun(); [hci release]; [rssi release]; [pool release]; return 0; } The problem I am facing is that the readRSSIForDeviceComplete seems to work just fine, info passes along a value. The problem is that the RSSI value is drastically different from the one I can view from OS X via option clicking the bluetooth icon at the top. It is typical for my application to print off 1,2,-1,-8,etc while the menu displays -64 dBm, -66, -70, -42, etc. I would really appreciate some guidance.

    Read the article

  • Is there a standard way to encode a .NET string into javascript string for use in MS AJAX?

    - by Rich Andrews
    I'm trying to pass the output of a SQL Server exception to the client using the RegisterStartUpScript method of the MS ScriptManager. This works fine for some errors but when the exception contains single quotes the alert fails. I dont want to only escape single quotes though - Is there a standard function i can call to escape any special chars for use in Javascript? string scriptstring = "alert('" + ex.Message + "');"; ScriptManager.RegisterStartupScript(this, this.GetType(), "Alert", scriptstring , true); Thanks tpeczek, the code almost worked for me :) but with a slight amendment (the escaping of single quotes) it works a treat. I've included my amended version here... public class JSEncode { /// <summary> /// Encodes a string to be represented as a string literal. The format /// is essentially a JSON string. /// /// The string returned includes outer quotes /// Example Output: "Hello \"Rick\"!\r\nRock on" /// </summary> /// <param name="s"></param> /// <returns></returns> public static string EncodeJsString(string s) { StringBuilder sb = new StringBuilder(); sb.Append("\""); foreach (char c in s) { switch (c) { case '\'': sb.Append("\\\'"); break; case '\"': sb.Append("\\\""); break; case '\\': sb.Append("\\\\"); break; case '\b': sb.Append("\\b"); break; case '\f': sb.Append("\\f"); break; case '\n': sb.Append("\\n"); break; case '\r': sb.Append("\\r"); break; case '\t': sb.Append("\\t"); break; default: int i = (int)c; if (i < 32 || i > 127) { sb.AppendFormat("\\u{0:X04}", i); } else { sb.Append(c); } break; } } sb.Append("\""); return sb.ToString(); } } As mentioned below - original source: here

    Read the article

  • LsaAddAccountRights not working for me

    - by SteveL
    Using: Delphi 2010 and the JEDI Windows API and JWSCL I am trying to assign the Logon As A Service privilege to a user using LsaAddAccountRights function but it does not work ie. after the function returns, checking in Group Policy Editor shows that the user still does not have the above mentioned privilege. I'm running the application on Windows XP. Would be glad if someone could point out what is wrong in my code: unit Unit1; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls, JwaWindows, JwsclSid; type TForm1 = class(TForm) Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form1: TForm1; implementation {$R *.dfm} function AddPrivilegeToAccount(AAccountName, APrivilege: String): DWORD; var lStatus: TNTStatus; lObjectAttributes: TLsaObjectAttributes; lPolicyHandle: TLsaHandle; lPrivilege: TLsaUnicodeString; lSid: PSID; lSidLen: DWORD; lTmpDomain: String; lTmpDomainLen: DWORD; lTmpSidNameUse: TSidNameUse; lPrivilegeWStr: String; begin ZeroMemory(@lObjectAttributes, SizeOf(lObjectAttributes)); lStatus := LsaOpenPolicy(nil, lObjectAttributes, POLICY_LOOKUP_NAMES, lPolicyHandle); if lStatus <> STATUS_SUCCESS then begin Result := LsaNtStatusToWinError(lStatus); Exit; end; try lTmpDomainLen := DNLEN; // In 'clear code' this should be get by LookupAccountName SetLength(lTmpDomain, lTmpDomainLen); lSidLen := SECURITY_MAX_SID_SIZE; GetMem(lSid, lSidLen); try if LookupAccountName(nil, PChar(AAccountName), lSid, lSidLen, PChar(lTmpDomain), lTmpDomainLen, lTmpSidNameUse) then begin lPrivilegeWStr := APrivilege; lPrivilege.Buffer := PChar(lPrivilegeWStr); lPrivilege.Length := Length(lPrivilegeWStr) * SizeOf(Char); lPrivilege.MaximumLength := lPrivilege.Length; lStatus := LsaAddAccountRights(lPolicyHandle, lSid, @lPrivilege, 1); Result := LsaNtStatusToWinError(lStatus); end else Result := GetLastError; finally FreeMem(lSid); end; finally LsaClose(lPolicyHandle); end; end; procedure TForm1.Button1Click(Sender: TObject); begin AddPrivilegeToAccount('Sam', 'SeServiceLogonRight'); end; end. Thanks in advance.

    Read the article

  • Powershell $LastExitCode=0 but $?=False. Redirecting stderr to stdout gives NativeCommandError

    - by Colonel Panic
    Can anyone explain Powershell's surprising behaviour in the second example below? First, a example of sane behaviour: PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?" Hello from standard error $LastExitCode=0 and $?=True No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They equal to True and 0 respectively, as expected. However, if I ask Powershell to redirect standard error to standard output over the first command, I get a NativeCommandError: PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?" cmd.exe : Hello from standard error At line:1 char:4 + cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?" + CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError $LastExitCode=0 and $?=False My first question, why the NativeCommandError ? Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? Powershell's docs about_Automatic_Variables don't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0 but my example contradicts that. Here's how I came across this behaviour in the real-world (simplified). It really is FUBAR. I was calling one Powershell script from another. The inner script: cmd /c "echo Hello from standard error 1>&2" if (! $?) { echo "Job failed. Sending email.." exit 1 } # do something else Running this simply .\job.ps1, it works fine, no email is sent. However, I was calling it from another Powershell script, logging to a file .\job.ps1 2>&1 > log.txt. In this case, an email is sent! Here, the act of observing a phenomenon changes its outcome. This feels like quantum physics rather than scripting! [Interestingly: .\job.ps1 2>&1 may or not blow up depending on where you run it]

    Read the article

  • Find window text and save txt to a file named after it - won't work

    - by blood
    Hi, my code won't work and I don't know why. The point of the code is to find the top window and save a text file whose name matches the text on the window's top bar (the title bar, I think), then save some data to that text file. But every time I try to use it, the write fails. If I set the name of the text file beforehand, so it won't change, it will write the data to the file; but if I don't set it beforehand it will create the text document without writing anything to it, or sometimes it will just write numbers for the name (I think it's the handle number) and then write the data. It's odd. Can anyone help? #include <iostream> #include <windows.h> #include <fstream> #include <string> #include <sstream> #include <time.h> using namespace std; string header_str = ("NULL"); #define DTTMFMT "%Y-%m-%d %H:%M:%S " #define DTTMSZ 21 char buff[DTTMSZ]; fstream filestr; string ff = ("C:\\System logs\\txst.txt"); TCHAR buf[255]; int main() { GetWindowText(GetForegroundWindow(), buf, 255); stringstream header(stringstream::in | stringstream::out); header.flush(); header << ("C:\\System logs\\"); header << buf; header << (".txt"); header_str = header.str(); ff = header_str; cout << header_str << "\n"; filestr.open (ff.c_str(), fstream::in | fstream::out | fstream::app | ios_base::binary | ios_base::out); filestr << "dfg"; filestr.close(); Sleep(10000); return 0; }
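
    One thing worth checking: a window title can contain characters that are illegal in Windows file names (\ / : * ? " < > |), and opening an fstream on such a path fails silently, which fits the "hard-coded file name works" symptom. Below is a minimal sketch of filtering the title first; the sanitize helper is invented for the example, and GetWindowTextA is used so everything stays in narrow strings.

    ```cpp
    // Sketch: replace characters that are not allowed in Windows file names
    // before building the log path from the window title.
    #include <windows.h>
    #include <fstream>
    #include <string>

    std::string sanitize(const std::string &title) {
        const std::string bad = "\\/:*?\"<>|";
        std::string out = title;
        for (std::string::size_type i = 0; i < out.size(); ++i)
            if (bad.find(out[i]) != std::string::npos)
                out[i] = '_';
        return out;
    }

    int main() {
        char buf[255] = {0};
        GetWindowTextA(GetForegroundWindow(), buf, sizeof(buf));
        std::string path = "C:\\System logs\\" + sanitize(buf) + ".txt";
        std::ofstream log(path.c_str(), std::ios::app);
        if (log)
            log << "dfg" << std::endl;
        return 0;
    }
    ```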

    Read the article

  • Inserting newlines into a GtkTextView widget (GTK+ programming)

    - by Mark Roberts
    I've got a button which when clicked copies and appends the text from a GtkEntry widget into a GtkTextView widget. (This code is a modified version of an example found in the "The Text View Widget" chapter of Foundations of GTK+ Development.) I'm looking to insert a newline character before the text which gets copied and appended, such that each line of text will be on its own line in the GtkTextView widget. How would I do this? I'm brand new to GTK+. Here's the code sample: #include <gtk/gtk.h> typedef struct { GtkWidget *entry, *textview; } Widgets; static void insert_text (GtkButton*, Widgets*); int main (int argc, char *argv[]) { GtkWidget *window, *scrolled_win, *hbox, *vbox, *insert; Widgets *w = g_slice_new (Widgets); gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_window_set_title (GTK_WINDOW (window), "Text Iterators"); gtk_container_set_border_width (GTK_CONTAINER (window), 10); gtk_widget_set_size_request (window, -1, 200); w->textview = gtk_text_view_new (); w->entry = gtk_entry_new (); insert = gtk_button_new_with_label ("Insert Text"); g_signal_connect (G_OBJECT (insert), "clicked", G_CALLBACK (insert_text), (gpointer) w); scrolled_win = gtk_scrolled_window_new (NULL, NULL); gtk_container_add (GTK_CONTAINER (scrolled_win), w->textview); hbox = gtk_hbox_new (FALSE, 5); gtk_box_pack_start_defaults (GTK_BOX (hbox), w->entry); gtk_box_pack_start_defaults (GTK_BOX (hbox), insert); vbox = gtk_vbox_new (FALSE, 5); gtk_box_pack_start (GTK_BOX (vbox), scrolled_win, TRUE, TRUE, 0); gtk_box_pack_start (GTK_BOX (vbox), hbox, FALSE, TRUE, 0); gtk_container_add (GTK_CONTAINER (window), vbox); gtk_widget_show_all (window); gtk_main(); return 0; } /* Insert the text from the GtkEntry into the GtkTextView. */ static void insert_text (GtkButton *button, Widgets *w) { GtkTextBuffer *buffer; GtkTextMark *mark; GtkTextIter iter; const gchar *text; buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview)); text = gtk_entry_get_text (GTK_ENTRY (w->entry)); mark = gtk_text_buffer_get_insert (buffer); gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark); gtk_text_buffer_insert (buffer, &iter, text, -1); } You can compile this command (assuming the file is named file.c): gcc file.c -o file `pkg-config --cflags --libs gtk+-2.0` Thanks everybody!
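
    If the goal is simply one entry per line, a lighter variant of the insert_text callback above (a sketch, relying on the Widgets struct and setup from the program above) appends at the buffer's end iterator and prepends the newline only once the buffer already has content:

    ```c
    /* Sketch: append each entry on its own line.  Prepend "\n" only when the
     * buffer already contains text, so the first line has no leading blank. */
    static void insert_text (GtkButton *button, Widgets *w)
    {
        GtkTextBuffer *buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview));
        const gchar *text = gtk_entry_get_text (GTK_ENTRY (w->entry));
        GtkTextIter end;

        gtk_text_buffer_get_end_iter (buffer, &end);
        if (gtk_text_buffer_get_char_count (buffer) > 0)
            gtk_text_buffer_insert (buffer, &end, "\n", -1);
        gtk_text_buffer_insert (buffer, &end, text, -1);
    }
    ```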

    Read the article

  • How to access and work with XML from API in C#

    - by Jarek
    My goal is to pull XML data from the API and load it to a sql server database. The frist step I'm attempting here is to access the data and display it. Once I get this to work I'll loop through each row and insert the values into a sql server database. When I try to run the code below nothing happens and when I paste the url directly into the browser I get this error "2010-03-08 04:24:17 Wallet exhausted: retry after 2010-03-08 05:23:58. 2010-03-08 05:23:58" To me it seems that every iteration of the foreach loop makes a call to the site and I get blocked for an hour. Am I retrieving data from the API in an incorrect manner? Is there some way to load the data into memory or an array then loop through that? Here's the bit of code I hacked together. using System; using System.Data.SqlClient; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Xml; using System.Data; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { try { string userID = "123"; string apiKey = "abc456"; string characterID = "789"; string url = "http://api.eve-online.com/char/WalletTransactions.xml.aspx?userID=" + userID + "&apiKey=" + apiKey + "&characterID=" + characterID; XmlDocument xmldoc = new XmlDocument(); xmldoc.Load(url); XmlNamespaceManager xnm1 = new XmlNamespaceManager(xmldoc.NameTable); XmlNodeList nList1 = xmldoc.SelectNodes("result/rowset/row", xnm1); foreach (XmlNode xNode in nList1) { Response.Write(xNode.InnerXml + "<br />"); } } catch (SqlException em) { Response.Write(em.Message); } } } Here's a sample of the xml <eveapi version="2"> <currentTime>2010-03-06 17:38:35</currentTime> <result> <rowset name="transactions" key="transactionID" columns="transactionDateTime,transactionID,quantity,typeName,typeID,price,clientID,clientName,stationID,stationName,transactionType,transactionFor"> <row transactionDateTime="2010-03-06 17:16:00" transactionID="1343566007" quantity="1" typeName="Co-Processor II" typeID="3888" price="1122999.00" clientID="1404318579" clientName="unseenstrike" stationID="60011572" stationName="Osmeden IX - Moon 6 - University of Caille School" transactionType="sell" transactionFor="personal" /> <row transactionDateTime="2010-03-06 17:15:00" transactionID="1343565894" quantity="1" typeName="Co-Processor II" typeID="3888" price="1150000.00" clientID="1404318579" clientName="unseenstrike" stationID="60011572" stationName="Osmeden IX - Moon 6 - University of Caille School" transactionType="sell" transactionFor="personal" /> </rowset> </result> <cachedUntil>2010-03-06 17:53:35</cachedUntil> </eveapi>

    Read the article

  • Multiline editable textarea in SVG

    - by Timo
    I'm trying to implement multiline editable textfield in SVG. I have the following code in http://jsfiddle.net/ca4d3/ : <svg width="1000" height="1000" overflow="scroll"> <g transform="rotate(5)"> <rect width="300" height="400" fill="#22DD22" fill-opacity="0.5"/> </g> <foreignObject x="10" y="10" overflow="visible" width="10000" height="10000" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"> <p style="display:table-cell;padding:10px;border:1px solid red; background-color:white;opacity:0.5;font-family:Verdana; font-size:20px;white-space: pre; word-wrap: normal; overflow: visible; overflow-y: visible; overflow-x:visible;" contentEditable="true" xmlns="http://www.w3.org/1999/xhtml"> Write here some text. Be smart and select some word. If you wanna be really COOL, paste here something cool! </p> </foreignObject> </svg> In newest Chrome, Safari and Firefox the code works in some way, but in Opera and IE 9 not. The goal is that: 0) Works in newest Chrome, Safari, Firefox, Opera and IE and if ever possible in some pads. 1) White-spaces are preserved and text wraps only on newline char (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *). 2) The textfield is editable (in the same reliable and stabile way as textareas and contenteditable p elements in html) and height and width is expanded to fit text (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *). 3) Texfield can be transformed (rotated, skewed, translated) while maintaining text editability (Tested rotation, but not work in any browser *). EDIT: Foreignobject rotation works on Firefox 15.0.1, but not in Safari 5.1.7 (6534.57.2), Chrome 22.0.1229.79, Opera 12.02, IE 9. Tested on Mac OS X 10.6.8. 4) Textfield can be clipped and masked while not necessarily maintaining text editability (not yet tested). *) using above code These all can be achieved using Flash, but Flash has so severe problems that it is not suitable for my purposes (after every little change in code, all have to be compiled again using Flex, which is slow, font size has limits, tracking technique is pixeloriented, not relative to em size etc.) and there still are differences across platforms. And I want to give a try to SVG! GUESTION: Can I achieve my goals 0-4 with current SVG support in browsers? Is coming SVG 2.0 for some help in this case? EDIT: Changed display:table to display:table-cell (and added new jsfiddle), because display:table made the field to loses focus when pressed arrow-up on first text row.

    Read the article

  • Possible bug with tabified QDockWidget and setFloating()

    - by krunk
    I've run into some odd behavior with tabified QDockWidgets, below is an example program with comments that demonstrates the behavior. Is this a bug or is it expected behavior and I'm missing some nuance in QDockWidget that causes this? Directly, since this does not work, how would one properly "undock" a hidden QDockWidget then display it? #include <QApplication> #include <QMainWindow> #include <QAction> #include <QDockWidget> #include <QMenu> #include <QSize> #include <QMenuBar> using namespace std; int main (int argc, char* argv[]) { QApplication app(argc, argv); QMainWindow window; QDockWidget dock1(&window); QDockWidget dock2(&window); QMenu menu("View"); dock1.setAllowedAreas(Qt::LeftDockWidgetArea | Qt::RightDockWidgetArea); dock2.setAllowedAreas(Qt::LeftDockWidgetArea | Qt::RightDockWidgetArea); dock1.setWindowTitle("Dock One"); dock2.setWindowTitle("Dock Two"); window.addDockWidget(Qt::RightDockWidgetArea, &dock1); window.addDockWidget(Qt::RightDockWidgetArea, &dock2); window.menuBar()->addMenu(&menu); window.setMinimumSize(QSize(800, 600)); window.tabifyDockWidget(&dock1, &dock2); dock1.hide(); dock2.hide(); menu.addAction(dock1.toggleViewAction()); menu.addAction(dock2.toggleViewAction()); window.show(); // Below is where the oddness starts. It seems to only exhibit the // behavior if the dock widgets are tabified. // Odd behavior here // This does not work. the window never shows, though its menu action shows // checked. Not only does this window not show up, but all toggle actions // for all dock windows (e.g. dock1 and dock2) are broken for the duration // of the application loop. // dock1.setFloating(true); // dock1.show(); // This does work. . . of course only if you do _not_ run the above first. // however, you can often get a little lag or "blip" in the rendering as // the dock is shown docked before setFloating is set to true. dock1.show(); dock1.setFloating(true); return app.exec(); }

    Read the article

  • EXC_BAD_ACCESS when I change moviePlayer contentURL

    - by Bruno
    Hello, In few words, my application is doing that : 1) My main view (MovieListController) has some video thumbnails and when I tap on one, it displays the moviePlayer (MoviePlayerViewController) : MovieListController.h : @interface MoviePlayerViewController : UIViewController <UITableViewDelegate>{ UIView *viewForMovie; MPMoviePlayerController *player; } @property (nonatomic, retain) IBOutlet UIView *viewForMovie; @property (nonatomic, retain) MPMoviePlayerController *player; - (NSURL *)movieURL; @end MovieListController.m : MoviePlayerViewController *controllerTV = [[MoviePlayerViewController alloc] initWithNibName:@"MoviePlayerViewController" bundle:nil]; controllerTV.delegate = self; controllerTV.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController: controllerTV animated: YES]; [controllerTV release]; 2) In my moviePlayer, I initialize the video I want to play MoviePlayerViewController.m : @implementation MoviePlayerViewController @synthesize player; @synthesize viewForMovie; - (void)viewDidLoad { NSLog(@"start"); [super viewDidLoad]; self.player = [[MPMoviePlayerController alloc] init]; self.player.view.frame = self.viewForMovie.bounds; self.player.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; [self.viewForMovie addSubview:player.view]; self.player.contentURL = [self movieURL]; } - (void)dealloc { NSLog(@"dealloc TV"); [player release]; [viewForMovie release]; [super dealloc]; } -(NSURL *)movieURL { NSBundle *bundle = [NSBundle mainBundle]; NSString *moviePath = [bundle pathForResource:@"FR_Tribord_Surf camp_100204" ofType:@"mp4"]; if (moviePath) { return [NSURL fileURLWithPath:moviePath]; } else { return nil; } } - It's working good, my movie is display My problem : When I go back to my main view : - (void) returnToMap: (MoviePlayerViewController *) controller { [self dismissModalViewControllerAnimated: YES]; } And I tap in a thumbnail to display again the moviePlayer (MoviePlayerViewController), I get a *Program received signal: “EXC_BAD_ACCESS”.* In my debugger I saw that it's stopping on the thread "main" : // // main.m // MoviePlayer // // Created by Eric Freeman on 3/27/10. // Copyright Apple Inc 2010. All rights reserved. // #import <UIKit/UIKit.h> int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, nil); //EXC_BAD_ACCESS [pool release]; return retVal; } If I comment self.player.contentURL = [self movieURL]; it's working, but when I let it, iI have this problem. I read that it's due to null pointer or memory problem but I don't understand why it's working the first time and not the second time. I release my object in dealloc method. Thanks for your help ! Bruno.

    Read the article

  • Turning temporary stringstream to c_str() in single statement

    - by AshleysBrain
    Consider the following function: void f(const char* str); Suppose I want to generate a string using stringstream and pass it to this function. If I want to do it in one statement, I might try: f((std::ostringstream() << "Value: " << 5).str().c_str()); // error This gives an error: 'str()' is not a member of 'basic_ostream'. OK, so operator<< is returning ostream instead of ostringstream - how about casting it back to an ostringstream? 1) Is this cast safe? f(static_cast<std::ostringstream&>(std::ostringstream() << "Value: " << 5).str().c_str()); // incorrect output Now with this, it turns out for the operator<<("Value: ") call, it's actually calling ostream's operator<<(void*) and printing a hex address. This is wrong, I want the text. 2) Why does operator<< on the temporary std::ostringstream() call the ostream operator? Surely the temporary has a type of 'ostringstream' not 'ostream'? I can cast the temporary to force the correct operator call too! f(static_cast<std::ostringstream&>(static_cast<std::ostringstream&>(std::ostringstream()) << "Value: " << 5).str().c_str()); This appears to work and passes "Value: 5" to f(). 3) Am I relying on undefined behavior now? The casts look unusual. I'm aware the best alternative is something like this: std::ostringstream ss; ss << "Value: " << 5; f(ss.str().c_str()); ...but I'm interested in the behavior of doing it in one line. Suppose someone wanted to make a (dubious) macro: #define make_temporary_cstr(x) (static_cast<std::ostringstream&>(static_cast<std::ostringstream&>(std::ostringstream()) << x).str().c_str()) // ... f(make_temporary_cstr("Value: " << 5)); Would this function as expected?
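
    A common way to keep the call site a one-liner without any casts is to build the stream inside a small helper that returns a std::string, whose temporary lives until the end of the full expression, so the c_str() pointer stays valid for the duration of the call. This is only a sketch and the helper name msg is made up:

    ```cpp
    // Sketch: build the message in a helper instead of casting the temporary stream.
    #include <cstdio>
    #include <sstream>
    #include <string>

    void f(const char* str) { std::puts(str); }

    template <typename T>
    std::string msg(const T& v) {
        std::ostringstream ss;
        ss << "Value: " << v;
        return ss.str();   // returned string owns the buffer
    }

    int main() {
        f(msg(5).c_str());   // prints "Value: 5"; the temporary string lives
                             // until the end of this full expression
        return 0;
    }
    ```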

    Read the article

  • QValidator for hex input

    - by Evan Teran
    I have a Qt widget which should only accept a hex string as input. It is very simple to restrict the input characters to [0-9A-Fa-f], but I would like to have it display with a delimiter between "bytes" so for example if the delimiter is a space, and the user types 0011223344 I would like the line edit to display 00 11 22 33 44 Now if the user presses the backspace key 3 times, then I want it to display 00 11 22 3. I almost have what i want, so far there is only one subtle bug involving using the delete key to remove a delimiter. Does anyone have a better way to implement this validator? Here's my code so far: class HexStringValidator : public QValidator { public: HexStringValidator(QObject * parent) : QValidator(parent) {} public: virtual void fixup(QString &input) const { QString temp; int index = 0; // every 2 digits insert a space if they didn't explicitly type one Q_FOREACH(QChar ch, input) { if(std::isxdigit(ch.toAscii())) { if(index != 0 && (index & 1) == 0) { temp += ' '; } temp += ch.toUpper(); ++index; } } input = temp; } virtual State validate(QString &input, int &pos) const { if(!input.isEmpty()) { // TODO: can we detect if the char which was JUST deleted // (if any was deleted) was a space? and special case this? // as to not have the bug in this case? const int char_pos = pos - input.left(pos).count(' '); int chars = 0; fixup(input); pos = 0; while(chars != char_pos) { if(input[pos] != ' ') { ++chars; } ++pos; } // favor the right side of a space if(input[pos] == ' ') { ++pos; } } return QValidator::Acceptable; } }; For now this code is functional enough, but I'd love to have it work 100% as expected. Obviously the ideal would be the just separate the display of the hex string from the actual characters stored in the QLineEdit's internal buffer but I have no idea where to start with that and I imagine is a non-trivial undertaking. In essence, I would like to have a Validator which conforms to this regex: "[0-9A-Fa-f]( [0-9A-Fa-f])*" but I don't want the user to ever have to type a space as delimiter. Likewise, when editing what they types, the spaces should be managed implicitly.
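
    For comparison, if a fixed maximum length is acceptable, QLineEdit's input mask can manage the separators on its own, with no custom validator at all. A minimal sketch follows (assuming at most 8 bytes purely for illustration; 'h' is the mask character for an optional hex digit):

    ```cpp
    // Sketch: a hex line edit that shows "XX XX XX ..." via an input mask
    // instead of a QValidator.  Assumes a fixed maximum of 8 byte pairs.
    #include <QApplication>
    #include <QLineEdit>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        QLineEdit edit;
        edit.setInputMask("hh hh hh hh hh hh hh hh;_");  // 'h' = optional hex digit
        edit.show();
        return app.exec();
    }
    ```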

    Read the article

  • Blackberry ListField Text Wrapping - only two lines.

    - by Diego Tori
    Within my ListField, I want to be able to take any given long String, and just be able to wrap the first line within the width of the screen, and just take the remaining string and display it below and ellipsis the rest. Right now, this is what I'm using to detect wrapping within my draw paint call: int totalWidth = 0; int charWidth = 0; int lastIndex = 0; int spaceIndex = 0; int lineIndex = 0; String firstLine = ""; String secondLine = ""; boolean isSecondLine = false; for (int i = 0; i < longString.length(); i++){ charWidth = Font.getDefault().getAdvance(String.valueOf(longString.charAt(i))); //System.out.println("char width: " + charWidth); if(longString.charAt(i) == ' ') spaceIndex = i; if((charWidth + totalWidth) > (this.getWidth()-32)){ //g.drawText(longString.substring(lastIndex, spaceIndex), xpos, y +_padding, DrawStyle.LEFT, w - xpos); lineIndex++; System.out.println("current lines to draw: " + lineIndex); /*if (lineIndex = 2){ int idx = i; System.out.println("first line " + longString.substring(lastIndex, spaceIndex)); System.out.println("second line " + longString.substring(spaceIndex+1, longString.length())); }*/ //firstLine = longString.substring(lastIndex, spaceIndex); firstLine = longString.substring(0, spaceIndex); //System.out.println("first new line: " +firstLine); //isSecondLine=true; //xpos = 0; //y += Font.getDefault().getHeight(); i = spaceIndex + 1; lastIndex = i; System.out.println("Rest of string: " + longString.substring(lastIndex, longString.length())); charWidth = 0; totalWidth = 0; } totalWidth += charWidth; System.out.println("total width: " + totalWidth); //g.drawText(longString.substring(lastIndex, i+1), xpos, y + (_padding*3)+4, DrawStyle.ELLIPSIS, w - xpos); //secondLine = longString.substring(lastIndex, i+1); secondLine = longString.substring(lastIndex, longString.length()); //isSecondLine = true; } Now this does a great job of actually wrapping any given string (assuming the y values were properly offsetted and it only drew the text after the string width exceeded the screen width, as well as the remaining string afterwards), however, every time I try to get the first two lines, it always ends up returning the last two lines of the string if it goes beyond two lines. Is there a better way to do this sort of thing, since I am fresh out of ideas?

    Read the article

  • Can't get InputStream read to block...

    - by mark dufresne
    I would like the input stream read to block instead of reading end of stream (-1). Is there a way to configure the stream to do this? Here's my Servlet code: PrintWriter out = response.getWriter(); BufferedReader in = request.getReader(); try { String line; int loop = 0; while (loop < 20) { line = in.readLine(); lgr.log(Level.INFO, line); out.println("<" + loop + "html>"); Thread.sleep(1000); loop++; // } } catch (InterruptedException ex) { lgr.log(Level.SEVERE, null, ex); } finally { out.close(); } Here's my Midlet code: private HttpConnection conn; InputStream is; OutputStream os; private boolean exit = false; public void run() { String url = "http://localhost:8080/WebApplication2/NewServlet"; try { conn = (HttpConnection) Connector.open(url); is = conn.openInputStream(); os = conn.openOutputStream(); StringBuffer sb = new StringBuffer(); int c; while (!exit) { os.write("<html>\n".getBytes()); while ((c = is.read()) != -1) { sb.append((char) c); } System.out.println(sb.toString()); sb.delete(0, sb.length() - 1); try { Thread.sleep(1000); } catch (InterruptedException ex) { ex.printStackTrace(); } } os.close(); is.close(); conn.close(); } catch (IOException ex) { ex.printStackTrace(); } } I've tried InputStream.read, but it doesn't block either, it returns -1 as well. I'm trying to keep the I/O streams on either side alive. I want the servlet to wait for input, process the input, then send back a response. In the code above it should do this 20 times. thanks for any help

    Read the article

  • programming question

    - by shivam
    using System; using System.Data; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Data.SqlClient; namespace datasynchronization { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { string connectString = @"Data Source=MOON\SQL2005;Initial Catalog=databaseA;Integrated Security=True"; using (var srcCon = new SqlConnection(connectString)) //connection to source table { srcCon.Open();//source table connection open SqlCommand cmd = new SqlCommand();// sqlobject for source table cmd.Connection = srcCon; string connectionString = @"Data Source=MOON\SQL2005;Initial Catalog=databaseB;Integrated Security=True"; using (var tgtCon = new SqlConnection(connectionString)) //connection to target table { tgtCon.Open(); //target table connection open SqlCommand objcmd1 = new SqlCommand();//sqlobject for target table objcmd1.Connection = tgtCon; objcmd1.CommandText = "SELECT MAX(date) FROM Table_2"; //query to findout the max date from target table var maxdate = objcmd1.ExecuteScalar(); // store the value of max date into the variable maxdate cmd.CommandText = string.Format("SELECT id,date,name,city,salary,region FROM Table_1 where date >'{0}'", maxdate); //select query to fetch rows from source table using (var reader = cmd.ExecuteReader()) { SqlCommand objcmd = new SqlCommand(); objcmd.Connection = tgtCon; objcmd.CommandText = "INSERT INTO Table_2(id,date,name,city,salary,region)VALUES(@id,@date,@name,@city,@salary,@region)"; objcmd.Parameters.Add("@id", SqlDbType.Int); objcmd.Parameters.Add("@date", SqlDbType.DateTime); objcmd.Parameters.Add("@name", SqlDbType.NVarChar); objcmd.Parameters.Add("@city", SqlDbType.NVarChar); objcmd.Parameters.Add("@salary", SqlDbType.Int); objcmd.Parameters.Add("@region", SqlDbType.Char); while (reader.Read()) { var order1 = reader[0].ToString(); var order2 = reader[1].ToString(); var order3 = reader[2].ToString(); var order4 = reader[3].ToString(); var order5 = reader[4].ToString(); var order6 = reader[5].ToString(); objcmd.Parameters["@id"].Value = order1; objcmd.Parameters["@date"].Value = order2; objcmd.Parameters["@name"].Value = order3; objcmd.Parameters["@city"].Value = order4; objcmd.Parameters["@salary"].Value = order5; objcmd.Parameters["@region"].Value = order6; objcmd.ExecuteNonQuery(); } } tgtCon.Close(); } srcCon.Close(); } } } } how can i organize the above written code in an efficient way?

    Read the article

  • Establishing a tcp connection from within a DLL

    - by Nicholas Hollander
    I'm trying to write a piece of code that will allow me to establish a TCP connection from within a DLL file. Here's my situation: I have a ruby application that needs to be able to send and receive data over a socket, but I can not access the native ruby socket methods because of the environment in which it will be running. I can however access a DLL file and run the functions within that, so I figured I would create a wrapper for winsock. Unfortunately, attempting to take a piece of code that should connect to a TCP socket in a normal C++ application throws a slew of LNK2019 errors that I can not for the life of me resolve. This is the method I'm using to connect: //Socket variable SOCKET s; //Establishes a connection to the server int server_connect(char* addr, int port) { //Start up Winsock WSADATA wsadata; int error = WSAStartup(0x0202, &wsadata); //Check if something happened if (error) return -1; //Verify Winock version if (wsadata.wVersion != 0x0202) { //Clean up and close WSACleanup(); return -2; } //Get the information needed to finalize a socket SOCKADDR_IN target; target.sin_family = AF_INET; //Address family internet target.sin_port = _WINSOCKAPI_::htons(port); //Port # target.sin_addr.s_addr = inet_addr(addr); //Create the socket s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (s == INVALID_SOCKET) { return -3; } //Try connecting if (connect(s, (SOCKADDR *)&target, sizeof(target)) == SOCKET_ERROR) { //Failed to connect return -4; } else { //Success return 1; } } The exact errors that I'm receiving are: Error 1 error LNK2019: unresolved external symbol _closesocket@4 referenced in function _server_disconnect [Project Path] Error 2 error LNK2019: unresolved external symbol _connect@12 referenced in function _server_connect [Project Path] Error 3 error LNK2019: unresolved external symbol _htons@4 referenced in function _server_connect [Project Path] Error 4 error LNK2019: unresolved external symbol _inet_addr@4 referenced in function _server_connect [Project Path] Error 5 error LNK2019: unresolved external symbol _socket@12 referenced in function _server_connect [Project Path] Error 6 error LNK2019: unresolved external symbol _WSAStartup@8 referenced in function _server_connect [Project Path] Error 7 error LNK2019: unresolved external symbol _WSACleanup@0 referenced in function _server_connect [Project Path] Error 8 error LNK1120: 7 unresolved externals [Project Path] 1 1 Many thanks!
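
    Those particular LNK2019 symbols (_WSAStartup@8, _connect@12 and friends) are the classic sign that the Winsock import library is not being passed to the linker, independent of the DLL aspect. A sketch of the usual MSVC fix:

    ```cpp
    // Sketch: make sure Ws2_32.lib is linked.  Either add it under
    // Linker -> Input -> Additional Dependencies, or pull it in from the
    // source with an MSVC pragma, as below.
    #include <winsock2.h>
    #pragma comment(lib, "Ws2_32.lib")   // resolves WSAStartup, socket, connect, ...

    // Minimal check that the import library is now found at link time.
    int winsock_selftest() {
        WSADATA wsadata;
        if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
            return -1;
        WSACleanup();
        return 0;
    }
    ```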

    Read the article

  • Two seperate tm structs mirroring each other

    - by BSchlinker
    Here is my current situation: I have two tm structs, both set to the current time I make a change to the hour in one of the structs The change is occurring in the other struct magically.... How do I prevent this from occurring? I need to be able to compare and know the number of seconds between two different times -- the current time and a time in the future. I've been using difftime and mktime to determine this. I recognize that I don't technically need two tm structs (the other struct could just be a time_t loaded with raw time) but I'm still interested in understanding why this occurs. void Tracker::monitor(char* buffer){ // time handling time_t systemtime, scheduletime, currenttime; struct tm * dispatchtime; struct tm * uiuctime; double remainingtime; // let's get two structs operating with current time dispatchtime = dispatchtime_tm(); uiuctime = uiuctime_tm(); // set the scheduled parameters dispatchtime->tm_hour = 5; dispatchtime->tm_min = 05; dispatchtime->tm_sec = 14; uiuctime->tm_hour = 0; // both of these will now print the same time! (0:05:14) // what's linking them?? // print the scheduled time printf ("Current Time : %2d:%02d:%02d\n", uiuctime->tm_hour, uiuctime->tm_min, uiuctime->tm_sec); printf ("Scheduled Time : %2d:%02d:%02d\n", dispatchtime->tm_hour, dispatchtime->tm_min, dispatchtime->tm_sec); } struct tm* Tracker::uiuctime_tm(){ time_t uiucTime; struct tm *ts_uiuc; // give currentTime the current time time(&uiucTime); // change the time zone to UIUC putenv("TZ=CST6CDT"); tzset(); // get the localtime for the tz selected ts_uiuc = localtime(&uiucTime); // set back the current timezone unsetenv("TZ"); tzset(); // set back our results return ts_uiuc; } struct tm* Tracker::dispatchtime_tm(){ time_t currentTime; struct tm *ts_dispatch; // give currentTime the current time time(&currentTime); // get the localtime for the tz selected ts_dispatch = localtime(&currentTime); // set back our results return ts_dispatch; }
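
    The link between the two structs comes from localtime() itself: it returns a pointer to a single static struct tm, so both pointers alias the same object. A small sketch of the usual fix, copying the result by value (POSIX localtime_r is the re-entrant alternative):

    ```cpp
    // Sketch: localtime() hands back a pointer to one static struct tm, so
    // two pointers from it always alias.  Copying by value gives two
    // independent objects.
    #include <cstdio>
    #include <ctime>

    int main() {
        std::time_t now = std::time(0);
        std::tm dispatch = *std::localtime(&now);   // copy out of the static buffer
        std::tm uiuc     = *std::localtime(&now);   // second, independent copy
        dispatch.tm_hour = 5;
        std::printf("dispatch %02d:%02d  uiuc %02d:%02d\n",
                    dispatch.tm_hour, dispatch.tm_min, uiuc.tm_hour, uiuc.tm_min);
        return 0;
    }
    ```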

    Read the article

  • When is ¦ not equal to ¦?

    - by Trey Jackson
    Background. I'm working with netlists, and in general, people specify different hierarchies by using /. However, it's not illegal to actually use a / as a part of an instance name. For example, X1/X2/X3/X4 might refer to instance X4 inside another instance named X1/X2/X3. Or it might refer an instance named X3/X4 inside an instance named X2 inside an instance named X1. Got it? There's really no "regular" character that cannot be used as a part of an instance name, so you resort to a non-printable one, or ... perhaps one outside of the standard 0..127 ASCII chars. I thought I'd try (decimal) 166, because for me it shows up as the pipe: ¦. So... I've got some C++ code which constructs the path name using ¦ as the hierarchical separator, so the path above looks like X1¦X2/X3¦X4. Now the GUI is written in Tcl/Tk, and to properly translate this into human readable terms I need to do something like the following: set path [getPathFromC++] ;# returns X1¦X2/X3¦X4 set humanreadable [join [split $path ¦] /] Basically, replace the ¦ with / (I could also accomplish this with [string map]). Now, the problem is, the ¦ in the string I get from C++ doesn't match the ¦ I can create in Tcl. i.e. This fails: set path [getPathFromC++] ;# returns X1¦X2/X3¦X4 string match $path [format X1%cX2/X3%cX4 166 166] Visually, the two strings look identical, but string match fails. I even tried using scan to see if I'd mixed up the bit values. But set path [getPathFromC++] ;# returns X1¦X2/X3¦X4 set path2 [format X1%cX2/X3%cX4 166 166] for {set i 0} {$i < [string length $path]} {incr i} { set p [string range $path $i $i] set p2 [string range $path2 $i $i] scan %c $p c scan %c $p2 c2 puts [list $p $c :::: $p2 $c2 equal? [string equal $c $c2]] } Produces output which looks like everything should match, except the [string equal] fails for the ¦ characters with a print line: ¦ 166 :::: ¦ 166 equal? 0 For what it's worth, the character in C++ is defined as: const char SEPARATOR = 166; Any ideas why a character outside the regular ASCII range would fail like this? When I changed the separator to (decimal) 28 (^\), things worked fine. I just don't want to get bit by a similar problem on a different platform. (I'm currently using Redhat Linux).
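
    One thing worth ruling out is the encoding boundary rather than Tcl itself: Tcl keeps strings in UTF-8 internally, and a single raw byte 166 coming from C++ is not the UTF-8 encoding of U+00A6 (that is the two-byte sequence 0xC2 0xA6), so two characters that render identically can still compare unequal. The snippet below is only an illustration of that byte-level difference; staying inside 7-bit ASCII (such as the 0x1C file-separator control character that worked) sidesteps the issue.

    ```cpp
    // Sketch: the raw Latin-1 byte 166 versus the UTF-8 bytes for U+00A6.
    #include <cstdio>

    int main() {
        const unsigned char raw = 166;                 // single raw byte from C++
        const unsigned char utf8[] = {0xC2, 0xA6, 0};  // UTF-8 encoding of U+00A6
        std::printf("raw byte : %02X\n", raw);
        std::printf("utf-8    : %02X %02X\n", utf8[0], utf8[1]);
        return 0;
    }
    ```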

    Read the article
