Search Results

Search found 22627 results on 906 pages for 'program transformation'. This is page 338 of 906.

  • [C] How can I handle weird errors from calculating acos / sin / atan2?

    - by Phrixus
    Has anyone seen this weird value while handling sin / cos / tan / acos math functions?

        ===THE WEIRD VALUE===
        -1.#IND00
        =====================

        void inverse_pos(double x, double y, double& theta_one, double& theta_two)
        {
            // Assume that L1 = 350 and L2 = 250
            double B = sqrt(x*x + y*y);
            double angle_beta = atan2(y, x);
            double angle_alpha = acos((L2*L2 - B*B - L1*L1) / (-2*B*L1));
            theta_one = angle_beta + angle_alpha;
            theta_two = atan2((y - L1*sin(theta_one)), (x - L1*cos(theta_one)));
        }

    This is the code I was working on. Under particular conditions - for example, when x and y are both 10 - it stores -1.#IND00 into theta_one and theta_two, which doesn't look like either characters or numbers. Without a doubt, atan2 / acos are the problem. But try/catch doesn't work either, because those double variables have successfully stored some value in them; the calculations that follow never complain about it and never break the program. I'm thinking of somehow forcing the program to crash on this value so that I can catch the error. Beyond that idea, I have no clue how to check whether theta_one and theta_two have stored this crazy value. Any good ideas? Thank you in advance.
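
    The -1.#IND00 that MSVC prints is an "indeterminate" quiet NaN; acos produces it whenever its argument falls outside [-1, 1], which is exactly what happens for a target point the two links cannot reach. NaN never raises a C++ exception - it just propagates silently through later arithmetic. A minimal sketch of a guard (std::isnan is C++11; on older MSVC, _isnan from <float.h> does the same job; the error type chosen here is illustrative, not from the original post):

        #include <cmath>
        #include <stdexcept>

        void inverse_pos_checked(double x, double y, double& theta_one, double& theta_two)
        {
            const double L1 = 350.0, L2 = 250.0;
            double B = std::sqrt(x*x + y*y);
            double c = (L2*L2 - B*B - L1*L1) / (-2*B*L1);
            if (c < -1.0 || c > 1.0)            // acos would return NaN: target unreachable
                throw std::domain_error("target outside workspace");
            theta_one = std::atan2(y, x) + std::acos(c);
            theta_two = std::atan2(y - L1*std::sin(theta_one), x - L1*std::cos(theta_one));
            if (std::isnan(theta_one) || std::isnan(theta_two))  // belt-and-braces check
                throw std::domain_error("inverse kinematics produced NaN");
        }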

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether the C++ global operator new can be replaced. It can:

        #include "stdafx.h"
        #include <iostream>
        #include <iomanip>
        #include <string>
        #include <new>
        using namespace std;

        class C {
        public:
            C() { cout << "CTOR" << endl; }
        };

        void* operator new(size_t size)
        {
            cout << "my overload of global plain old new" << endl;
            // try to allocate size bytes
            void* p = malloc(size);
            return p;
        }

        int main()
        {
            C* pc1 = new C;
            cin.get();
            return 0;
        }

    In the above, my definition of operator new is called. If I remove that function from the code, then the operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good. However, in my opinion, my implementation of operator new does NOT overload the one in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that operator new is so special that the one-definition rule does not apply here? Thanks.
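
    For context: the standard carves out the global operator new (and delete) as "replaceable" functions - a program may supply its own definition, and that definition replaces the library's default without this counting as an ODR violation. One caveat worth showing, as a hedged sketch rather than a complete allocator: a replaced operator new should normally be paired with a matching operator delete, or memory from the custom allocator may be released by the default one.

        #include <new>      // std::bad_alloc
        #include <cstdlib>  // std::malloc / std::free

        void* operator new(size_t size)
        {
            void* p = std::malloc(size);
            if (!p) throw std::bad_alloc();  // the default operator new throws on failure
            return p;
        }

        void operator delete(void* p)
        {
            std::free(p);
        }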

  • Replacing all GUIDs in a file with new GUIDs from the command line

    - by Josh Petrie
    I have a file containing a large number of occurrences of the string Guid="GUID HERE" (where GUID HERE is a unique GUID at each occurrence) and I want to replace every existing GUID with a new unique GUID. This is on a Windows development machine, so I can generate unique GUIDs with uuidgen.exe (which produces a GUID on stdout every time it is run). I have sed and such available (but no awk, oddly enough). I am basically trying to figure out if it is possible (and if so, how) to use the output of a command-line program as the replacement text in a sed substitution expression, so that I can make this replacement with a minimum of effort on my part. I don't need to use sed - if there's another way to do it, such as some crazy vim-fu or some other program, that would work as well - but I'd prefer solutions that utilize a minimal set of *nix programs, since I'm not really on *nix machines. To be clear, if I have a file like this:

        etc etc Guid="A"
        etc etc Guid="B"

    I would like it to become this:

        etc etc Guid="C"
        etc etc Guid="D"

    where A, B, C, D are actual GUIDs, of course. (For example, I have seen xargs used for things similar to this, but it's not available on the machines I need this to run on, either. I could install it if it's really the only way, although I'd rather not.)
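
    One way this can work, sketched under the assumption of GNU sed (for in-place editing and the 0,/re/ first-match-in-file address) and a uuidgen that prints one GUID per run: first tag every old GUID with a marker, so that freshly inserted GUIDs can never be matched again, then replace the tagged occurrences one at a time.

        # illustrative sketch - file names and the GuidOLD marker are made up
        sed 's/Guid="/GuidOLD="/g' input.txt > output.txt
        while grep -q 'GuidOLD="' output.txt; do
            new=$(uuidgen)
            sed -i "0,/GuidOLD=\"[^\"]*\"/s//Guid=\"$new\"/" output.txt
        done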

  • Is there a term for this concept, and does it exist in a static-typed language?

    - by Strilanc
    Recently I started noticing a repetition in some of my code. Of course, once you notice a repetition, it becomes grating - which is why I'm asking this question.

    The idea is this: sometimes you write different versions of the same class: a raw version, a locked version, a read-only facade version, etc. These are common things to do to a class, but the translations are highly mechanical: surround all the methods with lock acquires/releases, etc. In a dynamic language, you could write a function which did this to an instance of a class (e.g. iterate over all the functions, replacing them with a version which acquires/releases a lock).

    I think a good term for what I mean is 'reflected class'. You create a transformation which takes a class, and returns a modified-in-a-desired-way class. Synchronization is the easiest case, but there are others: make a class immutable [wrap methods so they clone, mutate the clone, and include it in the result], make a class read-only [assuming you can identify mutating methods], make a class appear to work with type A instead of type B, etc.

    The important part is that, in theory, these transformations make sense at compile time. Even though an ActorModel<T> has methods which change depending on T, they depend on T in a specific way knowable at compile time (ActorModel<T> methods would return a future of the original result type). I'm just wondering if this has been implemented in a language, and what it's called.
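
    For the dynamic-language version alluded to above, here is a minimal Python sketch of the synchronization case (one shared lock per class; a real version would need per-instance locks and care with special methods):

        import threading
        from functools import wraps

        def synchronized(cls):
            """Class transformer: wrap every public method in a lock."""
            lock = threading.RLock()

            def locked(method):
                @wraps(method)
                def wrapper(*args, **kwargs):
                    with lock:
                        return method(*args, **kwargs)
                return wrapper

            for name, attr in list(vars(cls).items()):
                if callable(attr) and not name.startswith("_"):
                    setattr(cls, name, locked(attr))
            return cls

        @synchronized
        class Counter:
            def __init__(self):   # untouched: the name starts with "_"
                self.n = 0
            def bump(self):       # wrapped: runs under the class lock
                self.n += 1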

  • Handling Corrupted JPEGs in C#

    - by ddango
    We have a process that pulls images from a remote server. Most of the time we're good to go: the images are valid, we don't time out, etc. However, every once in a while we see an error similar to this:

        Unhandled Exception: System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.
           at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
           at ConsoleApplication1.Program.Main(String[] args) in C:\images\ConsoleApplication1\ConsoleApplication1\Program.cs:line 24

    After not being able to reproduce it locally, we looked closer at the image and realized that there were artifacts, making us suspect corruption. We created an ugly little unit test with only the image in question, and were unable to reproduce the error on Windows 7, as expected. But after running our unit test on Windows Server 2008, we see this error every time. Is there a way to specify non-strictness for JPEGs when writing them? Some sort of check/fix we can use? Unit test snippet:

        var r = ReadFile("C:\\images\\ConsoleApplication1\\test.jpg");
        using (var imgStream = new MemoryStream(r))
        {
            using (var ms = new MemoryStream())
            {
                var guid = Guid.NewGuid();
                var fileName = "C:\\images\\ConsoleApplication1\\t" + guid + ".jpg";
                Image.FromStream(imgStream).Save(ms, ImageFormat.Jpeg);
                using (FileStream fs = File.Create(fileName))
                {
                    fs.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length);
                }
            }
        }
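
    One lever GDI+ does expose is the Image.FromStream(Stream, bool, bool) overload, whose third parameter (validateImageData) can be set to false to skip validation of the image data; drawing the result into a fresh Bitmap then re-encodes it through a clean object. A hedged sketch of that approach applied to the test above - it may still throw on badly truncated files:

        using (var imgStream = new MemoryStream(r))
        using (var img = Image.FromStream(imgStream, false, false)) // don't validate image data
        using (var copy = new Bitmap(img))                          // re-encode via a clean copy
        {
            copy.Save(fileName, ImageFormat.Jpeg);
        }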

  • Maximum memory which malloc can allocate

    - by Vikas
    I was trying to figure out how much memory I can malloc, to the maximum extent, on my machine (1 GB RAM, 160 GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program's memory consumption exceeds a certain level, the computer stops working because other applications do not get the memory they require. So to confirm, I wrote a small program in C:

        int main() {
            int *p;
            while (1) {
                p = (int *)malloc(4);
                if (!p) break;
            }
        }

    I was hoping that there would be a point when memory allocation would fail and the loop would break, but my computer hung, as it was an infinite loop. I waited for about an hour and finally had to forcibly shut down my computer. Some questions: Does malloc allocate memory from the HD as well? What was the reason for the above behaviour? Why didn't the loop break at any point in time? Why wasn't there any allocation failure?
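
    Two points of context: malloc hands out virtual memory, which Windows backs with the page file on disk as well as RAM, and each tiny request also carries per-block bookkeeping overhead - so a 4-byte-at-a-time loop thrashes the pager long before the address space runs out. A sketch that reaches the failure point quickly by asking for large chunks instead (the chunk size is arbitrary):

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            size_t chunks = 0;
            while (malloc(1 << 20))   /* 1 MiB per request, deliberately never freed */
                chunks++;
            printf("allocated %lu MiB before malloc returned NULL\n", (unsigned long)chunks);
            return 0;
        }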

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output.

    This is my code:

        input_data = open('input.txt', 'r')    # this opens the file to read it
        output_data = open('output.txt', 'w')  # this opens a file to write to

        # this prompts the user for a word
        userStr = raw_input('Enter the word to be replaced:')
        # this prompts the user for the replacement word
        userReplace = raw_input('What should I replace all occurences of ' + userStr + ' with?')

        for line in input_data:
            words = line.split()
            if userStr in words:
                output_data.write(line + userReplace)
            else:
                output_data.write(line)

        # this tells the user that we have replaced the words they gave us
        print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt'

        input_data.close()   # this closes the documents we opened before
        output_data.close()

    It won't replace anything in the output file. Help!
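
    The write call is the giveaway: output_data.write(line + userReplace) appends the new word after the untouched line instead of substituting it. A minimal sketch of the fix - rebuild each line word by word (this assumes words are separated by single spaces and matched exactly, which may not satisfy every test case):

        for line in input_data:
            words = line.rstrip('\n').split(' ')
            fixed = [userReplace if w == userStr else w for w in words]
            output_data.write(' '.join(fixed) + '\n')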

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program, the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object:

        // MyDataTableRow has a byte[] column named BackgroundImageByteArray
        MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow();
        using (MemoryStream stream = new MemoryStream())
        {
            this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
            myDataRow.BackgroundImageByteArray = stream.ToArray();
        }

    Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an Out of Memory exception:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
        {
            this.Panel.BackgroundImage = Image.FromStream(stream);
        }

    The streams are the same length. I don't understand how one throws an out-of-memory exception and the other doesn't. How can I load this bitmap? P.S. I've also tried:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
        {
            stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length);
            // throws OoM exception here
        }
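
    Worth knowing when debugging this: GDI+ throws OutOfMemoryException for reasons that have nothing to do with memory - most commonly when it cannot decode the bytes it was given - and Image.FromStream also requires the stream to stay open for the lifetime of the Image, which the using block here violates. A hedged sketch that sidesteps both issues by copying into an independent Bitmap (if it still throws, the stored byte array itself is likely truncated or corrupt):

        using (var stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
        using (var img = Image.FromStream(stream))
        {
            // Bitmap(Image) makes a copy that no longer depends on the stream
            this.Panel.BackgroundImage = new Bitmap(img);
        }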

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP with multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:

        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }

    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:

        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();

            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck) { cons.Add(this); }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line)) break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }

    The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?
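
    One simple way to bridge the serial port into this design, sketched under the assumption that the lock and connection list are made accessible (the Connection.Lck and Connection.Cons members below are hypothetical accessors for lck and cons): let SerialPort's DataReceived event do the broadcasting, so the port is read on its event thread and every TCP client receives the same data.

        sp.DataReceived += (sender, e) =>
        {
            string data = sp.ReadExisting();
            lock (Connection.Lck)
            {
                foreach (Connection con in Connection.Cons)
                {
                    con.writer.Write(data);
                    con.writer.Flush();
                }
            }
        };
        sp.Open();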

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

      1. Invoice arrives, in the form of an email attachment (an Excel spreadsheet).
      2. Monkey opens the invoice and copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
      3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price.
      4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
      5. Monkey fires up HyperTerminal, types in "AT", disconnects.
      6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist (a sketch of the Excel step follows below):

      - Monkey asks Thunderbird (mail server perhaps?) for the attachment
      - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
      - Monkey parses the output, does the complex calculations
      - TODO: find a way to get the boss-generated prices with minimal manual labor involved
      - Monkey connects to the database, inserts data
      - Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
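
    For the "tell Excel to dump a CSV" step, one low-friction route on Windows is driving Excel itself over COM automation. A hedged Python sketch (requires the pywin32 package; the file paths are made up):

        import win32com.client

        excel = win32com.client.Dispatch("Excel.Application")
        wb = excel.Workbooks.Open(r"C:\invoices\shipment.xls")
        wb.SaveAs(r"C:\invoices\shipment.csv", FileFormat=6)  # 6 = xlCSV
        wb.Close(SaveChanges=False)
        excel.Quit()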

  • Accessing the value of an input element with XPath in an XSLT

    - by asymmetric
    Hi! I'm developing a web app that has a button that triggers an XSLT transformation of the document DOM, with a stylesheet fetched via AJAX. Here's a portion of the HTML:

        <html>
          <head>
            <title>Static Javascript-based XMR Form Creator</title>
          </head>
          <body>
            <h1 id="title">Static Javascript-based XMR Form Creator</h1>
            <div class="opt_block" id="main_opts">
              Form name <input type="text" id="form_name" />
              Form cols <input type="text" id="form_cols" size="3" maxlength="3" />
            </div>
            <button id="generate">Generate source</button>
            <textarea rows="20" cols="50" id="xmr_source" ></textarea>
          </body>

    Inside the stylesheet, I want to access the value attribute of the first input field, the one with id form_name. The XSLT looks like this:

        <xsl:template match="/html/body/div[@id = 'main_opts']">
          <form>
            <xsl:attribute name="fname">
              <xsl:value-of select="input[@id = 'form_name']/@value" />
            </xsl:attribute>
          </form>
        </xsl:template>

    The problem is that the XPath that should do the work:

        <xsl:value-of select="input[@id = 'form_name']/@value" />

    returns nothing. Can anyone help?
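
    A likely culprit, offered as a hypothesis: typing into a text field updates the element's value *property*, not its value *attribute*, and the markup above never declares a value attribute at all - so there is nothing in the DOM for the XPath to select. A small sketch of a workaround that mirrors the live value back into the attribute just before running the transform:

        // in the button's click handler, before the XSLTProcessor runs
        var field = document.getElementById('form_name');
        field.setAttribute('value', field.value);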

  • Packing an exe + dll into one executable (not .NET)

    - by Bluebird75
    Hi, is anybody aware of a program that can pack several DLLs and a .EXE into one executable? I am not talking about the .NET case here; I am talking about general DLLs, some of which I generate in C++, some of which are external DLLs I have no control over. My specific case is a Python program packaged with py2exe, where I would like to "hide" the other DLLs by packing them. The question is general enough, though. The things I had a look at:

      - ILMerge: specific to .NET
      - NETZ: specific to .NET
      - UPX: does DLL compression, but not multiple DLL + EXE packing
      - FileJoiner: almost got it. It can pack executable + anything into one exe, but when opened, it will launch the default opener for every file that was packed. So, if the user has dlldepend installed, it will launch it (because that's the default DLL opener).

    Maybe that's not possible? Summary of the answers: DLL opening is managed by the OS, so packing DLLs into an executable means that at some point they need to be extracted to a place where the OS can find them. No magic bullet. So, what I want is not possible. Unless... we change something in the OS. Thanks Conrad for pointing me to ThinInstall, which virtualises the application and the OS loading mechanism. With ThinInstall, it is possible to pack everything into one exe (DLLs, registry settings, ...).

  • In which year was the date the same as the original year?

    - by Marta
    It's my first question on this site, but I've always found this site really useful. What I mean with my question is:

      - You ask the person to give a date (e.g. Fill in a date [dd-mm-yyyy]: 16-10-2013)
      - You then have to ask for an interval between 2 years (e.g. Give an interval [yyyy-yyyy]: 1800-2000)

    When the program runs, it has to show what day of the week the given date is. In this case it was a Wednesday. Then the program has to look for the years within the interval in which the date 16 October also fell on a Wednesday. So in the end it has to look something like this:

        Fill in a date: [dd-mm-yyyy]: 16-10-2013
        Give an interval [yyyy-yyyy]: 1900-2000
        16 October was a wednesday in the following years:
        1905 1911 1916 1922 1933 1939 1944 1950 1961 1967 1972 1978 1989 1995 2000
        The full date is Wednesday 16 October, 2013

    The small (or biggest) problem is that I am not allowed to use the Date function in Java. If someone can help me with the second part I would be really, really happy, because I have no idea how I am supposed to do this. To find out what day of the week the given date falls on, I use the Zeller congruence (a fix for the second part is sketched below):

        class Day {
            Date date; // to grab the day, month and year from the (custom) Date class
            int day;
            final static String[] DAYS_OF_WEEK = {
                "Saturday", "Sunday", "Monday", "Tuesday",
                "Wednesday", "Thursday", "Friday"
            };

            public void dayWeekInterval() {
                // code to grab the interval from the Interval class, and doing stuff here
            }

            public int dayOfTheWeek() {
                int q = date.getDay();   // day of the month (assuming the custom Date class exposes it)
                int m = date.getMonth();
                int y = date.getYear();
                if (m < 3) {
                    m += 12;
                    y -= 1;
                }
                int k = y % 100;
                int j = y / 100;
                day = (q + ((m + 1) * 26) / 10 + k + k / 4 + j / 4 + 5 * j) % 7;
                return day;
            }

            public String toString() {
                return "" + DAYS_OF_WEEK[day];
            }
        }
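
    For the second part, the same Zeller formula answers it directly: compute the weekday index for the given date once, then loop over the interval, recomputing the index with only the year changed. A self-contained sketch (the date and interval bounds are hard-coded for illustration; a real version would take them from the prompts):

        class SameWeekday {
            // Zeller's congruence: 0 = Saturday, 1 = Sunday, ..., 6 = Friday
            static int zeller(int q, int m, int y) {
                if (m < 3) { m += 12; y -= 1; }
                int k = y % 100, j = y / 100;
                return (q + ((m + 1) * 26) / 10 + k + k / 4 + j / 4 + 5 * j) % 7;
            }

            public static void main(String[] args) {
                int target = zeller(16, 10, 2013);           // index 4 = Wednesday
                for (int year = 1900; year <= 2000; year++) {
                    if (zeller(16, 10, year) == target) {
                        System.out.print(year + " ");
                    }
                }
                System.out.println();
            }
        }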

  • How do I make VC++'s debugger break on exceptions?

    - by Mason Wheeler
    I'm trying to debug a problem in a DLL written in C that keeps causing access violations. I'm using Visual C++ 2008, but the code is straight C. I'm used to Delphi, where if an exception occurs while running under the debugger, the program will immediately break to the debugger and give you a chance to examine the program state. In Visual C++, though, all I get is a message in the Output tab:

        First-chance exception at blah blah blah: Access violation reading location 0x04410000.

    No breaks, nothing. It just goes and unwinds the stack until it's back in my Delphi EXE, which recognizes something's wrong and alerts me there, but by that point I've lost several layers of call stack and I don't know what's going on. I've tried other debugging techniques, but whatever it's doing is taking place deep within a nested loop inside a C macro that's getting called more than 500 times, and that's just a bit beyond my skill (or my patience) to trace through. I figure there has to be some way to get the "first-chance" exception to actually give me a "chance" to handle it. There's probably some "break immediately on first-chance exceptions" configuration setting I don't know about, but it doesn't seem to be all that discoverable. Does anyone know where it is and how to enable it?

  • IBM WESB/WAS JCA security configuration

    - by user594883
    I'm working with IBM tools. I have a WebSphere ESB (WESB) and a CICS Transaction Gateway (CTG). The basic set-up is as follows: a SOAP service needs data from CICS. The SOAP service connects to the service bus (WESB) to handle data and protocol transformation, and then WESB calls the CTG, which in turn calls CICS; the reply is handled vice versa (synchronously). WESB calls the CTG using a Resource Adapter and JCA connector (or CICS adapter, as it is called in WESB).

    Now, I have all the pieces in place and working. My question is about the security, and even though I'm working with WESB, the answer is probably the same as in WebSphere Application Server (WAS). The Resource Adapter is secured using JAAS - J2C authentication data. I have configured the security using a J2C authentication data entry, so basically I have a reference in the application I'm running, and at runtime the application does a lookup for the security attributes from the server. So basically I'm always accessing the CICS adapter with the same security reference.

    My problem is that I need to access the resource in a more dynamic way in the future. The security cannot be welded into the application anymore, but must instead be given as a parameter. Could some WESB or WAS guru help me out with how exactly this could be done in WESB/WAS?

  • Is there an x86 or x64 emulator that passes system calls back to the Windows API?

    - by Chris Lomont
    I want to emulate Windows programs (not VM, true emulation) under Windows. This would require the emulator to make calls back to the system APIs, but the program itself would be emulated. The reason is I want to change the opcode formats for research purposes. The process should be:

      1. Take an existing program.
      2. Disassemble it, and then reassemble it with my new opcode formats.
      3. Put the new format into the PE with a stub calling the emulator and passing the new code.

    The emulator would have to pass system calls from the emulated side back to Windows API calls. I can do all these steps, except I need an open source emulator with the ability to pass the API calls out. I could try Bochs or QEMU, but I think I'd have to add in the system calls, which I could do if needed. I wonder if there is already something closer to what I need. I know I would have to change the instruction decoding in the emulator to match my new formats, but that is a given. Thanks.

  • Is it possible to spoof or reuse VIEWSTATE or detect if it is protected from modification?

    - by Peter Jaric
    Question: ASP and ASP.NET web applications use a value called VIEWSTATE in forms. From what I understand, this is used to persist some kind of state on the client between requests to the web server. I have never worked with ASP or ASP.NET and need some help with two questions (and some sub-questions):

      1. Is it possible to programmatically spoof/construct a VIEWSTATE for a form? Clarification: can a program look at a form and from that construct the contents of the base64-encoded VIEWSTATE value?
         a. Or can it always just be left out?
         b. Can an old VIEWSTATE for a particular form be reused in a later invocation of the same form, or would it just be luck if that worked?
      2. I gather from http://msdn.microsoft.com/en-us/library/ms972976.aspx#viewstate_topic12 that it is possible to turn on security so that the VIEWSTATE becomes secure from spoofing. Is it possible for a program to detect that a VIEWSTATE is safeguarded in such a way?
         a. Is there a one-to-one mapping between the occurrence of EVENTVALIDATION values and secure VIEWSTATEs?

    Regarding 1 and 2: if yes, can I have a hint about how I would do that? For 2, I am thinking I could base64-decode the value and search for a string that is always found in unencrypted VIEWSTATEs. "First:"? Something else?

    Background: I have made a small tool for detecting and exploiting so-called CSRF vulnerabilities. I use it to quickly make proofs of concept of such vulnerabilities that I send to the affected site owners. Quite often I encounter these forms with a VIEWSTATE, and I don't know whether they are secure or not. Edit 1: Clarified question 1 somewhat. Edit 2: Added text in italics.
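
    On the detection idea floated in question 2, a hedged sketch of how the "decode and look for recognisable plaintext" heuristic could be automated: an unencrypted serialized VIEWSTATE decodes to mostly printable structure, while an encrypted one looks like uniform random bytes. The threshold below is arbitrary, and this is only a heuristic, not proof of protection (a MAC can be appended to otherwise-readable state):

        import base64

        def viewstate_looks_unprotected(vs):
            raw = base64.b64decode(vs)
            printable = sum(1 for b in raw if 32 <= ord(b) < 127)
            return printable / float(len(raw)) > 0.6  # arbitrary cut-off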

  • Can get members, but not count of NSMutableArray

    - by Curyous
    I'm filling an NSMutableArray from a Core Data call. I can get the first object, but when I try to get the count, the app crashes with Program received signal: "EXC_BAD_ACCESS". How can I get the count? Here's the relevant code - I've put a comment on the line where it crashes.

        - (void)viewDidLoad {
            [super viewDidLoad];
            managedObjectContext = [[MySingleton sharedInstance] managedObjectContext];
            if (managedObjectContext != nil) {
                charactersRequest = [[NSFetchRequest alloc] init];
                charactersEntity = [NSEntityDescription entityForName:@"Character"
                                               inManagedObjectContext:managedObjectContext];
                [charactersEntity retain];
                [charactersRequest setEntity:charactersEntity];
                [charactersRequest retain];
                NSError *error;
                characters = [[managedObjectContext executeFetchRequest:charactersRequest
                                                                  error:&error] mutableCopy];
                if (characters == nil) {
                    NSLog(@"Did not get results for characters: %@", error.localizedDescription);
                } else {
                    [characters retain];
                    NSLog(@"Found some character(s).");
                    Character* character = (Character *)[characters objectAtIndex:0];
                    NSLog(@"Name of first one: %@", character.name);
                    NSLog(@"Found %@ character(s).", characters.count);
                    // Crashes on this line with - Program received signal: "EXC_BAD_ACCESS".
                }
            }
        }

    And the previous declarations from the header file:

        @interface CrowdViewController : UITableViewController {
            NSManagedObjectContext *managedObjectContext;
            NSFetchRequest *charactersRequest;
            NSEntityDescription *charactersEntity;
            NSMutableArray *characters;
        }

    I'm a bit perplexed and would really appreciate finding out what is going on.
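
    The crashing line itself points at a likely cause: %@ tells NSLog to send -description to what it assumes is an object, but characters.count is an NSUInteger scalar, so the integer gets dereferenced as if it were a pointer. A sketch of the usual fix:

        // use an integer format specifier for -count, not %@
        NSLog(@"Found %lu character(s).", (unsigned long)[characters count]);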

  • png image blurry when loaded onto texture

    - by Chris
    I have created a PNG image in Photoshop, with transparencies, that I have loaded into an OpenGL program. I have bound it to a texture, but in the program the picture looks blurry and I'm not sure why.

    Loading code:

        // Texture loading object
        nv::Image title;

        // Return true on success
        if (title.loadImageFromFile("test.png"))
        {
            glGenTextures(1, &titleTex);
            glBindTexture(GL_TEXTURE_2D, titleTex);
            glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
            glTexImage2D(GL_TEXTURE_2D, 0, title.getInternalFormat(),
                         title.getWidth(), title.getHeight(), 0,
                         title.getFormat(), title.getType(), title.getLevel(0));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
        }
        else
            MessageBox(NULL, "Failed to load texture", "End of the world",
                       MB_OK | MB_ICONINFORMATION);

    Display code:

        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, titleTex);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glTranslatef(-800, 0, 0.0);
        glColor3f(1, 1, 1);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
        glTexCoord2f(0.0, 1.0); glVertex2f(0, 600);
        glTexCoord2f(1.0, 1.0); glVertex2f(1600, 600);
        glTexCoord2f(1.0, 0.0); glVertex2f(1600, 0);
        glEnd();
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
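
    A plausible explanation, offered as a hypothesis: the quad is 1600x600, so unless the PNG has exactly those dimensions (and the projection maps units 1:1 to pixels), the sampler is stretching or shrinking the image through the GL_LINEAR / mipmap filtering requested above, which reads as blur. A quick way to test that theory is to rule filtering out:

        /* sketch: disable mipmapping and interpolation for crisp 1:1 sampling */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    If that makes the image sharp (if blocky), the real fix is to draw the quad at the texture's native pixel size rather than changing filters.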

  • What can I do to get Mozilla Firefox to preload the eventual image result?

    - by Dalal
    I am attempting to preload images using JavaScript. I have declared an array as follows, with image links from different places:

        var imageArray = new Array();
        imageArray[0] = new Image();
        imageArray[1] = new Image();
        imageArray[2] = new Image();
        imageArray[3] = new Image();
        imageArray[0].src = "http://www.bollywoodhott.com/wp-content/uploads/2008/12/arjun-rampal.jpg";
        imageArray[1].src = "http://labelleetleblog.files.wordpress.com/2009/06/josie-maran.jpg";
        imageArray[2].src = "http://1.bp.blogspot.com/_22EXDJCJp3s/SxbIcZHTHTI/AAAAAAAAIXc/fkaDiOKjd-I/s400/black-male-model.jpg";
        imageArray[3].src = "http://www.iill.net/wp-content/uploads/images/hot-chick.jpg";

    The image fade and transformation effects that I am doing with this array work properly for the first 3 images, but for the last one, imageArray[3], the actual image data does not get preloaded, and it completely ruins the effect, since the image data loads AFTERWARDS, only at the time it needs to be displayed, it seems. This happens because the last link http://www.iill.net/wp-content/uploads/images/hot-chick.jpg is not a direct link to the image; if you go to that link, your browser will redirect you to the ACTUAL location. Now, my image preloading code works perfectly well in Chrome, and the effects look great, because it seems that Chrome preloads the actual data - the EVENTUAL image that is to be shown. This means that in Chrome, if I preloaded an image that redirects to 'stop stealing my bandwidth', then the image that gets preloaded is 'stop stealing my bandwidth'. How can I modify my code to get Firefox to behave the same way?
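
    One browser-agnostic approach, sketched here as an assumption rather than a guaranteed Firefox fix: instead of trusting that setting src has finished the work, wait for each image's onload event (which fires only after any redirect has been followed and the final bytes decoded) before starting the effects.

        var urls = [/* the four URLs above */];
        var remaining = urls.length;
        var imageArray = [];
        for (var i = 0; i < urls.length; i++) {
            var img = new Image();
            img.onload = img.onerror = function () {
                if (--remaining === 0) {
                    startEffects(); // hypothetical: kick off the fades only now
                }
            };
            img.src = urls[i]; // assign src *after* the handlers
            imageArray.push(img);
        }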

  • EXEC_BAD_ACCESS error comes in my application on iPhone

    - by Jaimin
    When I print a dictionary, I get this error. Here my dictTf is an NSMutableDictionary. On the home page I select a few fields and click Find, and a new view comes up with the result. Then I go back and click Find again without changing anything, and the result still comes up properly. But at this point, when I go back again, printing the dictionary shows this and the EXEC_BAD_ACCESS error comes:

        Printing description of dictTf:
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        (gdb)

  • Usage of Visual Memory Leak Detector

    - by Yan Cheng CHEOK
    I found a very interesting memory leak detector for Visual C++: http://www.codeproject.com/KB/applications/visualleakdetector.aspx. I tried it out, but I cannot make it detect a memory leak in my code. I am using MS Visual Studio 2008. Have I missed a step?

        #include "stdafx.h"
        #include "vld.h"
        #include <iostream>

        void fun() {
            new int[1000];
        }

        int _tmain(int argc, _TCHAR* argv[]) {
            fun();
            std::cout << "leak?" << std::endl;
            getchar();
            return 0;
        }

    The output when I run in debug mode is:

        ...
        ...
        'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll', Symbols loaded.
        'Test.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll', Symbols loaded (source information stripped).
        'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.DebugCRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_f863c71f\msvcp90d.dll', Symbols loaded.
        'Test.exe': Loaded 'C:\Program Files\Visual Leak Detector\bin\dbghelp.dll', Symbols loaded (source information stripped).
        Visual Leak Detector Version 1.9d installed.
        No memory leaks detected.
        Visual Leak Detector is now exiting.
        The program '[5468] Test.exe: Native' has exited with code 0 (0x0).

  • User input in Perl with IO::Socket

    - by David
    I am trying to make a Perl program which allows a user to input the host and the port number of a foreign host to connect to using IO::Socket. It allows me to run the program and input a host and a port, but it never connects, and says "Could not connect to [host] at c:\users\USER\Documents\code\perl\sql.pl line 18, <STDIN> line 2." What am I doing wrong with this code, shown below? And also, how can I have input validation on my host, which can be either a host name or an IP address? Thanks a bunch! Code below:

        use IO::Socket;

        print "Host to connect to: ";
        chomp ($host = <STDIN>);
        print "Port to connect with: ";
        chomp ($port = <STDIN>);
        while (($port > 65535) || ($port <= 0)) {
            print "Port to connect with [Port > 0 < 65535] : ";
            chomp ($port = <STDIN>);
        }
        print "\nConnecting to host $host on port $port\n";
        $socket = new IO::Socket::INET (
            LocalHost => '$host',
            LocalPort => '$port',
            Proto     => 'tcp',
            Listen    => 5,
            Reuse     => 1
        );
        die "Could not connect to $host";
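
    Two things stand out, offered as the likely fix: single-quoted '$host' and '$port' are the literal strings $host and $port (no interpolation happens inside single quotes), and LocalHost/LocalPort plus Listen describe a listening server, whereas connecting out to a foreign host needs PeerAddr/PeerPort. A minimal sketch:

        my $socket = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => $port,
            Proto    => 'tcp',
        ) or die "Could not connect to $host:$port - $!";

    IO::Socket::INET also resolves host names itself, so a simple form of validation is to attempt the connection and report $! on failure.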

  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of Messages::CSomeClass transmitted to 127.0.0.1). I do this with code similar to the following:

        std::string getMessageName(void) const
        {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of type_info::name is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a null-terminated
        string representing the human-readable name of the type. The memory pointed to
        is cached and should never be directly deallocated.

    However, when I exit my program in the debugger, any "new" use of type_info::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed. While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how type_info may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on type_info::name, but I'd like to know anyway if there's something I've missed.
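
    One MSVC-specific avenue worth mentioning, hedged because it changes the output format: type_info::raw_name() returns the decorated name directly, without the lazily allocated, cached undecorated string that name() builds - so it should not show up in leak reports, at the cost of logging names like ".?AVCSomeClass@Messages@@".

        std::string getMessageName(void) const
        {
            // raw_name(): decorated, but no per-type heap allocation (MSVC extension)
            return std::string(typeid(*this).raw_name());
        }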

  • How can I compile .NET 3.5 C# code on a system without Visual Studio?

    - by JimEvans
    I have some C# code that uses constructs specific to .NET 3.5. When you install the .NET Framework distribution, you get the C# compiler installed with it (csc.exe). But even if I specify the csc.exe in C:\Windows\Microsoft.NET\Framework\v3.5, I cannot compile the code on a computer with only the .NET Framework installed and no Visual Studio. I am able to compile code that uses v2.0 constructs without difficulty. How can I accomplish this? Here is a sample that demonstrates my problem:

        using System;

        class Program
        {
            public static void Main()
            {
                // The MacOSX value of the PlatformID enum was added after .NET v2.0
                if (Environment.OSVersion.Platform == PlatformID.MacOSX)
                {
                    Console.WriteLine("Found mac");
                }
                Console.WriteLine("Simple program");
            }
        }

    When compiling this code using csc.exe, I receive the following error:

        test.cs(9, 58): error CS0117: 'System.PlatformID' does not contain a definition for 'MacOSX'

    When executing csc.exe /? I receive the banner:

        Microsoft (R) Visual C# 2008 Compiler version 3.5.21022.8
        for Microsoft (R) .NET Framework version 3.5
        Copyright (C) Microsoft Corporation. All rights reserved.
