Search Results

Search found 13401 results on 537 pages for 'double checked'.

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations.

    As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct. First of all, I use two global variables to control the two threads: if a global variable is set to 1, the thread can start working; otherwise it must not work. Each thread checks this in an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and it also fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
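
    The spin-loop scheme described here is also a plausible cause of the optimized-build failures: without real synchronization primitives, the compiler is free to cache the flag variables in registers. A minimal sketch of the conventional alternative, using C++11 condition variables (the struct and function names are illustrative, not taken from the linked repository):

      #include <condition_variable>
      #include <mutex>

      struct ScanlineJob {
          std::mutex m;
          std::condition_variable cv;
          bool ready = false;   // set by main when a triangle is prepared
          bool done  = false;   // set by the worker when its scanlines are drawn
      };

      void worker_loop(ScanlineJob& job) {
          for (;;) {
              std::unique_lock<std::mutex> lk(job.m);
              job.cv.wait(lk, [&]{ return job.ready; });  // sleeps instead of spinning
              job.ready = false;
              // rasterize_scanlines(job);                 // even or odd rows, per thread
              job.done = true;
              job.cv.notify_all();                         // wake the waiting main thread
          }
      }

    The main thread would then set ready under the same mutex, call notify_all, and wait on done instead of burning a core in an empty while loop.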

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in Gnome 3.2, LXDE, XFCE, and KDE. I am running Ubuntu 11.10 and this problem started happening from a fresh install. I have noticed that if I restart Gnome or otherwise re-log-in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that account for it and there are not any processes that are using excessive CPU cycles. Due to the symptoms I am guessing that there is an underlying package that interacts with all D.E.s that is the source of the problem.

    Just to clarify: in extreme situations, if I let the slowdown continue without a restart of the D.E., my system will finally get to a point where everything, even my clock, freezes. The time over which Gnome slows down (noticeably after 17 hrs) is much less than LXDE or XFCE, but eventually they freeze as well. I didn't mention Unity since I haven't used it enough to verify the problem, but I don't see why it should be different.

    Just in case this is relevant, I Suspend my computer rather than turn it off. I do this because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode.

    I know that it is possible that this problem is due to an interaction between two or more of the applications that I use, and as such it may not be able to be duplicated by others. In the end I am just wondering (a) are other people experiencing this issue and (b) does anyone have some advice on where to start looking for a solution if one is not already known. I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem.

    Edit: I took a video of my latest freeze that you can watch at http://youtu.be/flKUqUzCmdE, sorry about the audio.

    Edit: Since that video I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable so I had to uninstall them. Any thoughts?

  • Is Ubuntu recognizing and/or using my NVIDIA graphics card?

    - by user212860
    This is my first post here, and I'm pretty new to Ubuntu/Linux. I currently have no other OS except for Ubuntu 13.10 (I used to have Win7 until I got a new terabyte hard drive). My current PC build, if any of this helps:

      CPU: Intel i5 quad-core
      Graphics: NVIDIA GeForce GTX 650
      RAM: 8 GB
      HDD: 1 TB SATA 3
      Motherboard: MSi Z77 A-G41
      OS: Ubuntu 13.10

    So I recently installed Ubuntu 13.10 and put Steam on it, and I'm seeing that my games run a lot slower than they did when I had Win7. I figured it was a graphics problem, so I checked System Settings > Details > Overview. It says in "Graphics" that I have "Gallium 0.4 on NVE7" (don't really know what that is). Does this mean that Ubuntu is not using my graphics card?

    In System Settings > Software & Updates > Additional Drivers, it clearly shows this: "NVIDIA Corporation: GK107 [GeForce GTX 650] - This device is using an alternative driver" (and then it shows a list of drivers that I can switch back and forth to).

    So this is a bit confusing. In Software & Updates, it clearly shows that I have my NVIDIA card installed, and that I have a driver selected for it. But in System Settings, it shows I have some Gallium 0.4 thing. I had done a bit of research, and ended up typing the command "lspci | grep VGA" in the Terminal. It showed this in response:

      VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)

    The Terminal seems to recognize my graphics card. What it looks like to me is that I don't have the proper driver, and I might be using my CPU's integrated graphics. When I switch around which driver I am using in that list, it still does not see my card in System Settings. Some of the drivers in the list give me some sort of OpenGL error when I try to run a game. It might just be that my games are running slow because the game developers have not optimized them for Ubuntu that well. However, that still doesn't take away from the fact that System Settings is not showing my NVIDIA card.

    TL;DR version: How do I know if my video card is being recognized/used? If my video card is not being used, what is the best way to fix that?

    Please make your answers easy to understand. I do not mind wordy responses, as long as I can follow what you're saying. Any help would be greatly appreciated! Thanks, Jabber5
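
    Two standard commands make the driver situation unambiguous ("Gallium 0.4 on NVE7" is the renderer string of the open-source nouveau driver; glxinfo comes from the mesa-utils package):

      lspci -k | grep -A 3 VGA            # "Kernel driver in use:" shows nouveau vs. nvidia
      glxinfo | grep "OpenGL renderer"    # Gallium/NVE7 = nouveau; "GeForce GTX 650" = NVIDIA driver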

  • 12.04 Booting into Terminal

    - by user170796
    To preface this, I would like to say that I am completely new to Ubuntu and have essentially zero programming experience/experience working with command line and terminal. I installed Ubuntu because I would like to get into programming. If you could provide me with the simplest instructions possible, I would be grateful.

    I have a Lenovo Ideapad Y500 (Intel i7, NVidia GT 750m, 1TB HDD, 16GB SSD cache, 8GB RAM) with Windows 8 on it. Using a Live CD, I installed Ubuntu 12.04 onto a 75 GB partition. During the installation, I kept all default settings except for one thing: I decided to encrypt my home folder, and so checked the corresponding box. The installation completed, and I restarted. Once I restarted, I saw the options:

      "Ubuntu, with Linux 3.2.0-23-generic"
      "Ubuntu, with Linux 3.2.0-23-generic (recovery mode)"
      "Memory test (memtest86+)"
      "Memory test (memtest86+, serial console 115200)"
      "Windows Recovery Environment (loader) (on /dev/sdb3)"
      "Windows 8 (loader) (on /dev/sdb5)"
      "System Setup"

    I chose the first option, and was directed to a screen with the Ubuntu logo and the row of five dots below that change from orange to white. Then, I was brought to a full screen terminal that prompted me to login, which I did. I saw no option to boot into a GUI at all, and am lost. I've been searching around and have tried the "startx" command to no avail. Should the command have some sort of context or something?

    I've also tried selecting the recovery mode option from the boot manager. I've tried the resume option from the following menu, which eventually just shuts down the computer after displaying a lot of scrolling text that's too fast for me to read. I've also tried the failsafex mode from the recovery mode menu, which only brings up a terminal box at the bottom of the window that covers the entire bottom part of the screen. Commands won't work in this window.

    When I try to access Windows 8, I get a message saying that the EFI file path was not specified, or something along those lines. I had to enable Secure Boot in order to access Windows 8 (I had disabled it to be able to boot from the Live CD), which is functioning normally. I am at a complete loss for what to do. Any help will be extremely appreciated.

    EDIT: Bonus question! If you could figure out a way for me to boot to Windows 8 without having to enable Secure Boot, it would save me a lot of trouble. I can deal with switching every time, but I'd rather not have to.
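
    On a stock 12.04 install the graphical login is managed by lightdm, so these are worth trying from that text console (standard Ubuntu commands; the reinstall line assumes a working network connection):

      sudo service lightdm start                      # start the graphical login screen
      # if that fails, refresh the desktop stack first:
      sudo apt-get update
      sudo apt-get install --reinstall ubuntu-desktop lightdm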

  • Efficient way to find unique elements in a vector compared against multiple vectors

    - by SyncMaster
    I am trying to find the number of unique elements in a vector compared against multiple vectors using C++. Suppose I have:

      v1: 5, 8, 13, 16, 20
      v2: 2, 4, 6, 8
      v3: 20
      v4: 1, 2, 3, 4, 5, 6, 7
      v5: 1, 3, 5, 7, 11, 13, 15

    The number of unique elements in v1 is 1 (i.e. the number 16). I tried two approaches:

    1) Added vectors v2, v3, v4 and v5 into a vector of vectors. For each element in v1, checked if the element is present in any of the other vectors.
    2) Combined all the vectors v2, v3, v4 and v5 using merge sort into a single vector and compared it against v1 to find the unique elements.

    Note: sample_vector = v1 and all_vectors_merged contains v2, v3, v4, v5.

      // Method 1
      unsigned int compute_unique_elements_1(vector<unsigned int> sample_vector,
                                             vector<vector<unsigned int> > all_vectors_merged)
      {
          unsigned int duplicate = 0;
          for (unsigned int i = 0; i < sample_vector.size(); i++) {
              for (unsigned int j = 0; j < all_vectors_merged.size(); j++) {
                  if (std::find(all_vectors_merged.at(j).begin(), all_vectors_merged.at(j).end(),
                                sample_vector.at(i)) != all_vectors_merged.at(j).end()) {
                      duplicate++;
                  }
              }
          }
          return sample_vector.size() - duplicate;
      }

      // Method 2
      unsigned int compute_unique_elements_2(vector<unsigned int> sample_vector,
                                             vector<unsigned int> all_vectors_merged)
      {
          unsigned int unique = 0;
          unsigned int i = 0, j = 0;
          while (i < sample_vector.size() && j < all_vectors_merged.size()) {
              if (sample_vector.at(i) > all_vectors_merged.at(j)) {
                  j++;
              } else if (sample_vector.at(i) < all_vectors_merged.at(j)) {
                  i++;
                  unique++;
              } else {
                  i++;
                  j++;
              }
          }
          if (i < sample_vector.size()) {
              unique += sample_vector.size() - i;
          }
          return unique;
      }

    Of these two techniques, I see that Method 2 gives faster results.

    1) Method 1: Is there a more efficient way to find the elements than running std::find on all the vectors for all the elements in v1?
    2) Method 2: There is extra overhead in comparing vectors v2, v3, v4, v5 and sorting them. How can I do this in a better way?
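
    For comparison, a sketch of the hash-set approach, which avoids both the repeated std::find scans of Method 1 and the sort/merge preprocessing of Method 2 (C++11; assumes the values fit in memory):

      #include <unordered_set>
      #include <vector>

      unsigned int compute_unique_elements_3(const std::vector<unsigned int>& sample_vector,
                                             const std::vector<std::vector<unsigned int> >& all_vectors)
      {
          std::unordered_set<unsigned int> seen;
          for (const auto& v : all_vectors)
              seen.insert(v.begin(), v.end());       // build once: O(total elements)
          unsigned int unique = 0;
          for (unsigned int x : sample_vector)
              if (seen.count(x) == 0)                // O(1) average per lookup
                  ++unique;
          return unique;
      }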

  • What is Causing This Memory Leak in Delphi?

    - by lkessler
    I just can't figure out this memory leak that EurekaLog is reporting for my program. I'm using Delphi 2009. Here it is:

      Memory Leak: Type=Data; Total size=26; Count=1;

    The stack is:

      System.pas   _UStrSetLength    17477
      System.pas   _UStrCat          17572
      Process.pas  InputGedcomFile    1145

    That is all there is in the stack. EurekaLog is pointing me to the location where the memory that was not released was first allocated. According to it, the line in my program is line 1145 of InputGedcomFile. That line is:

      CurStruct0Key := 'HEAD' + Level0Key;

    where CurStruct0Key and Level0Key are simply defined in the procedure as local variables that should be dynamically handled by the Delphi memory manager when entering and leaving the procedure:

      var CurStruct0Key, Level0Key: string;

    So now I look at the _UStrCat procedure in the System unit. Line 17572 is:

      CALL    _UStrSetLength   // Set length of Dest

    and I go to the _UStrSetLength procedure in the System unit, and the relevant lines are:

      @@isUnicode:
          CMP     [EAX-skew].StrRec.refCnt,1    // !!! MT safety
          JNE     @@copyString                  // not unique, so copy
          SUB     EAX,rOff                      // Offset EAX "S" to start of memory block
          ADD     EDX,EDX                       // Double length to get size
          JO      @@overflow
          ADD     EDX,rOff+2                    // Add string rec size
          JO      @@overflow
          PUSH    EAX                           // Put S on stack
          MOV     EAX,ESP                       // to pass by reference
          CALL    _ReallocMem
          POP     EAX
          ADD     EAX,rOff                      // Readjust
          MOV     [EBX],EAX                     // Store
          MOV     [EAX-skew].StrRec.length,ESI
          MOV     WORD PTR [EAX+ESI*2],0        // Null terminate
          TEST    EDI,EDI                       // Was a temp created?
          JZ      @@exit
          PUSH    EDI
          MOV     EAX,ESP
          CALL    _LStrClr
          POP     EDI
          JMP     @@exit

    where line 17477 is the "CALL _ReallocMem" line. So then what is the memory leak? Surely a simple concatenation of a string constant with a local string variable should not be causing a memory leak. Why is EurekaLog pointing me to the ReallocMem line in a _UStrSetLength routine that is part of Delphi? This is Delphi 2009 and I am using the new Unicode strings. Any help or explanation here will be much appreciated.

  • How to add a blank page to a pdf using iTextSharp?

    - by Russell
    I am trying to do something I thought would be quite simple, however it is not so straightforward and Google has not helped. I am using iTextSharp to merge PDF documents (letters) together so they can all be printed at once. If a letter has an odd number of pages I need to append a blank page, so we can print the letters double-sided. Here is the basic code I have at the moment for merging all of the letters:

      // initialise
      MemoryStream pdfStreamOut = new MemoryStream();
      Document document = null;
      MemoryStream pdfStreamIn = null;
      PdfReader reader = null;
      int numPages = 0;
      PdfWriter writer = null;

      for (int i = 0; i < letterList.Count; i++)
      {
          byte[] myLetterData = ...;
          pdfStreamIn = new MemoryStream(myLetterData);
          reader = new PdfReader(pdfStreamIn);
          numPages = reader.NumberOfPages;

          // open the streams to use for the iteration
          if (i == 0)
          {
              document = new Document(reader.GetPageSizeWithRotation(1));
              writer = PdfWriter.GetInstance(document, pdfStreamOut);
              document.Open();
          }

          PdfContentByte cb = writer.DirectContent;
          PdfImportedPage page;
          int importedPageNumber = 0;
          while (importedPageNumber < numPages)
          {
              importedPageNumber++;
              document.SetPageSize(reader.GetPageSizeWithRotation(importedPageNumber));
              document.NewPage();
              page = writer.GetImportedPage(reader, importedPageNumber);
              cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0);
          }
      }

    I have tried using:

      document.SetPageSize(reader.GetPageSizeWithRotation(1));
      document.NewPage();

    at the end of the for loop for an odd number of pages, without success. Any help would be much appreciated!
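
    A likely explanation for the failed attempt (hedged, worth verifying against your iTextSharp version): iTextSharp suppresses pages it considers empty, so a bare NewPage() produces nothing. PdfWriter exposes a PageEmpty property for exactly this case; a sketch of what the end of the loop body might look like:

      // Hypothetical addition at the end of each letter, when its page count is odd.
      if (numPages % 2 == 1)
      {
          document.NewPage();
          writer.PageEmpty = false;   // tell iTextSharp the blank page is intentional
      }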

  • Android beginner: Touch events in an Android GridView

    - by jja
    I am using the following code to do things with a GridView (slightly modified from http://developer.android.com/resources/tutorials/views/hello-gridview.html). I want to replace the OnClickListener and the onClick() method with their "touch" equivalents, i.e. a touch listener and onTouch(), so that when I touch an element in the GridView the image of the element changes, and a double touch on the same element takes it back to the original state. How do I do this? I can't get my code to do this. The click listener works to some extent, but the touch isn't working. Please help.

      public class ImageAdapter extends BaseAdapter {
          private Context mContext;

          public ImageAdapter(Context c) {
              mContext = c;
          }

          public int getCount() {
              return mThumbIds.length;
          }

          public Object getItem(int position) {
              return null;
          }

          public long getItemId(int position) {
              return 0;
          }

          // create a new ImageView for each item referenced by the Adapter
          public View getView(int position, View convertView, ViewGroup parent) {
              ImageView imageView;
              if (convertView == null) {
                  // if it's not recycled, initialize some attributes
                  imageView = new ImageView(mContext);
                  imageView.setLayoutParams(new GridView.LayoutParams(85, 85));
                  imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
                  imageView.setPadding(8, 8, 8, 8);
                  imageView.setOnClickListener(new View.OnClickListener() {
                      @Override
                      public void onClick(View view) {
                          if (position == 0) {
                              // do this
                          } else {
                              // do this
                          }
                      }
                  });
              } else {
                  imageView = (ImageView) convertView;
              }
              imageView.setImageResource(mThumbIds[position]);
              return imageView;
          }

          // references to our images
          private Integer[] mThumbIds = {
              R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4,
              R.drawable.sample_5, R.drawable.sample_6, R.drawable.sample_7,
              R.drawable.sample_0, R.drawable.sample_1, R.drawable.sample_2,
              R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5,
              R.drawable.sample_6, R.drawable.sample_7, R.drawable.sample_0,
              R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3,
              R.drawable.sample_4, R.drawable.sample_5, R.drawable.sample_6,
              R.drawable.sample_7
          };
      }
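
    A sketch of the touch-based equivalent, using the standard View.OnTouchListener (the toggle logic and the drawable chosen for the "touched" state are illustrative; note that position must be declared final in getView for an inner class to reference it):

      imageView.setOnTouchListener(new View.OnTouchListener() {
          @Override
          public boolean onTouch(View v, MotionEvent event) {
              if (event.getAction() == MotionEvent.ACTION_DOWN) {
                  ImageView iv = (ImageView) v;
                  if (iv.getTag() == null) {                      // first touch: swap the image
                      iv.setImageResource(R.drawable.sample_0);   // illustrative drawable
                      iv.setTag(Boolean.TRUE);
                  } else {                                        // second touch: restore it
                      iv.setImageResource(mThumbIds[position]);
                      iv.setTag(null);
                  }
              }
              return true;   // consume the event
          }
      });

    For a true double tap (two touches within the system double-tap timeout) the usual tool is a GestureDetector with onDoubleTap, rather than counting raw ACTION_DOWN events.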

  • Code sign error with Xcode 3.2

    - by quux
    I had a fully working build environment before upgrading to iPhone OS 3.1 and Xcode 3.2. Now when I try to do a build, I get the following:

      Code Sign error: Provisioning profile 'FooApp test' specifies the Application Identifier 'no.fooapp.iphoneapp' which doesn't match the current setting 'TGECMYZ3VK.no.fooapp.iphoneapp'

    The problem is that Xcode somehow manages to think that the "FooApp Test" provisioning profile specifies the Application Identifier "no.fooapp.iphoneapp", but this is not the case. In the Organizer (and in the iPhone developer portal website) the app identifier is correctly seen as 'TGECMYZ3VK.no.fooapp.iphoneapp'. Also, when setting the provisioning profile in the build options at the project level, Xcode correctly identifies the app identifier, but when I go to the target, I'm unable to select any valid provisioning profile. What could be causing this problem?

    Update: I've tried to create a new provisioning profile, but still no luck. I also tried simply changing the app identifier in Info.plist to just "no.fooapp.iphoneapp". The build succeeds, but now I get an error from the Organizer:

      The executable was signed with invalid entitlements. The entitlements specified in your application's Code Signing Entitlements file do not match those specified in your provisioning profile. (0xE8008016).

    This seems reasonable, as the provisioning profile still has the "TGECMYZ3VK.no.fooapp.iphoneapp" application identifier. I also double-checked that all certificates are valid in the Keychain. So my question is how I can get Xcode to see the correct application identifier?

    UPDATE: As noted below, what seems to fix the problem is deleting all provisioning profiles, certificates etc., making new certificates/profiles and installing them again. If anyone has any other solutions, they would be welcome. :)

  • WatiN SelectList Methods - Page not refreshing/actions not being fired after interacting with a select list

    - by Chad M
    Preface: If you don't care about the preface, skip down to the section marked "Question."

    Hi, recently my company has upgraded to the latest version of WatiN for its test automation framework. We upgraded to avoid a problem where interacting with a select list would cause an ACCESS DENIED error. This error seems to be a product of the fact that our web application reloads the page it is on (which sits in a frame which sits in a frameset) with new fields after certain select list options are selected. It could also be that our framework, which wraps around WatiN, often does actions on the same SelectList after the page refresh (I'm still looking into this; I'm new to the framework). The new version of WatiN does solve the ACCESS DENIED error, but also seems to stop select lists from firing the action that causes the page to reload with its new options. In fact, if you use WatiN to make the selection, the select list won't work correctly, even if manually interacted with, until the page has been forced to refresh.

    Question: When selecting an option in a SelectList using the newest WatiN code, the event that causes our web app's page to reload with new fields/values does not execute. What are some possibilities that could cause this? The term I've seen used most often to describe the refreshing that occurs when our select lists are used is "double post-back".

    Many thanks, Chad

  • JQgrid - Get Row Number instead of ID

    - by mariojjsimoes
    Hello all, I am creating a jqGrid from a database table that does not contain a single-field primary key. Therefore, the field I am supplying as id is not unique and the same one exists in several rows. Because of this, when passing a reference to the data with ondblClickRow to a function external to the grid, I need to use the row number and not the id. To test, I'm using ondblClickRow: function(id){alert($("#grid1").getInd('rowid'));}, and I should be getting an alert with the row number, except that it isn't working. I've been over the documentation and can't understand what I am doing wrong... Any help would be greatly appreciated! Thanks in advance, Mario.

    Below is my full grid:

      jQuery(document).ready(function(){
          var mygrid = jQuery("#grid1").jqGrid({
              datatype: 'xmlstring',
              datastr: grid1RsXML,
              width: 1024,
              height: 500,
              colNames: ['DEVICE_ID','JOB_SIZE_IN_BYTES','USER_NAME','HOST_NAME','DAY_OF_WEEK','JOB_ID'],
              colModel: [
                  {name:'DEVICE_ID',index:'DEVICE_ID', width:55, sortable:true},
                  {name:'JOB_SIZE_IN_BYTES',index:'JOB_SIZE_IN_BYTES', width:40, sortable:true},
                  {name:'USER_NAME',index:'USER_NAME', width:60, sortable:true},
                  {name:'HOST_NAME',index:'HOST_NAME', width:50, align:"right", sortable:true},
                  {name:'DAY_OF_WEEK',index:'DAY_OF_WEEK', width:10, sortable:true},
                  {name:'JOB_ID',index:'JOB_ID', width:30, sortable:true}
              ],
              rowNum: 1000,
              autowidth: true,
              //rowList: [10,20,30],
              rowList: [1],
              pager: '#grid1Pager',
              sortname: 'DEVICE_ID',
              viewrecords: true,
              rownumbers: true,
              sortorder: "desc",
              sortable: true,
              gridview: true,
              xmlReader: {
                  root: "recordset",
                  row: "record",
                  repeatitems: false,
                  id: "DEVICE_ID"
              },
              caption: "All Jobs - Double Click for detailed history",
              ondblClickRow: function(id){alert($("#grid1").getInd('rowid'));},
              toolbar: [true, "top"],
              url: grid1RsXML
          });
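
    One thing worth noting (a hedged observation): getInd expects an actual row id, not the literal string 'rowid', so the handler can simply pass along the id argument that jqGrid supplies:

      ondblClickRow: function(id) {
          // getInd(rowid) returns the 1-based index of the row with that id
          var rowNum = jQuery("#grid1").getInd(id);
          alert(rowNum);
      },

    With duplicate ids this still resolves to the first matching row, so generating a truly unique id (JOB_ID, or a synthetic counter) remains the more robust fix.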

  • SSL certificate on IIS 7

    - by comii
    I am trying to install an SSL certificate on IIS 7. I have downloaded a free trial certificate. After that, these are the steps I follow:

      1. Click the Start menu and select Administrative Tools.
      2. Start Internet Services Manager and click the Server Name.
      3. In the center section, double click on the Server Certificates button in the Security section.
      4. From the Actions menu click Complete Certificate Request.
      5. Enter the location for the certificate file.
      6. Enter a Friendly name. Click OK.
      7. Under Sites select the site to be secured with the SSL certificate.
      8. From the Actions menu, click Bindings. This will open the Site Bindings window.
      9. In the Site Bindings window, click Add. This opens the Add Site Binding window.
      10. Select https from the Type menu. Set the port to 443.
      11. Select the SSL certificate you just installed from the SSL Certificate menu. Click OK.

    This is the step where I get the message:

      One or more intermediate certificates in the certificate chain are missing. To resolve this issue, make sure that all of the intermediate certificates are installed. For more information, see http://support.microsoft.com/kb/954755

    After this, when I access the web site on its first page, I get this message: "There is a problem with this website's security certificate." What am I doing wrong?
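
    The warning means what it says: the CA's intermediate certificate is not in the server's Intermediate Certification Authorities store. Assuming the CA supplied it as a .cer/.crt file, one way to install it from an elevated command prompt (certutil is built into Windows; the file name here is illustrative):

      certutil -addstore CA intermediate.cer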

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL) and we've set up scripts to allow new slave instances to be auto-included into the clusters by updating their IPs in config files held on S3, then SSHing to the master instance to kick it to download the revised config and restart the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it needs to be stored on an instance. Not good.

    I want a non-root user that can use an AWS keypair and who will have sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

      - New AWS keypair created for user.pem (not really called 'user')
      - New user on instances: user
      - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
      - Private key for user is in ~user/.ssh/user.pem
      - 'user' has login shell of /home/user/bin/rbash
      - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
      - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
      - ~user/.bashrc sets PATH to /home/user/bin/ only
      - ~user/.inputrc has 'set disable-completion on' to prevent double tabbing from 'sudo /' to find paths
      - ~user/ -R is owned by root with read-only access to user, except for ~user/.ssh which has write access for user (for writing known_hosts), and ~user/bin/* which are +x

    Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '. Any thoughts would be welcome. Mark...

  • OutOfMemoryError when running jar but not when running in NetBeans / Apache POI

    - by Laughy
    I basically have a program that filters records from one Excel file to another Excel file using Apache POI. My program runs fine when run from NetBeans. However, upon doing a clean and build and double clicking the .jar file inside the dist folder, it runs for very long (too long!) and gives me the following error (that I got by running the program from the command prompt). Is there any workaround for it? I have already increased the heap size to -Xms1500m inside NetBeans before cleaning and building.

      Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
          at org.apache.xmlbeans.impl.store.Saver$TextSaver.resize(Saver.java:1592)
          at org.apache.xmlbeans.impl.store.Saver$TextSaver.preEmit(Saver.java:1223)
          at org.apache.xmlbeans.impl.store.Saver$TextSaver.emit(Saver.java:1144)
          at org.apache.xmlbeans.impl.store.Saver$TextSaver.emitElement(Saver.java:926)
          at org.apache.xmlbeans.impl.store.Saver.processElement(Saver.java:456)
          at org.apache.xmlbeans.impl.store.Saver.process(Saver.java:307)
          at org.apache.xmlbeans.impl.store.Saver$TextSaver.saveToString(Saver.java:1727)
          at org.apache.xmlbeans.impl.store.Cursor._xmlText(Cursor.java:546)
          at org.apache.xmlbeans.impl.store.Cursor.xmlText(Cursor.java:2436)
          at org.apache.xmlbeans.impl.values.XmlObjectBase.xmlText(XmlObjectBase.java:1455)
          at org.apache.poi.xssf.model.SharedStringsTable.getKey(SharedStringsTable.java:130)
          at org.apache.poi.xssf.model.SharedStringsTable.addEntry(SharedStringsTable.java:176)
          at org.apache.poi.xssf.usermodel.XSSFCell.setCellType(XSSFCell.java:755)
          at equity.EquityFrame_Updated.copyRowsFromOldToNew(EquityFrame_Updated.java:646)
          at equity.EquityFrame_Updated.init(EquityFrame_Updated.java:133)
          at equity.EquityFrame_Updated.createAndShowGUI(EquityFrame_Updated.java:71)
          at equity.EquityFrame_Updated.<init>(EquityFrame_Updated.java:50)
          at equity.FileOpener.generateButtonPressed(FileOpener.java:160)
          at equity.FileOpener.access$100(FileOpener.java:17)
          at equity.FileOpener$2.actionPerformed(FileOpener.java:61)
          at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
          at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
          at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
          at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
          at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source)
          at java.awt.Component.processMouseEvent(Unknown Source)
          at javax.swing.JComponent.processMouseEvent(Unknown Source)
          at java.awt.Component.processEvent(Unknown Source)
          at java.awt.Container.processEvent(Unknown Source)
          at java.awt.Component.dispatchEventImpl(Unknown Source)
          at java.awt.Container.dispatchEventImpl(Unknown Source)
          at java.awt.Component.dispatchEvent(Unknown Source)
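
    A heap size set in the NetBeans run configuration only applies to runs launched from the IDE; a double-clicked jar starts a JVM with default settings. Launching the built jar explicitly with a larger heap reproduces the IDE behaviour (the path is illustrative):

      java -Xmx1500m -jar dist/MyProgram.jar

    Note that -Xmx raises the maximum heap, while -Xms only sets the initial size. For large spreadsheets, later POI releases also offer the streaming SXSSF API, which avoids holding the whole workbook in memory.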

  • UITextField resignFirstResponder not working?

    - by Tejaswi Yerukalapudi
    I've double-checked all the connections in the nib file. My code:

      // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
      - (void)viewDidLoad {
          self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"iphone_bg_login.png"]];
          self.title = @"Login screen";
          loginTxt = [[UITextField alloc] init];
          pwdText = [[UITextField alloc] init];
          loginFailedTxt = [[UILabel alloc] init];
          loginBtn = [[UIButton alloc] init];
          navAppDelegate = (NavAppDelegate *)[[UIApplication sharedApplication] delegate];
          navAppDelegate.navController.navigationBarHidden = YES;
          //NSArray *subVs = (NSArray *) [self.view subviews];
          [super viewDidLoad];
      }

    I've used a subclass of UIView (UIControl) and added all the UI elements to it in Interface Builder. The UIControl's touchDown method is connected to the backgroundTap method:

      - (IBAction)backgroundTap:(id)sender {
          [loginTxt resignFirstResponder];
          [pwdText resignFirstResponder];
          //[[UIApplication sharedApplication] becomeFirstResponder];
          //[sender resignFirstResponder];
      }

    So the keyboard isn't removed like it's supposed to be. Not sure why. Thanks for the help! Teja.
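
    One detail worth flagging (a hedged observation, since it depends on how the outlets are declared): viewDidLoad allocates brand-new UITextField instances into loginTxt and pwdText, so the resignFirstResponder calls go to off-screen objects rather than the nib-created fields that actually own the keyboard. A sketch of viewDidLoad relying on the nib instead:

      - (void)viewDidLoad {
          [super viewDidLoad];
          self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:
                                          [UIImage imageNamed:@"iphone_bg_login.png"]];
          self.title = @"Login screen";
          // no alloc/init for loginTxt, pwdText, etc. -- the nib already created
          // them and connected them to these outlets
          navAppDelegate = (NavAppDelegate *)[[UIApplication sharedApplication] delegate];
          navAppDelegate.navController.navigationBarHidden = YES;
      }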

  • NSString variable out of scope in sub-class (iPhone/Obj-C)

    - by Rich
    I am following along with an example in a book, with the code exactly as it is in the example code as well as in the book, and I'm getting an error at runtime. I'll try to describe the life cycle of this variable as well as I can.

    I have a controller class with a nested array that is populated with string literals (an NSArray of NSArrays, the nested NSArrays initialized with arrayWithObjects: where the objects are all string literals - @"some string"). I access these strings with a helper method added via a category on NSArray (to pull strings out of a nested array). My controller gets a reference to this string and assigns it to an NSString property on a child controller using dot notation. The code looks like this (nestedObjectAtIndexPath is my helper method):

      NSString *rowKey = [rowKeys nestedObjectAtIndexPath:indexPath];
      controller.keypath = rowKey;

    keypath is a synthesized nonatomic, retain property defined in a base class. When I hit a breakpoint in the controller at the above code, the NSString's value is as expected. When I hit the next breakpoint inside the child controller, the object id of the keypath property is the same as before, but instead of showing me the value of the NSString, Xcode says that the variable is "out of scope", which is also the error I see in the console. This also happens in another subclass of the same parent.

    I tried googling, and I saw slightly similar cases where people were suggesting this had to do with retain counts. I was under the impression that by using dot notation on a synthesized property, my class would be using an "auto generated accessor" that would be increasing my retain count for me, so that I wouldn't have this problem. Could there be any implications because I'm accessing it in a subclass and the property is defined in the parent? I don't see anything in the book's errata about this, but the book is relatively new (Apress - More iPhone 3 Dev). I also have double-checked that my code matches the example 100 times.

  • Cannot debug views in MVC2 project, getting "The resource cannot be found" error

    - by schefdev
    I'm running Visual Studio 2008 SP1 on Win7, with MVC2 RTM installed. I created a new MVC2 project using the wizard and am unable to debug specific pages. With WebForms and even MVC1, I was able to sit on a View page, hit F5, and then have the integrated web server in VS2008 start on the page I was working on. Very handy for building up app logic. When I try this now I get a "The resource cannot be found" error page.

    I retried this just now with a stock new MVC2 Web Application project. Here are the steps I took after creating the new project to reproduce:

      1. Open up project settings.
      2. Under the Web subtab, set the Start Action to "Current Page". Leave all the other settings as is.
      3. Open one of the views up (e.g. Account/Register.aspx).
      4. Hit F5 to debug the project.
      5. Note that the browser window which displays shows the error message "The resource cannot be found".

    The link I saw in my browser for this run was: http://localhost:49471/Views/Account/Register.aspx

    I did some googling and found suggestions related to ensuring all HTTP server pieces were installed. I double-checked and made sure that "HTTP Errors" and "HTTP Redirection" were both installed. If I leave the project setting as it was originally, set to "Specific Page" with nothing in the text box, then routing works and I always get the default home page. I'm hoping this isn't the only option. Thanks!

  • iOS 7 app crashing when in background

    - by Leonardo
    My app sometimes crashes when in the background and shows the following crash log:

      Nov 7 12:33:31 iPad backboardd[29] <Warning>: MyApp[3096] has active assertions beyond permitted time: {(
          <BKProcessAssertion: 0x14680c60> identifier: Called by MyApp, from -[AppDelegate applicationDidEnterBackground:]
          process: MyApp[3096] permittedBackgroundDuration: 180.000000 reason: finishTask owner pid:3096
          preventSuspend preventIdleSleep preventSuspendOnSleep
      )}

    Looking through other questions I found out that the crash message indicates that I didn't end a task properly, so when its time expired the OS ended it and crashed my app. So I added some NSLogs:

      - (void)applicationDidEnterBackground:(UIApplication *)application {
          [self saveContext];
          [Settings setCallLock:YES];
          [self saveStuff];
          if ([self isBackgroundTaskNeeded]) {
              UIApplication *app = [UIApplication sharedApplication];
              bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
                  [self pauseDownloads];
                  NSLog(@"Debug - Ending background task %d", bgTask);
                  [app endBackgroundTask:bgTask];
                  NSLog(@"Debug - Background task %d ended", bgTask);
                  bgTask = UIBackgroundTaskInvalid;
              }];
              NSLog(@"Debug - Starting background task %d", bgTask);
              [self initBackground];
          }
      }

    and found out that the task was started twice when it crashed:

      Nov 7 12:30:02 iPad MyApp[3096] <Warning>: Debug - Starting background task 7
      Nov 7 12:30:30 iPad MyApp[3096] <Warning>: Debug - Starting background task 8
      Nov 7 12:33:26 iPad MyApp[3096] <Warning>: Debug - Ending background task 8
      Nov 7 12:33:26 iPad MyApp[3096] <Warning>: Debug - Background task 8 ended
      Nov 7 12:33:26 iPad MyApp[3096] <Warning>: Debug - Ending background task 0
      Nov 7 12:33:26 iPad MyApp[3096] <Warning>: Debug - Background task 0 ended

    I wasn't able to tell when the double call happens and why. So the question is: why is the task being started twice, and is there a way to avoid it?

    Edit:

      - (void)pauseDownloads {
          for (AFDownloadRequestOperation *operation in self.operationQueue.operations) {
              if (![operation isPaused]) {
                  [operation pause];
              }
          }
      }
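
    The log shows a second beginBackgroundTask while task 7 is still alive and never ended; since only the expiration handler ever calls endBackgroundTask, any task whose work finishes early keeps its assertion until the 180 s limit kills the app. A sketch of the usual pattern (standard UIKit API; the key points are ending any leftover task before starting a new one, and ending the task when the work actually completes):

      - (void)applicationDidEnterBackground:(UIApplication *)application {
          UIApplication *app = [UIApplication sharedApplication];
          if (bgTask != UIBackgroundTaskInvalid) {   // end a leftover task first
              [app endBackgroundTask:bgTask];
              bgTask = UIBackgroundTaskInvalid;
          }
          bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
              [self pauseDownloads];
              [app endBackgroundTask:bgTask];
              bgTask = UIBackgroundTaskInvalid;
          }];
          [self initBackground];
          // ...and wherever the background work finishes:
          // [app endBackgroundTask:bgTask]; bgTask = UIBackgroundTaskInvalid;
      }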

  • Windows Azure : Storage Client Exception Unhandled

    - by veda
    I am writing code to upload large files into blobs using blocks... When I tested it, it gave me a StorageClientException. It stated:

      One of the request inputs is out of range.

    I got this exception on this line of the code:

      blob.PutBlock(block, ms, null);

    Here is my code:

      protected void ButUploadBlocks_click(object sender, EventArgs e)
      {
          // store uploaded file as a blob storage
          if (uplFileUpload.HasFile)
          {
              name = uplFileUpload.FileName;
              byte[] byteArray = uplFileUpload.FileBytes;
              Int64 contentLength = byteArray.Length;
              int numBytesPerBlock = 250 * 1024; // 250KB per block
              int blocksCount = (int)Math.Ceiling((double)contentLength / numBytesPerBlock); // number of blocks
              MemoryStream ms;
              List<string> BlockIds = new List<string>();
              string block;
              int offset = 0;

              // get reference to the cloud blob container
              CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

              // set the name for the uploading files
              string UploadDocName = name;

              // get the blob reference and set the metadata properties
              CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
              blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

              for (int i = 0; i < blocksCount; i++, offset = offset + numBytesPerBlock)
              {
                  block = Convert.ToBase64String(BitConverter.GetBytes(i));
                  ms = new MemoryStream();
                  ms.Write(byteArray, offset, numBytesPerBlock);
                  blob.PutBlock(block, ms, null);
                  BlockIds.Add(block);
              }

              blob.PutBlockList(BlockIds);
              blob.Metadata["FILETYPE"] = "text";
          }
      }

    Can anyone tell me how to solve it?
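
    Two things in that loop are plausible culprits (a hedged diagnosis): the MemoryStream is written but never rewound, so PutBlock reads from the end of the stream, and the final iteration asks ms.Write for numBytesPerBlock bytes even when fewer remain in byteArray. A sketch of a corrected loop body:

      for (int i = 0; i < blocksCount; i++, offset += numBytesPerBlock)
      {
          block = Convert.ToBase64String(BitConverter.GetBytes(i));
          // only as many bytes as actually remain in the final block
          int blockSize = (int)Math.Min(numBytesPerBlock, contentLength - offset);
          // wrapping the array slice directly leaves the stream position at 0
          using (var blockStream = new MemoryStream(byteArray, offset, blockSize))
          {
              blob.PutBlock(block, blockStream, null);
          }
          BlockIds.Add(block);
      }

    Using BitConverter.GetBytes(i) keeps every encoded block ID the same length, which the blob service also requires.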

  • Calling Matlab's MLApp.MLAppClass.FEval from F#

    - by Matthew
    Matlab provides a COM interface that supports remote execution of arbitrary functions (and code snippets). In particular, it has an Feval method that calls a given Matlab function. The third parameter to this method, pvarArgOut, has COM type VARIANT*, and appears in the Visual Studio F# editor as an argument of type:

      pvarArgOut: byref<obj>

    The following code calls interp1, which in Matlab returns a matrix (i.e. 2D double array) result, as is normal for most Matlab functions.

      let matlab = new MLApp.MLAppClass()

      let vector_to_array2d (v : vector) =
          Array2D.init v.Length 1 (fun i j -> v.[i])

      let interp1 (xs : vector) (ys : vector) (xi : vector) =
          let yi : obj = new obj()
          matlab.Feval("interp1", 1, ref yi, vector_to_array2d xs, vector_to_array2d ys, vector_to_array2d xi)
          yi :?> float[,]

    This code compiles fine, but when calling interp1, I get a COM exception:

      A first chance exception of type 'System.Reflection.TargetInvocationException' occurred in mscorlib.dll
      A first chance exception of type 'System.Runtime.InteropServices.COMException' occurred in mscorlib.dll
      An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in mscorlib.dll
      Additional information: Invalid callee. (Exception from HRESULT: 0x80020010 (DISP_E_BADCALLEE))

    I get the same error whether I initialize yi with a new obj, a new Array2D, or null. How does F# translate VARIANT output arguments?
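
    One pattern worth trying (a sketch, not a verified fix): for a COM byref<obj> out-parameter, pass the address of a mutable local with & rather than wrapping a value in a ref cell, and let COM allocate the VARIANT contents:

      let interp1' (xs : vector) (ys : vector) (xi : vector) =
          let mutable yi : obj = null   // COM fills this in
          matlab.Feval("interp1", 1, &yi,
                       vector_to_array2d xs, vector_to_array2d ys, vector_to_array2d xi)
          yi   // inspect in the debugger; Feval may wrap its outputs in an obj[]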

  • jerky animation using Core Animation

    - by Paul from Boston
    In my iPhone app, I am trying to get some red and white stripes that are scrolling across the screen to animate smoothly when the speed of the stripes gets high. In my app the user starts the animation and changes the scrolling speed by a finger swipe, and changes the width of the stripes by a two-finger pinch. Animation is stopped in response to a double tap. If the speed gets high or the stripes get narrow, the animation is no longer smooth to the eye. The edges of the stripes seem to jump around.

    The animation is simple. I draw the stripes in a layer that's a bit larger than the screen. I then set up an animation that moves the layer position by exactly the distance from one red stripe to the next. The duration is set by the speed of the finger swipe and the repeat count is 1. When the animation stops, the delegate checks a flag to see if the user wants to stop the scrolling. If not, the animation is restarted again for one cycle.

    Are there better ways of doing this so that the animation is smooth at high speeds or with narrow stripes? Thanks--

  • Error when adding code behind for Silverlight resource dictionary: AG_E_PARSER_BAD_TYPE

    - by rwwilden
    Hi, it should be possible to add a code-behind file for a resource dictionary in Silverlight, but I keep getting the same error, thrown from the InitializeComponent method of my App.xaml constructor: XamlParseException: AG_E_PARSER_BAD_TYPE. The resource dictionary XAML file looks like this:

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                          x:Class="Celerior.Annapurna.SL.ProvisiorResourceDictionary"
                          x:ClassModifier="public">
        ...
      </ResourceDictionary>

    If I remove the x:Class attribute everything works fine again (of course, I double-checked the class name and it's correct). My App.xaml file isn't really exciting and just contains a reference to the resource dictionary:

      <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                   x:Class="Celerior.Annapurna.SL.App">
        <Application.Resources>
          <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
              <ResourceDictionary Source="ProvisiorResourceDictionary.xaml"/>
            </ResourceDictionary.MergedDictionaries>
          </ResourceDictionary>
        </Application.Resources>
      </Application>

    What am I doing wrong? Kind regards, Ronald Wildenberg

  • rand() generating the same number – even with srand(time(NULL)) in my main!

    - by Nick Sweet
    So, I'm trying to create a random vector (think geometry, not an expandable array), and every time I call my random vector function I get the same x value, though y and z are different.

      int main() {
          srand((unsigned)time(NULL));
          Vector<double> a;
          a.randvec();
          cout << a << endl;
          return 0;
      }

    using the function:

      // random Vector
      template <class T>
      void Vector<T>::randvec() {
          const int min = -10, max = 10;
          int randx, randy, randz;
          const int bucket_size = RAND_MAX / (max - min);

          do randx = (rand() / bucket_size) + min;
          while (randx <= min && randx >= max);
          x = randx;

          do randy = (rand() / bucket_size) + min;
          while (randy <= min && randy >= max);
          y = randy;

          do randz = (rand() / bucket_size) + min;
          while (randz <= min && randz >= max);
          z = randz;
      }

    For some reason, randx will consistently return 8, whereas the other numbers seem to be following the (pseudo)randomness perfectly. However, if I put the call to define, say, randy before randx, randy will always return 8. Why is my first random number always 8? Am I seeding incorrectly?
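
    Two observations on this code: the condition (randx <= min && randx >= max) can never be true, so each do-while runs exactly once; and because rand() / bucket_size keeps only the generator's high-order bits, the first value after srand(time(NULL)) changes very slowly as the seed ticks up by seconds, which is why the first coordinate looks stuck. A sketch of the same function using C++11 <random>, which sidesteps both issues (assumes a C++11 compiler):

      #include <random>

      template <class T>
      void Vector<T>::randvec() {
          // seeded once from the OS, reused across calls
          static std::mt19937 gen{std::random_device{}()};
          std::uniform_int_distribution<int> dist(-10, 10);   // inclusive bounds
          x = dist(gen);
          y = dist(gen);
          z = dist(gen);
      }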

  • [SOLVED] BitmapFactory.decodeStream returning null when options are set.

    - by Robert Foss
    Hi! I'm having issues with BitmapFactory.decodeStream(inputStream). When using it without options, it will return an image. But when I use it with options, as in decodeStream(inputStream, null, options), it never returns Bitmaps. What I'm trying to do is to downsample a bitmap before I actually load it, to save memory. I've read some good guides, but none using decodeStream. Here:

      httpIM:NOT//nornalbion.SPAMcom/blog/?p=143
      httpIM:NOT//kfb-android.blogspot.SPAMcom/2009/04/image-processing-in-android.html

    WORKS JUST FINE:

      URL url = new URL(sUrl);
      HttpURLConnection connection = (HttpURLConnection) url.openConnection();
      InputStream is = connection.getInputStream();
      Bitmap img = BitmapFactory.decodeStream(is);

    DOESN'T WORK:

      InputStream is = connection.getInputStream();
      Bitmap img = BitmapFactory.decodeStream(is, null, options);

      InputStream is = connection.getInputStream();
      Options options = new BitmapFactory.Options();
      options.inJustDecodeBounds = true;
      BitmapFactory.decodeStream(is, null, options);

      Boolean scaleByHeight = Math.abs(options.outHeight - TARGET_HEIGHT) >=
                              Math.abs(options.outWidth - TARGET_WIDTH);

      if (options.outHeight * options.outWidth * 2 >= 200*100*2) {
          // Load, scaling to smallest power of 2 that'll get it <= desired dimensions
          double sampleSize = scaleByHeight
              ? options.outHeight / TARGET_HEIGHT
              : options.outWidth / TARGET_WIDTH;
          options.inSampleSize = (int)Math.pow(2d, Math.floor(Math.log(sampleSize)/Math.log(2d)));
      }

      // Do the actual decoding
      options.inJustDecodeBounds = false;
      Bitmap img = BitmapFactory.decodeStream(is, null, options);
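
    A common explanation for this symptom (hedged, but it fits the code above): the inJustDecodeBounds pass consumes the HTTP stream, so the second decodeStream call reads from an exhausted stream and returns null. A sketch that buffers the bytes once and decodes twice from memory, using standard Android APIs:

      // Read the connection once into a byte array, then decode twice from it.
      java.io.ByteArrayOutputStream buf = new java.io.ByteArrayOutputStream();
      byte[] chunk = new byte[8192];
      int n;
      while ((n = is.read(chunk)) != -1) buf.write(chunk, 0, n);
      byte[] data = buf.toByteArray();

      BitmapFactory.Options options = new BitmapFactory.Options();
      options.inJustDecodeBounds = true;                       // sizes only, no pixels
      BitmapFactory.decodeByteArray(data, 0, data.length, options);
      // ... compute options.inSampleSize from outWidth/outHeight as above ...
      options.inJustDecodeBounds = false;
      Bitmap img = BitmapFactory.decodeByteArray(data, 0, data.length, options);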

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge library framework and its neural network. At the moment, when I train my network I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale this down to a smaller size (10x10 px) and then save it to my hard disk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a pixel basis.

    Once I have successfully trained my network, I test the network and let it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network until RunEpoch had an error of about 1.5 (so almost no error), but there are still some letters left that do not get identified correctly in my test image.

    Now my question is: Is this caused by a faulty learning method (pixel-based vs. the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx - are there other methods I can use to extract the data for the network?), or can this happen because my segmentation algorithm for extracting the letters from the image is bad? Does anyone have ideas on how to improve it?
