Search Results

Search found 21960 results on 879 pages for 'program termination'.


  • Unable to connect to MS Access database through JDBC on Win 7 64-bit

    - by Ninad
    Hello. I've been trying to connect to an MS Access 2007 database through JDBC. My JDK is 1.6u18 64-bit and my OS is Windows 7 64-bit. The problem is that I am unable to create a DSN using Windows\system32\odbcad32.exe, because it doesn't show ODBC drivers for MS Access at all; it only shows drivers for MS SQL Server. When I try to click Configure for "MS Access Database" (which is an already created DSN, I guess), it first shows this error message:

        The setup routines for the Microsoft Access Drivers (*.mdb, *.accdb) ODBC Driver could not be found. Please reinstall the driver.

    and then another message:

        Errors found! The specified DSN contains an architecture mismatch between the Driver and Application.

    I cannot reinstall MDAC, as it doesn't work with Windows 7 (which comes with its own WDAC). The odbcad32.exe in Windows\SysWOW64 does let me create a DSN for MS Access; it shows the drivers installed properly. However, when I try to connect to that DSN through a Java program, I get the following exception:

        java.sql.SQLException: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
            at sun.jdbc.odbc.JdbcOdbcDriver.connect(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at AccessTest.main(AccessTest.java:19)

    What might be the problem, and what do I have to do to get it working? My OS as well as my JDK are 64-bit; can't I connect to an Access 2007 database, which I presume is 32-bit? Any help would be highly appreciated. Also, in case anyone thinks this is not the right place for this question, I apologize in advance; please guide me to the appropriate forum. Another option would be to find a third-party JDBC driver for MS Access, but I do need to know what's wrong with my configuration. :-/

    PS: I know there are many better databases available out there, but for a few unfortunate reasons I have to use MS Access, and I have to get it working.

  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure method of doing backups for programmers who do research & development at home and cannot afford to lose any work? Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. An internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report: this is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: these scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable, transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives.

    Other possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g. compress your work/backup directory and mail it to your gmail account.

    Strategies: see crazyscot's answer.

  • Regular expression to validate email in C

    - by Liju Mathew
    Hi, we need to write an email validation program in C. We are planning to use GNU C (regex.h) regular expressions. The regular expression we prepared is:

        [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?

    But the code below fails while compiling the regex:

        #include <stdio.h>
        #include <regex.h>

        int main(const char *argv, int argc)
        {
            const char *reg_exp = "[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?";
            int status = 1;
            char email[71];
            regex_t preg;
            int rc;

            printf("The regex = %s\n", reg_exp);
            rc = regcomp(&preg, reg_exp, REG_EXTENDED|REG_NOSUB);
            if (rc != 0) {
                if (rc == REG_BADPAT || rc == REG_ECOLLATE)
                    fprintf(stderr, "Bad Regex/Collate\n");
                if (rc == REG_ECTYPE)
                    fprintf(stderr, "Invalid Char\n");
                if (rc == REG_EESCAPE)
                    fprintf(stderr, "Trailing \\\n");
                if (rc == REG_ESUBREG || rc == REG_EBRACK)
                    fprintf(stderr, "Invalid number/[] error\n");
                if (rc == REG_EPAREN || rc == REG_EBRACE)
                    fprintf(stderr, "Paren/Bracket error\n");
                if (rc == REG_BADBR || rc == REG_ERANGE)
                    fprintf(stderr, "{} content invalid/Invalid endpoint\n");
                if (rc == REG_ESPACE)
                    fprintf(stderr, "Memory error\n");
                if (rc == REG_BADRPT)
                    fprintf(stderr, "Invalid regex\n");
                fprintf(stderr, "%s: Failed to compile the regular expression:%d\n", __func__, rc);
                return 1;
            }
            while (status) {
                fgets(email, sizeof(email), stdin);
                status = email[0] - 48;
                rc = regexec(&preg, email, (size_t)0, NULL, 0);
                if (rc == 0) {
                    fprintf(stderr, "%s: The regular expression is a match\n", __func__);
                } else {
                    fprintf(stderr, "%s: The regular expression is not a match: %d\n", __func__, rc);
                }
            }
            regfree(&preg);
            return 0;
        }

    The regex compilation fails with the following error:

        The regex = [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?
        Invalid regex
        main: Failed to compile the regular expression:13

    What is the cause of this error? Does the regex need to be modified? Thanks, Mathew Liju
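
    The error code sheds some light here: on glibc, regcomp() return value 13 is REG_BADRPT ("repetition-operator operand invalid"), which matches the "Invalid regex" branch above. POSIX extended regular expressions do not support Perl-style non-capturing groups "(?:...)", so the '?' directly after '(' is parsed as a repetition operator with nothing to repeat. Below is a minimal sketch, under the assumption that plain groups "(...)" and properly escaped dots ("\\." in a C string literal) are an acceptable substitute; it is not a full validator.

        #include <regex.h>
        #include <stdio.h>

        int main(void)
        {
            /* Same pattern as above, with "(?:" replaced by "(" and the
             * literal dots escaped. */
            const char *reg_exp =
                "[a-z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*"
                "@([a-z0-9]([a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9]([a-z0-9-]*[a-z0-9])?";
            regex_t preg;
            int rc = regcomp(&preg, reg_exp, REG_EXTENDED | REG_NOSUB);

            printf("regcomp returned %d\n", rc);  /* expected: 0 */
            if (rc == 0)
                regfree(&preg);
            return rc;
        }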

  • Git sh.exe process forking issue on Windows XP, slow?

    - by AndyL
    Git is essential to my workflow. I run MSYS Git on Windows XP on my quad-core machine with 3 GB of RAM, and normally it is responsive and zippy. Suddenly an issue has cropped up whereby it takes 30 seconds to run any command from the Git Bash command prompt, including ls or cd. Interestingly, from the bash prompt it looks like ls runs fairly quickly (I can then see the output from ls), but it then takes ~30 seconds for the prompt to return. If I switch to the Windows command prompt (by running cmd from the Start menu), git-related commands also take forever, even just to run. For example, git status can take close to a minute before anything happens. Sometimes the processes simply don't finish.

    Note that I have "MSYS Git" installed as well as regular "MSYS" for things like MinGW and make. I believe the problem is related to sh.exe, located in C:\Program Files\Git\bin. When I run ls from the bash prompt, or when I invoke git from the Windows prompt, Task Manager shows up to four instances of sh.exe processes that come and go. Here I am waiting for ls to return, and you can see Task Manager has git.exe running and four instances of sh.exe. [screenshot omitted]

    If I Ctrl-C in the middle of an ls, I sometimes get errors that include:

        sh.exe": fork: Resource temporarily unavailable
        0 [main] sh.exe" 1624 proc_subproc: Couldn't duplicate my handle<0x6FC> for pid 6052, Win32 error 5
        sh.exe": fork: Resource temporarily unavailable

    Or for git status:

        $ git status
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable

    Can I fix this so that git runs quickly again, and if so, how? Things I have tried:

    - Reboot
    - Upgrade MSYS Git to the most recent version & reboot
    - Upgrade MSYS to the most recent version & reboot
    - Uninstall MSYS & uninstall and reinstall MSYS Git alone & reboot

    I'd very much like not to wipe my box and reinstall Windows, but I will if I can't get this fixed. I can no longer code if it takes me 30 s to run git status or cd.

  • stdio's remove() not always deleting on time.

    - by Kyte
    For a particular piece of homework, I'm implementing a basic data storage system using sequential files under standard C, which cannot load more than 1 record at a time. The basic part is creating a new file where the results of whatever we do with the original records are stored; the previous file is renamed, and a new one under the working name is created. The code is compiled with MinGW 5.1.6 on Windows 7.

    The problem is that this particular version of the code (I've got nearly identical versions of it floating around my functions) doesn't always remove the old file, so the rename fails and hence the stored data gets wiped by the fopen():

        FILE *archivo, *antiguo;
        remove("IndiceNecesidades.old");                          // This randomly fails to work in time.
        rename("IndiceNecesidades.dat", "IndiceNecesidades.old"); // So rename() fails.
        antiguo = fopen("IndiceNecesidades.old", "rb");           // But apparently it still gets deleted, since this turns
                                                                  // out null (and I never find the .old in my working
                                                                  // folder after the program's done).
        archivo = fopen("IndiceNecesidades.dat", "wb");           // And here the data gets wiped.

    Basically, any time the .old previously exists, there's a chance it's not removed in time for the rename() to take effect successfully. There are no possible name conflicts, either internally or externally. The weird thing is that it happens only with this particular file. Identical snippets, except with the name changed to Necesidades.dat (which appear in 3 different functions), work perfectly fine:

        // I'm yet to see this snippet fail.
        FILE *antiguo, *archivo;
        remove("Necesidades.old");
        rename("Necesidades.dat", "Necesidades.old");
        antiguo = fopen("Necesidades.old", "rb");
        archivo = fopen("Necesidades.dat", "wb");

    Any ideas on why this would happen, and/or how I can ensure the remove() command has taken effect by the time rename() is executed? (I thought of just using a while loop to keep calling remove() as long as fopen() returns a non-null pointer, but that sounds like begging for a crash due to flooding the OS with delete requests or something.)
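
    A minimal diagnostic sketch is below. The assumption, worth verifying, is that remove() and rename() are not deferred on Windows: they fail immediately, reporting through their return value and errno, if the target is still open in this or another process (for example, if an earlier pass never fclose()d its handle to the .old file). Checking the return values makes the failure visible instead of silent:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        /* Rotate dat -> old, reporting exactly which step failed and why. */
        static int rotate_files(const char *dat, const char *old)
        {
            if (remove(old) != 0 && errno != ENOENT) {
                /* EACCES here usually means a FILE* to 'old' is still open. */
                fprintf(stderr, "remove(%s): %s\n", old, strerror(errno));
                return -1;
            }
            if (rename(dat, old) != 0) {
                fprintf(stderr, "rename(%s -> %s): %s\n", dat, old, strerror(errno));
                return -1;
            }
            return 0;
        }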

  • Why can't I render objects to texture properly using FBO?

    - by Brett
    Hello, I'm trying to implement a simple program using 3 FBOs to render a scene to a texture and display a textured quad. I've successfully done this previously using fragment shaders before projecting to the textured quad, but I can't get it to work without a shader. What could be wrong?

    First I set up my textures:

        glGenTextures(3, renderTextureID);
        for (GLint i = 0; i < 3; i++)
        {
            glBindTexture(GL_TEXTURE_2D, renderTextureID[i]);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            // this may change with window size changes
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        }

    Next, some framebuffer state:

        glGenFramebuffersEXT(3, framebufferID);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[0]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[0], 0);
        GLenum fboStatus = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        if (fboStatus != GL_FRAMEBUFFER_COMPLETE_EXT) {
            fprintf(stderr, "FBO Error!");
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[1]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[1], 0);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[2]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, renderTextureID[2], 0);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    Then I render the scene:

        glViewport(0, 0, fboWidth, fboHeight);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[0]);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawModels();
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);

        glEnable(GL_SCISSOR_TEST);
        glViewport(windowWidth/2, 0, fboWidth, fboHeight);
        glScissor(windowWidth/2, 0, fboWidth, fboHeight);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_QUADS);
            glTexCoord2i(0, 0); glVertex2f(-1.0f, -1.0f);
            glTexCoord2i(1, 0); glVertex2f( 1.0f, -1.0f);
            glTexCoord2i(1, 1); glVertex2f( 1.0f,  1.0f);
            glTexCoord2i(0, 1); glVertex2f(-1.0f,  1.0f);
        glEnd();
        glDisable(GL_SCISSOR_TEST);

        glutSwapBuffers();

    The result is a yellow square. drawModels() draws yellow wireframe cubes when not using the FBO, so the problem is not that. I've read a previous similar post that talks about using glTexParameterf after generating and binding textures, but I've done this. Thanks...
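
    One detail worth checking, offered here as an assumption rather than a confirmed fix: in the fixed-function pipeline (no fragment shader bound), 2D texturing must be explicitly enabled before the quad is drawn, or the quad is filled with the current color, which would be consistent with a solid yellow square. A minimal C sketch around the quad-drawing code above:

        glBindTexture(GL_TEXTURE_2D, renderTextureID[0]);
        glEnable(GL_TEXTURE_2D);      /* assumed missing in the code above */

        glBegin(GL_QUADS);
            /* ... same textured quad as above ... */
        glEnd();

        glDisable(GL_TEXTURE_2D);     /* restore state for the wireframe pass */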

  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt. When I send events with the E ("End") bit left at 0, I get the following behaviour: if, for example, the keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like Wireshark; however, only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) fail to sound.

    In section 3.9, figure 2 of the above-linked document, they give a 911 example. There, all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound. I have thought of a couple of possible causes but do not know if they are the reason:

    1) Figure 2 shows payload types of 96 and 97 being sent. I have not done this, nor do I know exactly how to. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".

    2) In section 3.5, under "E:": "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission".

    Does anyone have an idea of how to actually do this? I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample Wireshark capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)

        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
        RFC 2833 RTP Event
            Event ID: DTMF Pound # (11)
            1... .... = End of Event: True
            .0.. .... = Reserved: False
            ..00 0000 = Volume: 0
            Event Duration: 2048
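
    For reference, a sketch of the telephone-event payload from RFC 2833 section 3.5 is below (field layout only, not tied to any particular stack). One reading of the repeated-digit symptom, offered as an assumption: each digit press is a distinct event, and the start of a new event is signalled by a fresh RTP timestamp plus the RTP marker bit on its first packet, so back-to-back identical digits that reuse the previous timestamp can be collapsed by the receiver into one continuous event.

        #include <stdint.h>

        /* RFC 2833, section 3.5: payload of a telephone-event packet
         * (fields shown most-significant-first, network byte order). */
        struct dtmf_event_payload {
            uint8_t  event;       /* digit: 0-9, '*' = 10, '#' = 11, A-D = 12-15 */
            uint8_t  e_r_volume;  /* bit 7: E (end), bit 6: R (reserved), bits 5-0: volume */
            uint16_t duration;    /* event duration in RTP timestamp units */
        };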

  • Index out of bounds error

    - by sprasad12
    Hello, I am working on a program where I am recreating saved widgets back onto the boundary panel. When I am creating them, I am also trying to put the values into ArrayLists, so that if I want to update and save the opened project, I can do so by getting the values from the ArrayLists. Here is what the code looks like:

        for (int i = 0; i < result.length; i++) {
            if (ename.contains(result[i].getParticipateEntityName())) {
                ername.add(ename.indexOf(result[i].getParticipateEntityName()), result[i].getParticipateRelatioshipName());
                etotalpartial.add(ename.indexOf(result[i].getParticipateEntityName()), result[i].getTotalPartial());
            } else if (wename.contains(result[i].getParticipateEntityName())) {
                wrname.add(wename.indexOf(result[i].getParticipateEntityName()), result[i].getParticipateRelatioshipName());
            }
        }

    Here ename, ername, etotalpartial, wename and wrname are all ArrayLists. This piece of code is included in an asynchronous class method. When I run the code, I get an error at "ername.add(ename...". Here is the error stack:

        java.lang.IndexOutOfBoundsException: Index: 1, Size: 0
            at java.util.ArrayList.add(ArrayList.java:367)
            at com.e.r.d.client.ERD1$16.onSuccess(ERD1.java:898)
            at com.e.r.d.client.ERD1$16.onSuccess(ERD1.java:1)
            at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:216)
            at com.google.gwt.http.client.Request.fireOnResponseReceived(Request.java:287)
            at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:393)
            at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.MethodAdaptor.invoke(MethodAdaptor.java:103)
            at com.google.gwt.dev.shell.MethodDispatch.invoke(MethodDispatch.java:71)
            at com.google.gwt.dev.shell.OophmSessionHandler.invoke(OophmSessionHandler.java:157)
            at com.google.gwt.dev.shell.BrowserChannel.reactToMessagesWhileWaitingForReturn(BrowserChannel.java:1713)
            at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:165)
            at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264)
            at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
            at com.google.gwt.core.client.impl.Impl.apply(Impl.java)
            at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:188)
            at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.MethodAdaptor.invoke(MethodAdaptor.java:103)
            at com.google.gwt.dev.shell.MethodDispatch.invoke(MethodDispatch.java:71)
            at com.google.gwt.dev.shell.OophmSessionHandler.invoke(OophmSessionHandler.java:157)
            at com.google.gwt.dev.shell.BrowserChannel.reactToMessages(BrowserChannel.java:1668)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:401)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:619)

    I am not sure what I am doing wrong. Any input will be of great help. Thank you.

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get an HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (an alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried, and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor:

        Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight,
            bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

    As I understand it (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap; it simply points the managed bitmap at the pixel data of the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the Form BackgroundImage property. And it works: the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap.

    The question: can you tell me if this reasoning and code seem correct? I hope I will not get some unexpected behaviors or errors, and I hope I'm freeing all the memory and objects correctly.

        private void Example()
        {
            IntPtr nativeHBitmap = IntPtr.Zero;

            /* Get the native HBitmap object from a Windows function here */

            // Create the BITMAP structure and get info from our nativeHBitmap
            NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
            NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);

            // Create the managed bitmap using the pointer to the pixel data of the native HBitmap
            Bitmap managedBitmap = new Bitmap(
                bitmapStruct.bmWidth, bitmapStruct.bmHeight,
                bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

            // Show the bitmap
            this.BackgroundImage = managedBitmap;

            /* Run the program, use the image */
            MessageBox.Show("running...");

            // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap
            this.BackgroundImage = null;
            managedBitmap.Dispose();
            NativeMethods.DeleteObject(nativeHBitmap);
        }

        internal static class NativeMethods
        {
            [StructLayout(LayoutKind.Sequential)]
            public struct BITMAP
            {
                public int bmType;
                public int bmWidth;
                public int bmHeight;
                public int bmWidthBytes;
                public ushort bmPlanes;
                public ushort bmBitsPixel;
                public IntPtr bmBits;
            }

            [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")]
            public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject);

            [DllImport("gdi32.dll")]
            internal static extern bool DeleteObject(IntPtr hObject);
        }

  • Datagrid using usercontrol

    - by klawusel
    Hello, I am fighting with this problem: I have a UserControl that contains a textbox and a button (the button calls some functions to set the textbox's text). Here is the XAML:

        <UserControl x:Class="UcSelect"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     x:Name="Control1Name">
          <Grid x:Name="grid1" MaxHeight="25">
            <Grid.ColumnDefinitions>
              <ColumnDefinition />
              <ColumnDefinition Width="25"/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
              <RowDefinition Height="25"/>
            </Grid.RowDefinitions>
            <TextBox x:Name="txSelect" Text="{Binding UcText, Mode=TwoWay}" />
            <Button x:Name="pbSelect" Background="Red" Grid.Column="1" Click="pbSelect_Click">...</Button>
          </Grid>
        </UserControl>

    And here is the code-behind:

        Partial Public Class UcSelect

            Private Shared Sub textChangedCallBack(ByVal [property] As DependencyObject, ByVal args As DependencyPropertyChangedEventArgs)
                Dim UcSelectBox As UcSelect = DirectCast([property], UcSelect)
            End Sub

            Public Property UcText() As String
                Get
                    Return GetValue(UcTextProperty)
                End Get
                Set(ByVal value As String)
                    SetValue(UcTextProperty, value)
                End Set
            End Property

            Public Shared ReadOnly UcTextProperty As DependencyProperty = _
                DependencyProperty.Register("UcText", _
                    GetType(String), GetType(UcSelect), _
                    New FrameworkPropertyMetadata(String.Empty, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, New PropertyChangedCallback(AddressOf textChangedCallBack)))

            Public Sub New()
                InitializeComponent()
                grid1.DataContext = Me
            End Sub

            Private Sub pbSelect_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
                'just demo
                UcText = UcText + "!"
            End Sub

        End Class

    The UserControl works fine when used as a single control in this way:

        <local:UcSelect Grid.Row="1" x:Name="ucSingle1" UcText="{Binding FirstName, Mode=TwoWay}"/>

    Now I wanted to use the control in a custom DataGrid column. As I'd like to have binding support, I chose to derive from DataGridTextColumn instead of using a DataGridTemplateColumn. Here is the derived column class:

        Public Class DerivedColumn
            Inherits DataGridTextColumn

            Protected Overloads Overrides Function GenerateElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement
                Dim oElement = MyBase.GenerateElement(oCell, oDataItem)
                Return oElement
            End Function

            Protected Overloads Overrides Function GenerateEditingElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement
                Dim oUc As New UcSelect
                Dim oBinding As Binding = CType(Me.Binding, Binding)
                oUc.SetBinding(UcSelect.UcTextProperty, oBinding)
                Return oUc
            End Function

        End Class

    The column is used in XAML in the following way:

        <local:DerivedColumn Header="Usercontrol" Binding="{Binding FirstName, Mode=TwoWay}"></local:DerivedColumn>

    If I start my program, all seems to be fine, but changes I make in the custom column are not reflected in the object (property "FirstName"); the changes are simply rolled back when leaving the cell. I think there must be something wrong with my GenerateEditingElement code, but I have no idea what. Any help would really be appreciated. Regards, Klaus

  • Is it possible to change a HANDLE that has been opened for synchronous I/O to be opened for asynchronous I/O?

    - by Martin Dobšík
    Dear all, most of my daily programming work in Windows is nowadays around I/O operations of all kinds (pipes, consoles, files, sockets, ...). I am well aware of the different methods of reading and writing from/to different kinds of handles (synchronous, asynchronous waiting for completion on events, waiting on file HANDLEs, I/O completion ports, and alertable I/O). We use many of them. For some of our applications it would be very useful to have only one way to treat all handles. I mean, the program may not know what kind of handle it has received, and we would like to use, let's say, I/O completion ports for all.

    So first I would ask: let's assume I have a handle:

        HANDLE h;

    which my process has received for I/O from somewhere. Is there any easy and reliable way to find out which flags it was created with? The main flag in question is FILE_FLAG_OVERLAPPED. The only way known to me so far is to try to register such a handle with an I/O completion port (using CreateIoCompletionPort()). If that succeeds, the handle was created with FILE_FLAG_OVERLAPPED. But then only the I/O completion port may be used, as the handle cannot be unregistered from it without closing HANDLE h itself.

    Provided there is an easy way to determine the presence of FILE_FLAG_OVERLAPPED, here comes my second question: is there any way to add such a flag to an already existing handle? That would make a handle originally opened for synchronous operations open for asynchronous ones. Would there be a way to do the opposite (remove the FILE_FLAG_OVERLAPPED to create a synchronous handle from an asynchronous one)?

    I have not found any direct way after reading through MSDN and googling a lot. Would there be at least some trick that could do the same? Like re-creating the handle in the same way using the CreateFile() function, or something similar? Something even partly documented, or not documented at all? The main place where I would need this is to determine the way (or change the way) the process reads/writes from handles sent to it by third-party applications. We cannot control how third-party products create their handles.

    Dear Windows gurus: help please! With regards, Martin

  • Silverlight DataGrid's sort by column doesn't update programmatically changed cells

    - by David Seiler
    For my first Silverlight app, I've written a program that sends user-supplied search strings to the Flickr REST API and displays the results in a DataGrid. Said grid is defined like so:

        <data:DataGrid x:Name="PhotoGrid" AutoGenerateColumns="False">
          <data:DataGrid.Columns>
            <data:DataGridTextColumn Header="Photo Title" Binding="{Binding Title}"
                                     CanUserSort="True" CanUserReorder="True"
                                     CanUserResize="True" IsReadOnly="True" />
            <data:DataGridTemplateColumn Header="Photo" SortMemberPath="ImageUrl">
              <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                  <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
                    <TextBlock Text="Click here to show image" MouseLeftButtonUp="ShowPhoto"/>
                    <Image Visibility="Collapsed" MouseLeftButtonUp="HidePhoto"/>
                  </StackPanel>
                </DataTemplate>
              </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
          </data:DataGrid.Columns>
        </data:DataGrid>

    It's a simple two-column table. The first column contains the title of the photo, while the second contains the text 'Click here to show image'. Clicks there call ShowPhoto(), which updates the Image element's Source property with a BitmapImage derived from the URI of the Flickr photo, and sets the image's visibility to Visible. Clicking on the image thus revealed hides it again. All of this was easy to implement and works perfectly.

    But whenever I click on one of the column headers to sort by that column, the cells that I've updated in this way do not change. The rest of the DataGrid is re-sorted and updated appropriately, but those cells remain behind, detached from the rest of their row. This is very strange, and not at all what I want. What am I doing wrong? Should I be refreshing the DataGrid somehow in response to the sort event, and if so, how? Or, if I'm not supposed to be messing with the contents of the grid directly, what's the right way to get the behavior I want?

  • "EXC_BAD_ACCESS: Unable to restore previously selected frame" Error, Array size?

    - by Job
    Hi there, I have an algorithm for creating the sieve of Eratosthenes and pulling primes from it. It lets you enter a max value for the sieve, and the algorithm gives you the primes below that value and stores them in a C-style array.

    Problem: everything works fine with values up to 500,000; however, when I enter a larger value, it gives me the following error message in Xcode while running:

        Program received signal: "EXC_BAD_ACCESS".
        warning: Unable to restore previously selected frame.
        Data Formatters temporarily unavailable, will re-try after a 'continue'. (Not safe to call dlopen at this time.)

    My first idea was that I didn't use large enough variables, but as I am using 'unsigned long long int', this should not be the problem. Also, the debugger points me to a place in my code where a slot in the array gets assigned a value. Therefore I wonder: is there a maximum limit to an array? If yes, should I use NSArray instead? If no, what is causing this error, based on this information?

    EDIT: This is what the code looks like (it's not complete, for it fails at the last line posted). I'm using garbage collection.

        /*--------------------------SET UP--------------------------*/
        unsigned long long int upperLimit = 550000;
        unsigned long long int sieve[upperLimit];
        unsigned long long int primes[upperLimit];
        unsigned long long int indexCEX;
        unsigned long long int primesCounter = 0;

        // Fill sieve with 2 to upperLimit
        for (unsigned long long int indexA = 0; indexA < upperLimit-1; ++indexA) {
            sieve[indexA] = indexA + 2;
        }

        unsigned long long int prime = 2;

        /*-------------------------CHECK & FIND----------------------------*/
        while (!((prime * prime) > upperLimit)) {
            // check off all multiples of prime
            for (unsigned long long int indexB = prime-2; indexB < upperLimit-1; ++indexB) {
                // Multiple of prime = 0
                if (sieve[indexB] != 0) {
                    if (sieve[indexB] % prime == 0) {
                        sieve[indexB] = 0;
                    }
                }
            }

            /*---------------- Search for next prime ---------------*/
            // index of current prime + 1
            unsigned long long int indexC = prime - 1;
            while (sieve[indexC] == 0) {
                ++indexC;
            }
            prime = sieve[indexC];

            // Store prime in primes[]
            primes[primesCounter] = prime;   // This is where the code fails if upperLimit > 500000
            ++primesCounter;

            indexCEX = indexC + 1;
        }

    As you may or may not see, I am -very much- a beginner. Any other suggestions are welcome, of course :)
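
    A plausible explanation, with the arithmetic spelled out: the two local (stack-allocated) arrays hold 2 x 550,000 values of 8 bytes each, about 8.8 MB, which is past the typical 8 MB default stack size; at 500,000 the arrays total exactly 8.0 MB, which would explain why that value sits right at the edge. A minimal sketch moving the arrays to the heap, keeping the names used above:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            unsigned long long int upperLimit = 550000ULL;

            /* Heap allocation is not bounded by the stack size. */
            unsigned long long int *sieve  = malloc(upperLimit * sizeof *sieve);
            unsigned long long int *primes = malloc(upperLimit * sizeof *primes);
            if (sieve == NULL || primes == NULL) {
                fprintf(stderr, "out of memory\n");
                return 1;
            }

            /* ... same sieve logic as above, indexing sieve[] and primes[] ... */

            free(sieve);
            free(primes);
            return 0;
        }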

  • Prototype JS swallows errors in dom:loaded, and ajax callbacks?

    - by WishCow
    I can't figure out why Prototype suppresses the error messages in the dom:loaded event and in AJAX handlers. Given the following piece of HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>Conforming XHTML 1.1 Template</title>
            <script type="text/javascript" src="prototype.js"></script>
            <script type="text/javascript">
              document.observe('dom:loaded', function() {
                console.log('domready');
                console.log(idontexist);
              });
            </script>
          </head>
          <body>
          </body>
        </html>

    The domready event fires, and I see the log in the console, but there is no indication of any errors whatsoever. If you move the console.log(idontexist); line out of the handler, you get the "idontexist is not defined" error in the console. I find it a little weird that in other event handlers, like 'click', you do get the error message; it seems that it's only dom:loaded that has this problem. The same goes for AJAX handlers:

        new Ajax.Request('/', {
            method: 'get',
            onComplete: function(r) {
                console.log('xhr complete');
                alert(youwontseeme);
            }
        });

    You won't see any errors. This is with prototype.js 1.6.1, and I can't find any indication of this behavior in the docs, nor a way to enable error reporting in these handlers. I have tried stepping through the code with Firebug's debugger, and it seems to jump to a function on line 53, named K, when it encounters the missing variable in the dom:loaded handler:

        K: function(x) { return x }

    But how? Why? When? I can't see any try/catch block there; how does the program flow end up there? I know that I can make the errors visible by wrapping my dom:ready handler(s) in try/catch blocks, but that's not a very comfortable option. The same goes for registering a global onException handler for the AJAX calls. Why does it even suppress the errors? Has anyone encountered this before?

  • Why is Perl's Math::Complex taking up so much time when I try acos(1)?

    - by synapz
    While trying to profile my Perl program, I find that Math::Complex is taking up a lot of time, with what looks like some kind of warning. Also, my code shouldn't have any complex numbers being generated or used, so I am not sure what it is doing in Math::Complex anyway. Here's the FastProf output for the most expensive lines:

        /usr/lib/perl5/5.8.8/Math/Complex.pm:182  1.55480 276232: _cannot_make("real part", $re) unless $re =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:310  1.01132 453641: sub cartesian {$_[0]->{c_dirty} ?
        /usr/lib/perl5/5.8.8/Math/Complex.pm:315  0.97497 562188: sub set_cartesian { $_[0]->{p_dirty}++; $_[0]->{c_dirty} = 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:189  0.86302 276232: return $self;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1332 0.85628 293660: $self->{display_format} = { %display_format };
        /usr/lib/perl5/5.8.8/Math/Complex.pm:185  0.81529 276232: _cannot_make("imaginary part", $im) unless $im =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1316 0.78749 293660: my %display_format = %DISPLAY_FORMAT;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1335 0.69534 293660: %{$self->{display_format}} :
        /usr/lib/perl5/5.8.8/Math/Complex.pm:186  0.66697 276232: $self->set_cartesian([$re, $im ]);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:170  0.62790 276232: my $self = bless {}, shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:172  0.56733 276232: if (@_ == 0) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:316  0.53179 281094: $_[0]->{'cartesian'} = $_[1] }
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1324 0.48768 293660: if (@_ == 1) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1319 0.44835 293660: if (exists $self->{display_format}) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1318 0.40355 293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:187  0.39950 276232: $self->display_format('cartesian');
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1315 0.39312 293660: my $self = shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1331 0.38087 293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:184  0.35171 276232: $im ||= 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:181  0.34145 276232: if (defined $re) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:171  0.33492 276232: my ($re, $im);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:390  0.20658 128280: my ($z1, $z2, $regular) = @_;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:391  0.20631 128280: if ($z1->{p_dirty} == 0 and ref $z2 and $z2->{p_dirty} == 0) {

    Thanks for any help!

  • WPF / Silverlight Binding when setting DataTemplate programmatically

    - by Daniel
    I have my little designer tool (my program). On the left side I have a TreeView, and on the right side I have an Accordion. When I select a node, I want to dynamically build accordion items based on the properties of the DataContext of the selected node. Selecting nodes works fine, and when I use this sample code for testing, it also works.

    XAML code:

        <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous">
          <layoutToolkit:AccordionItem Header="Controller Info">
            <StackPanel Orientation="Horizontal" DataContext="{Binding}">
              <TextBlock Text="Content:" />
              <TextBlock Text="{Binding Path=Name}" />
            </StackPanel>
          </layoutToolkit:AccordionItem>
        </layoutToolkit:Accordion>

    C# code:

        private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            if (e.NewValue != e.OldValue)
            {
                if (e.NewValue is SceneNode)
                {
                    accPanel.DataContext = e.NewValue; // e.NewValue is a class that contains a Name property
                }
            }
        }

    But the problem occurs when I try to achieve this using a DataTemplate and a dynamically built AccordionItem; the binding does not work:

        <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous" />

    with the DataTemplate in my ResourceDictionary:

        <DataTemplate x:Key="dtSceneNodeContent">
          <StackPanel Orientation="Horizontal" DataContext="{Binding}">
            <TextBlock Text="Content:" />
            <TextBlock Text="{Binding Path=Name}" />
          </StackPanel>
        </DataTemplate>

    and the C# code:

        private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            if (e.NewValue != e.OldValue)
            {
                ResourceDictionary rd = new ResourceDictionary();
                rd.Source = new Uri("/SilverGL.GUI;component/SilverGLDesignerResourceDictionary.xaml", UriKind.RelativeOrAbsolute);

                if (e.NewValue is SceneNode)
                {
                    accPanel.DataContext = e.NewValue;

                    AccordionItem accController = new AccordionItem();
                    accController.Header = "Controller Info";
                    accController.ContentTemplate = rd["dtSceneNodeContent"] as DataTemplate;
                    accPanel.Items.Add(accController);
                }
                else
                {
                    // Other type of node
                }
            }
        }

    I really need help with this issue. Thanks for any support.

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites I found online directly and clearly answered some of the questions I have. So here are the questions I have about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10, I think), is hardware acceleration enabled by default? I read somewhere that

        someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping()

    is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance used for drawing. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment too? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of a VolatileImage, in the case of both having acceleration, is that a VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built in as well, just hidden from the user, in case that memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading covering general concepts about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get its "hardware accelerated" images for fast drawing, which ties into question 2 about whether that is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++, it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever stated anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.

  • Help with OpenSSL request using Python

    - by Ldn
    Hi, I'm creating a program that has to make a request and then obtain some info. To do that, the website has provided some API that I will use. There is a how-to about these API, but every example is written in PHP. My app is written in Python, so I need to convert the code. Here is the how-to:

    The request string is sealed with OpenSSL. The steps for sealing are as follows:

    • A random 128-bit key is created.
    • The random key is used to RSA-RC4 symmetrically encrypt the request string.
    • The random key is encrypted with the public key using OpenSSL RSA asymmetrical encryption.
    • The encrypted request and encrypted key are each base64 encoded and placed in the appropriate fields.

    In PHP, a full request to our API can be accomplished like so:

        <?php
        // initial request.
        $request = array('object' => 'Link',
                         'action' => 'get',
                         'args'   => array('app_id' => 303612602));

        // encode the request in JSON
        $request = json_encode($request);

        // when you receive your profile, you will be given a public key to seal your request in.
        $key_pem = "-----BEGIN PUBLIC KEY-----
        MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALdu5C6d2sA1Lu71NNGBEbLD6DjwhFQO
        VLdFAJf2rOH63rG/L78lrQjwMLZOeHEHqjaiUwCr8NVTcVrebu6ylIECAwEAAQ==
        -----END PUBLIC KEY-----";

        // load the public key
        $pkey = openssl_pkey_get_public($key_pem);

        // seal! $enc_request and $enc_keys are passed by reference.
        openssl_seal($request, $enc_request, $enc_keys, array($pkey));

        // then wrap the request
        $wrapper = array(
            'profile' => 'ProfileName',
            'format'  => 'RSA_RC4_Sealed',
            'enc_key' => base64_encode($enc_keys[0]),
            'request' => base64_encode($enc_request)
        );

        // json encode the wrapper. urlencode it as well.
        $wrapper = urlencode(json_encode($wrapper));

        // we can send the request wrapper via the cURL extension
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'http://api.site.com/');
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, "request=$wrapper");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $data = curl_exec($ch);
        curl_close($ch);
        ?>

    Of all of that, I was able to convert "$request", and I've also done the JSON encoding. This is my code:

        import urllib
        import urllib2
        import json

        url = 'http://api.site.com/'
        array = {'app_id': "303612602"}
        values = {"object": "Link", "action": "get", "args": array}

        data = urllib.urlencode(values)
        json_data = json.dumps(data)

    What stops me is the sealing with OpenSSL and the public key (which I obviously have). Using PHP's OpenSSL it's so easy, but in Python I don't really know how to use it. Please, help me!

  • What Causes Boost Asio to Crash Like This?

    - by Scott Lawson
    My program appears to run just fine most of the time, but occasionally I get a segmentation fault.

    boost version = 1.41.0, running on RHEL 4, compiled with GCC 3.4.6.

    Backtrace:

        #0  0x08138546 in boost::asio::detail::posix_fd_set_adapter::is_set (this=0xb74ed020, descriptor=-1)
            at /home/scottl/boost_1_41_0/boost/asio/detail/posix_fd_set_adapter.hpp:57
                __result = -1 'ÿ'
        #1  0x0813e1b0 in boost::asio::detail::reactor_op_queue::perform_operations_for_descriptors
            (this=0x97f3b6c, descriptors=@0xb74ed020, result=@0xb74ecec8)
            at /home/scottl/boost_1_41_0/boost/asio/detail/reactor_op_queue.hpp:204
                op_iter = {_M_node = 0xb4169aa0}
                i = {_M_node = 0x97f3b74}
        #2  0x081382ca in boost::asio::detail::select_reactor::run (this=0x97f3b08, block=true)
            at /home/scottl/boost_1_41_0/boost/asio/detail/select_reactor.hpp:388
                read_fds = {fd_set_ = {fds_bits = {16, 0 }}, max_descriptor_ = 65}
                write_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1}
                retval = 1
                lock = { = {}, mutex_ = @0x97f3b1c, locked_ = true}
                except_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1}
                max_fd = 65
                tv_buf = {tv_sec = 0, tv_usec = 710000}
                tv = (timeval *) 0xb74ecf88
                ec = {m_val = 0, m_cat = 0x81f2c24}
                sb = { = {}, blocked_ = true, old_mask_ = {__val = {0, 0, 134590223, 3075395548, 3075395548,
                    3075395464, 134729792, 3075395360, 135890240, 3075395368, 134593920, 3075395544, 135890240,
                    3075395384, 134599542, 3020998404, 135890240, 3075395400, 134614095, 3075395544, 4,
                    3075395416, 134548135, 3021172996, 4294967295, 3075395432, 134692921, 3075395504, 0,
                    3075395448, 134548107, 3021172992}}}
        #3  0x0812eb45 in boost::asio::detail::task_io_service::do_one (this=0x97f3a70, lock=@0xb74ed230,
            this_idle_thread=0xb74ed240, ec=@0xb74ed2c0)
            at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:260
                more_handlers = false
                c = {lock_ = @0xb74ed230, task_io_service_ = @0x97f3a70}
                h = (boost::asio::detail::handler_queue::handler *) 0x97f3aa0
                polling = false
                task_has_run = true
        #4  0x0812765f in boost::asio::detail::task_io_service::run (this=0x97f3a70, ec=@0xb74ed2c0)
            at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:103
                ctx = { = {}, owner_ = 0x97f3a70, next_ = 0x0}
                this_idle_thread = {wakeup_event = { = {}, cond_ = {__c_lock = {__status = 0, __spinlock = 22446},
                    __c_waiting = 0x2bd7,
                    __padding = "\000\000\000\000×+\000\000\000\000\000\000×+\000\000\000\000\000\000\204:\177\t\000\000\000",
                    __align = 0}, signalled_ = true}, next = 0x0}
                lock = { = {}, mutex_ = @0x97f3a84, locked_ = false}
                n = 11420
        #5  0x08125e99 in boost::asio::io_service::run (this=0x97ebbcc)
            at /home/scottl/boost_1_41_0/boost/asio/impl/io_service.ipp:58
                ec = {m_val = 0, m_cat = 0x81f2c24}
                s = 8
        #6  0x08154424 in boost::_mfi::mf0::operator() (this=0x9800870, p=0x97ebbcc)
            at /home/scottl/boost_1_41_0/boost/bind/mem_fn_template.hpp:49
                No locals.
        #7  0x08154331 in boost::_bi::list1::operator(), boost::_bi::list0 (this=0x9800878, f=@0x9800870, a=@0xb74ed337)
            at /home/scottl/boost_1_41_0/boost/bind/bind.hpp:236
                No locals.
        #8  0x081541e5 in boost::_bi::bind_t, boost::_bi::list1::operator() (this=0x9800870)
            at /home/scottl/boost_1_41_0/boost/bind/bind_template.hpp:20
                a = {}
        #9  0x08154075 in boost::detail::thread_data, boost::_bi::list1::run (this=0x98007a0)
            at /home/scottl/boost_1_41_0/boost/thread/detail/thread.hpp:56
                No locals.
        #10 0x0816fefd in thread_proxy ()
            at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/locale_facets.tcc:2443
                __ioinit = {static _S_refcount = , static _S_synced_with_stdio = }
                typeinfo for common::RuntimeException = {}
                typeinfo name for common::RuntimeException = "N6common16RuntimeExceptionE"
        #11 0x00af23cc in start_thread () from /lib/tls/libpthread.so.0
                No symbol table info available.
        #12 0x00a5c96e in __init_misc () from /lib/tls/libc.so.6
                No symbol table info available.
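
    One observation grounded in the trace, offered as context rather than a diagnosis: frame #0 shows is_set being called with descriptor=-1, which ends up as the equivalent of FD_ISSET(-1, ...), and passing a negative descriptor to FD_ISSET is undefined behavior in C. Any path that lets a socket be closed or invalidated (descriptor set to -1) while an operation for it is still queued in the reactor could produce exactly this crash. A minimal C illustration of the failing primitive:

        #include <sys/select.h>
        #include <stdio.h>

        int main(void)
        {
            fd_set readfds;
            FD_ZERO(&readfds);

            int fd = -1;  /* a closed/invalidated descriptor */

            /* FD_ISSET with fd < 0 indexes outside the fd_set's bit array:
             * undefined behavior, typically a segfault, matching the
             * is_set(descriptor=-1) frame in the backtrace above. The guard
             * below is what makes the call safe. */
            if (fd >= 0 && FD_ISSET(fd, &readfds))
                puts("ready");
            return 0;
        }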

  • emulator crashes

    - by Dave
    I am setting up an Android environment for the first time in Eclipse. I have many years of Eclipse experience, but I am new to Android. This is being done on an Apple Mac Mini running Mac OS X 10.6.3. I am using the latest Eclipse Classic, version 3.5.2. I am trying to get the tiny hello world program running. When I run it, I get the following in the console window of Eclipse:

        [2010-06-12 13:48:08 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'Android2.2AVD'
        [2010-06-12 13:48:08 - HelloAndroid] Launching a new emulator with Virtual Device 'Android2.2AVD'
        [2010-06-12 13:48:11 - HelloAndroid] New emulator found: emulator-5554
        [2010-06-12 13:48:11 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-12 13:48:12 - Emulator] 2010-06-12 13:48:12.783 emulator[50495:903] Warning once: This application, or a library it uses, is using NSQuickDrawView, which has been deprecated. Apps should cease use of QuickDraw and move to Quartz.
        [2010-06-12 13:48:19 - HelloAndroid] emulator-5554 disconnected! Cancelling 'com.example.helloandroid.HelloAndroid activity launch'!

    The emulator crashes with the following info. I have followed all the instructions for running the hello world program. Anyone have any ideas?

        Process:         emulator [50398]
        Path:            /Users/jeremy/android-sdk-mac_86/tools/emulator
        Identifier:      emulator
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  eclipse [50388]

        Date/Time:       2010-06-12 13:28:38.595 -0400
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Interval Since Last Report:        363037 sec
        Crashes Since Last Report:         9
        Per-App Crashes Since Last Report: 7

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000000007fd000
        Crashed Thread:  4

        Thread 0: Dispatch queue: com.apple.main-thread
        0   emulator            0x000eed4e helper_set_cp15 + 30

        Thread 1:
        0   libSystem.B.dylib   0x9020bbd2 __workq_kernreturn + 10
        1   libSystem.B.dylib   0x9020c168 _pthread_wqthread + 941
        2   libSystem.B.dylib   0x9020bd86 start_wqthread + 30

        Thread 2: Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib   0x9020cb42 kevent + 10
        1   libSystem.B.dylib   0x9020d25c _dispatch_mgr_invoke + 215
        2   libSystem.B.dylib   0x9020c719 _dispatch_queue_invoke + 163
        3   libSystem.B.dylib   0x9020c4be _dispatch_worker_thread2 + 240
        4   libSystem.B.dylib   0x9020bf41 _pthread_wqthread + 390
        5   libSystem.B.dylib   0x9020bd86 start_wqthread + 30

        Thread 3:
        0   libSystem.B.dylib   0x901e635a semaphore_timedwait_signal_trap + 10
        1   libSystem.B.dylib   0x90213ea1 _pthread_cond_wait + 1066
        2   libSystem.B.dylib   0x90242a28 pthread_cond_timedwait_relative_np + 47
        3   com.apple.audio.CoreAudio 0x9056f965 CAGuard::WaitFor(unsigned long long) + 219
        4   com.apple.audio.CoreAudio 0x90572997 CAGuard::WaitUntil(unsigned long long) + 289
        5   com.apple.audio.CoreAudio 0x90570294 HP_IOThread::WorkLoop() + 1892
        6   com.apple.audio.CoreAudio 0x9056fb2b HP_IOThread::ThreadEntry(HP_IOThread*) + 17
        7   com.apple.audio.CoreAudio 0x9056fa42 CAPThread::Entry(CAPThread*) + 140
        8   libSystem.B.dylib   0x90213a19 _pthread_start + 345
        9   libSystem.B.dylib   0x9021389e thread_start + 34

        Thread 4 Crashed:
        0   emulator            0x00040380 audioInDeviceIOProc + 96

        Thread 4 crashed with X86 Thread State (32-bit):
          eax: 0x00000000  ebx: 0x007fd000  ecx: 0x000001fe  edx: 0x0198f3f0
          edi: 0x00000200  esi: 0x01119850  ebp: 0x01119800  esp: 0xb020fad0
          ss:  0x0000001f  efl: 0x00010212  eip: 0x00040380  cs:  0x00000017
          ds:  0x0000001f  es:  0x0000001f  fs:  0x0000001f  gs:  0x00000037
          cr2: 0x007fd000

  • MBR Booting from DOS

    - by eflukx
    For a project, I would like to invoke the MBR on the first hard disk directly from DOS. I've written a small assembler program that loads the MBR into memory at 0:7c00h and does a far jump to it. I've put my utility on a bootable floppy.

    The disk (HD0, 0x80) I'm trying to boot has a TrueCrypt boot loader on it. It shows the TrueCrypt screen, but after typing in the password it crashes the system. When I run my little utility (w00t.com) on a normal WinXP machine, it seems to crash immediately. Apparently I'm forgetting some crucial stuff the BIOS normally does; my guess is it's something trivial. Can someone with better bare-metal DOS and BIOS experience help me out? Here's my code:

        .MODEL tiny
        .386

        _TEXT SEGMENT USE16

        INCLUDE BootDefs.i

        ORG 100h

        start:
            ; http://vxheavens.com/lib/vbw05.html
            ; Before DOS has booted, the BIOS stores the amount of usable lower
            ; memory in a word located at 0:413h in memory. We are going to erase
            ; this value because we have booted DOS before loading the bootsector,
            ; and DOS is fat (and ugly).

            ; fake free memory
            ;push ds
            ;push 0
            ;pop ds
            ;mov ax, TC_BOOT_LOADER_SEGMENT / 1024 * 16 + TC_BOOT_MEMORY_REQUIRED
            ;mov word ptr ds:[413h], ax ;ax = memory in K
            ;pop ds

            ;lea si, memory_patched_msg
            ;call print

            ;mov ax, cs
            mov ax, 0
            mov es, ax

            ; read first sector to es:7c00h (== cs:7c00)
            mov dl, 80h
            mov cl, 1
            mov al, 1
            mov bx, 7c00h ;load sector to es:bx
            call read_sectors

            lea si, mbr_loaded_msg
            call print
            lea si, jmp_to_mbr_msg
            call print

            ;Set BIOS default values in environment
            cli
            mov dl, 80h ;(drive C)
            xor ax, ax
            mov ds, ax
            mov es, ax
            mov ss, ax
            mov sp, 0ffffh
            sti

            push es
            push 7c00h
            retf ;Jump to MBR code at 0:7c00h

        ; Print string
        print:
            xor bx, bx
            mov ah, 0eh
            cld
        @@:
            lodsb
            test al, al
            jz print_end
            int 10h
            jmp @B
        print_end:
            ret

        ; Read sectors of the first cylinder
        read_sectors:
            mov ch, 0 ; Cylinder
            mov dh, 0 ; Head
            ; DL = drive number passed from BIOS
            mov ah, 2
            int 13h
            jnc read_ok
            lea si, disk_error_msg
            call print
        read_ok:
            ret

        memory_patched_msg db 'Memory patched', 13, 10, 7, 0
        mbr_loaded_msg     db 'MBR loaded', 13, 10, 7, 0
        jmp_to_mbr_msg     db 'Jumping to MBR code', 13, 10, 7, 0
        disk_error_msg     db 'Disk error', 13, 10, 7, 0

        _TEXT ENDS
        END start

  • Floating point vs integer calculations on modern hardware

    - by maxpenguin
    I am doing some performance-critical work in C++, and we are currently using integer calculations for problems that are inherently floating point, because "it's faster". This causes a whole lot of annoying problems and adds a lot of annoying code.

    Now, I remember reading about how floating-point calculations were slow, approximately circa the 386 days, where I believe (IIRC) there was an optional co-processor. But surely nowadays, with exponentially more complex and powerful CPUs, it makes no difference in "speed" whether you do floating-point or integer calculation? Especially since the actual calculation time is tiny compared to something like causing a pipeline stall or fetching something from main memory?

    I know the correct answer is to benchmark on the target hardware. What would be a good way to test this? I wrote two tiny C++ programs and compared their run time with "time" on Linux, but the actual run time is too variable (it doesn't help that I am running on a virtual server). Short of spending my entire day running hundreds of benchmarks, making graphs, etc., is there something I can do to get a reasonable test of the relative speed? Any ideas or thoughts? Am I completely wrong?

    The programs I used are as follows; they are not identical by any means:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            int accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += rand( ) % 365;
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Program 2:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            float accum = 0;

            srand( time( NULL ) );

            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += (float)( rand( ) % 365 );
            }
            std::cout << accum << std::endl;

            return 0;
        }

    Thanks in advance!
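
    A minimal sketch of one way to reduce the noise, with the caveats that clock() has coarse resolution and that rand() likely dominates both loops anyway: timing inside the program excludes the process-startup overhead that "time" includes, and running both loops in one process keeps conditions comparable. The loop count and the 365 modulus are taken from the programs above; the int accumulator wraps, just as in the original, and is printed only to defeat dead-code elimination.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            enum { N = 100000000 };
            srand((unsigned)time(NULL));

            clock_t t0 = clock();
            int iacc = 0;
            for (int i = 0; i < N; ++i)
                iacc += rand() % 365;          /* integer version */
            clock_t t1 = clock();

            float facc = 0.0f;
            for (int i = 0; i < N; ++i)
                facc += (float)(rand() % 365); /* floating-point version */
            clock_t t2 = clock();

            printf("int:   %.2f s (%d)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, iacc);
            printf("float: %.2f s (%f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, facc);
            return 0;
        }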

    Read the article

  • Consuming Web Services requiring Authentication from behind Proxy server

    - by Jan Petersen
    Hi All,

    I've seen a number of posts about proxy authentication, but none that seems to address this problem. I'm building a SharePoint Web Service consuming desktop application, using Java, JAX-WS in NetBeans. I have a working prototype that can query the server for its authentication mode, successfully authenticate and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. The normal -Dhttp.proxyHost ... settings do not seem to help. I have found that by creating a ProxySelector class and setting it as default, I can regain access to the authentication web service, but I still can't retrieve the list of web sites from the SharePoint server. It's almost as if the authentication I provide is going to the proxy rather than the SharePoint server. Anyone have any experience on how to make this work?

    I have put the source text java class files of a demo app up, showing the issue, at the following urls (it's a bit too long, even in the short demo form, to post here): link text

    When running the code from a network behind a proxy server, I successfully retrieve the authentication mode from the server, but the request for the web site list generates an exception originating at:

        com.sun.xml.internal.ws.transport.http.client
            .HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:201)

    The output from the source when no proxy is on the network is listed below:

        Successfully retrieved the SharePoint WebService response for Authentication
        SharePoint authentication method is: WINDOWS
        Calling Web Service to retrieve list of web site.
        Web Service call response:
        -------------- XML START --------------
        <Webs xmlns="http://schemas.microsoft.com/sharepoint/soap/">
            <Web Title="Collaboration Lab" Url="http://host.domain.com/collaboration"/>
            <Web Title="Global Data Lists" Url="http://host.domain.com/global_data_lists"/>
            <Web Title="Landing" Url="http://host.domain.com/Landing"/>
            <Web Title="SharePoint HelpDesk" Url="http://host.domain.com/helpdesk"/>
            <Web Title="Program Management" Url="http://host.domain.com/programmanagement"/>
            <Web Title="Project Site" Url="http://host.domain.com/Project Site"/>
            <Web Title="SharePoint Administration Tools" Url="http://host.domain.com/admin"/>
            <Web Title="Space Management Project" Url="http://host.domain.com/spacemgmt"/>
        </Webs>
        -------------- XML END --------------

    Br
    Jan
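    For reference, the java.net plumbing the poster mentions can be combined so that the proxy is used only for routing while credentials are answered only to the target server. Below is a minimal sketch, assuming an unauthenticated HTTP proxy and the default JAX-WS transport (which goes through HttpURLConnection and therefore honors ProxySelector and Authenticator); the class name ProxyAwareAuth and the install method are illustrative, not part of any library.

        import java.io.IOException;
        import java.net.Authenticator;
        import java.net.InetSocketAddress;
        import java.net.PasswordAuthentication;
        import java.net.Proxy;
        import java.net.ProxySelector;
        import java.net.SocketAddress;
        import java.net.URI;
        import java.util.Collections;
        import java.util.List;

        public class ProxyAwareAuth {

            public static void install(final String proxyHost, final int proxyPort,
                                       final String user, final String password) {
                // Route all connections through the (unauthenticated) proxy.
                ProxySelector.setDefault(new ProxySelector() {
                    @Override
                    public List<Proxy> select(URI uri) {
                        return Collections.singletonList(
                                new Proxy(Proxy.Type.HTTP,
                                          new InetSocketAddress(proxyHost, proxyPort)));
                    }

                    @Override
                    public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
                        // A real implementation might log the failure and fall back to DIRECT.
                    }
                });

                // Answer authentication challenges from the target server only; returning
                // null for PROXY challenges keeps the SharePoint credentials from being
                // sent to the proxy by mistake.
                Authenticator.setDefault(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        if (getRequestorType() == RequestorType.SERVER) {
                            return new PasswordAuthentication(user, password.toCharArray());
                        }
                        return null;
                    }
                });
            }
        }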

    Read the article

  • WCF Self Host Service - Endpoints in C#

    - by Kyle
    My first few attempts at creating a self-hosted service. I'm trying to make something which will accept a query string and return some text, but I have a few issues.

    All the documentation talks about endpoints being created automatically for each base address if they are not found in a config file. This doesn't seem to be the case for me; I get the "Service has zero application endpoints..." exception. Manually specifying a base endpoint as below seems to resolve this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        namespace TestService
        {
            [ServiceContract]
            public interface IHelloWorldService
            {
                [OperationContract]
                string SayHello(string name);
            }

            public class HelloWorldService : IHelloWorldService
            {
                public string SayHello(string name)
                {
                    return string.Format("Hello, {0}", name);
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    string baseaddr = "http://localhost:8080/HelloWorldService/";
                    Uri baseAddress = new Uri(baseaddr);

                    // Create the ServiceHost.
                    using (ServiceHost host = new ServiceHost(typeof(HelloWorldService), baseAddress))
                    {
                        // Enable metadata publishing.
                        ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
                        smb.HttpGetEnabled = true;
                        smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
                        host.Description.Behaviors.Add(smb);

                        // For some reason a default endpoint does not get created here.
                        host.AddServiceEndpoint(typeof(IHelloWorldService), new BasicHttpBinding(), baseaddr + "SayHello");

                        host.Open();

                        Console.WriteLine("The service is ready at {0}", baseAddress);
                        Console.WriteLine("Press Enter to stop the service.");
                        Console.ReadLine();

                        // Close the ServiceHost.
                        host.Close();
                    }
                }
            }
        }

    I still think I'm doing something wrong, as I don't get the normal "This is a web service...etc..." page when I load up the url.

    How would I go about setting this up to return the value of name in SayHello(string name) when requested thusly:

        localhost:8080/HelloWorldService/SayHello?name=kyle

    Do I have to create an endpoint for the SayHello contract as well? I'm trying to walk before running, but this just seems like crawling... Service has zero application endpoints...

    Read the article

  • Register all GUI components as Observers or pass current object to next object as a constructor argument

    - by Jack
    First, I'd like to say that I think this is a common issue and there may be a simple or common solution that I am unaware of. Many have probably encountered a similar problem. Thanks for reading.

    I am creating a GUI where each component needs to communicate with (or at least be updated by) multiple other components. Currently, I'm using a Singleton class to accomplish this goal. Each GUI component gets the instance of the singleton and registers itself. When updates need to be made, the singleton can call public methods in the registered classes. I think this is similar to an Observer pattern, but the singleton has more control. Currently, the program is set up something like this:

        class C1
        {
            CommClass cc;

            C1()
            {
                cc = CommClass.getCommClass();
                cc.registerC1( this );
                C2 c2 = new C2();
            }
        }

        class C2
        {
            CommClass cc;

            C2()
            {
                cc = CommClass.getCommClass();
                cc.registerC2( this );
                C3 c3 = new C3();
            }
        }

        class C3
        {
            CommClass cc;

            C3()
            {
                cc = CommClass.getCommClass();
                cc.registerC3( this );
                C4 c4 = new C4();
            }
        }

        etc.

    Unfortunately, the singleton class keeps growing larger as more communication is required between the components. I was wondering if it would be a good idea, instead of using this singleton, to pass the higher-order GUI components as arguments in the constructors of each GUI component:

        class C1
        {
            C1()
            {
                C2 c2 = new C2( this );
            }
        }

        class C2
        {
            C1 c1;

            C2( C1 c1 )
            {
                this.c1 = c1;
                C3 c3 = new C3( c1, this );
            }
        }

        class C3
        {
            C1 c1;
            C2 c2;

            C3( C1 c1, C2 c2 )
            {
                this.c1 = c1;
                this.c2 = c2;
                C4 c4 = new C4( c1, c2, this );
            }
        }

        etc.

    The second version relies less on the CommClass, but it's still very messy, as the private member variables increase in number and the constructors grow in length. Each class contains GUI components that need to communicate through CommClass, but I can't think of a good way to do it. If this seems strange or horribly inefficient, please describe some method of communication between classes that will continue to work as the project grows. Also, if this doesn't make any sense to anyone, I'll try to give actual code snippets in the future and think of a better way to ask the question. Thanks.
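    For comparison, a common way out of the growing singleton is the observer shape the poster mentions: one narrow listener interface plus a small mediator that components register with, so the mediator never needs a new registerCx method per component. Below is a minimal sketch; the names (UpdateListener, Mediator, SomePanel) and the String payload are illustrative placeholders, not from the original code.

        import java.util.ArrayList;
        import java.util.List;

        // One narrow contract instead of a registerCx method per component.
        interface UpdateListener {
            void modelChanged(String what);   // the String payload is a placeholder
        }

        class Mediator {
            private final List<UpdateListener> listeners = new ArrayList<UpdateListener>();

            void register(UpdateListener listener) {
                listeners.add(listener);
            }

            void broadcast(String what) {
                // The mediator stays the same size no matter how many components exist.
                for (UpdateListener listener : listeners) {
                    listener.modelChanged(what);
                }
            }
        }

        // Each component registers itself once; the mediator is passed in, so there
        // is no singleton and no constructor chain that accumulates arguments.
        class SomePanel implements UpdateListener {
            SomePanel(Mediator mediator) {
                mediator.register(this);
            }

            public void modelChanged(String what) {
                // refresh this panel from the shared state
            }
        }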

    Read the article

< Previous Page | 818 819 820 821 822 823 824 825 826 827 828 829  | Next Page >