Search Results

Search found 3121 results on 125 pages for 'leaving employee'.

Page 115/125

  • Cross-table linq query with EF4/POCO

    - by Basiclife
    Hi All, I'm new to EF(any version) and POCO. I'm trying to use POCO entities with a generic repository in a "code-first" mode(?) I've got some POCO Entities (no proxies, no lazy loading, nothing). I have a repository(of T as Entity) which provides me with basic get/getsingle/getfirst functionality which takes a lambda as a parameter (specifically a System.Func(Of T, Boolean)) Now as I'm returning the simplest possible POCO object, none of the relationship parameters work once they've been retrieved from the database (as I would expect). However, I had assumed (wrongly) that my lambda query passed to the repository would be able to use the links between entities as it would be executed against the DB before the simple POCO entities are generated. The flow is: GUI calls: Public Function GetAllTypesForCategory(ByVal CategoryID As Guid) As IEnumerable(Of ItemType) Return ItemTypeRepository.Get(Function(x) x.Category.ID = CategoryID) End Function Get is defined in Repository(of T as Entity): Public Function [Get](ByVal Query As System.Func(Of T, Boolean)) As IEnumerable(Of T) Implements Interfaces.IRepository(Of T).Get Return ObjectSet.Where(Query).ToList() End Function The code doesn't error when this method is called but does when I try to use the result set. (This seems to be a lazy loading behaviour so I tried adding the .ToList() to force eager loading - no difference) I'm using unity/IOC to wire it all up but I believe that's irrelevant to the issue I'm having NB: Relationships between entities are being configured properly and if I turn on proxies/lazy loading/etc... this all just works. I'm intentionally leaving all that turned off as some calls to the BL will be from a website but some will be via WCF - So I want the simplest possible objects. Also, I don't want a change in an object passed to the UI to be committed to the DB if another BL method calls Commit() Can someone please either point out how to make this work or explain why it's not possible? All I want to do is make sure the lambda I pass in is performed against the DB before the results are returned Many thanks. In case it matters, the container is being populated with everything as shown below: Container.AddNewExtension(Of EFRepositoryExtension)() Container.Configure(Of IEFRepositoryExtension)(). WithConnection(ConnectionString). WithContextLifetime(New HttpContextLifetimeManager(Of IObjectContext)()). ConfigureEntity(New CategoryConfig(), "Categories"). ConfigureEntity(New ItemConfig()). ... )
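
    If the goal is for that lambda to be evaluated by the database rather than in memory, the repository signature is the likely culprit: a System.Func(Of T, Boolean) can only be applied by LINQ-to-Objects after the entities have been materialized (at which point Category is Nothing on a plain POCO), whereas an Expression(Of Func(Of T, Boolean)) lets the EF provider translate the predicate into SQL. A sketch of that change, keeping the names from the question (untested, and the IRepository interface needs the same signature):

        ' Imports System.Linq.Expressions
        Public Function [Get](ByVal Query As Expression(Of Func(Of T, Boolean))) As IEnumerable(Of T) _
            Implements Interfaces.IRepository(Of T).Get
            ' Queryable.Where() receives an expression tree here, so the
            ' x.Category.ID comparison is translated to SQL and runs server-side.
            Return ObjectSet.Where(Query).ToList()
        End Function

    The calling code stays the same; the VB compiler turns Function(x) x.Category.ID = CategoryID into an expression tree automatically once the parameter is typed as an Expression.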

    Read the article

  • Managing database connections in an Android Activity

    - by Daniel Lew
    I have an application with a ListActivity that uses a CursorAdapter as its adapter. The ListActivity opens the database and does the querying for the CursorAdapter, which is all well and good, but I am having issues with figuring out when to close both the Cursor and the SQLiteDatabase. The way things are handled right now, if the user finishes the activity, I close the database and the cursor. However, this still ends up with the DalvikVM warning me that I've left a database open - for example, if the user hits the "home" button (leaving the activity in the task's stack), rather than the "back" button. If I close them during pause and then re-query during resume, then I don't get any errors, but then a user cannot return to the list without it requerying (and thus losing the user's place in the list). By this I mean, the user can click on any item in the list and open a new activity based on it, but will often want to hit "back" afterwards and return to the same place on the list. If I requery, then I cannot return the user back to the correct spot. What is the proper way to handle this issue? I want the list to remain scrolled properly, but I don't want the VM to keep complaining about unclosed databases. Edit: Here's a general outline of how I handle the code at the moment: public class MyListActivity extends ListActivity { private Cursor mCursor; private CursorAdapter mAdapter; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mAdapter = new MyCursorAdapter(this); setListAdapter(mAdapter); } protected void onPause() { super.onPause(); if (isFinishing()) { mCursor.close(); } } protected void onDestroy() { super.onDestroy(); mCursor.close(); } private void updateQuery() { // If we had a cursor open before, close it. if (mCursor != null) { mCursor.close(); } MyDbHelper dbHelper = new MyDbHelper(this); SQLiteDatabase db = dbHelper.getReadableDatabase(); mCursor = db.query(...); mAdapter.changeCursor(mCursor); db.close(); } } updateQuery() can be called multiple times because the user can filter the results via menu items (I left this part out of the code, as the problem still occurs even if the user does no filtering). Again, the issue is that when I hit home I get leak errors. Yet, after going home, I can go back to the app and find my list again - cursor fully intact.
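
    One way to structure this (a sketch, not the only option): open a single readable database for the whole life of the activity and close the cursor and helper only in onDestroy(), so pressing "home" neither trips the leak warning nor forces a requery that loses the scroll position. The db.close() at the end of updateQuery() is suspect in any case, because the cursor handed to the adapter is still backed by that database.

        private MyDbHelper mDbHelper;      // names as in the question
        private SQLiteDatabase mDb;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDbHelper = new MyDbHelper(this);
            mDb = mDbHelper.getReadableDatabase();  // opened once, reused by every updateQuery()
            mAdapter = new MyCursorAdapter(this);
            setListAdapter(mAdapter);
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            if (mCursor != null) mCursor.close();
            mDbHelper.close();                      // closes the underlying database exactly once
        }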

    Read the article

  • Redhat | Openssl installation error

    - by MMRUser
    make -f objs/Makefile make[1]: Entering directory `/root/fuse-ssh/nginx-0.7.65' cd /usr/bin/openssl \ && make clean \ && ./config --prefix=/usr/bin/openssl/.openssl no-shared no-threads \ && make \ && make install /bin/sh: line 0: cd: /usr/bin/openssl: Not a directory make[1]: *** [/usr/bin/openssl/.openssl/include/openssl/ssl.h] Error 1 make[1]: Leaving directory `/root/fuse-ssh/nginx-0.7.65' make: *** [build] Error 2 where's the actual location of openssl, there are several different places in my system.. How to solve this issue. rpm -ql openssl /usr/bin/openssl /usr/lib64/openssl /usr/lib64/openssl/engines /usr/lib64/openssl/engines/lib4758cca.so /usr/lib64/openssl/engines/libaep.so /usr/lib64/openssl/engines/libatalla.so /usr/lib64/openssl/engines/libchil.so /usr/lib64/openssl/engines/libcswift.so /usr/lib64/openssl/engines/libgmp.so /usr/lib64/openssl/engines/libnuron.so /usr/lib64/openssl/engines/libsureware.so /usr/lib64/openssl/engines/libubsec.so /usr/share/doc/openssl-0.9.8e /usr/share/doc/openssl-0.9.8e/CHANGES /usr/share/doc/openssl-0.9.8e/FAQ /usr/share/doc/openssl-0.9.8e/INSTALL /usr/share/doc/openssl-0.9.8e/LICENSE /usr/share/doc/openssl-0.9.8e/NEWS /usr/share/doc/openssl-0.9.8e/README /usr/share/doc/openssl-0.9.8e/README.FIPS /usr/share/doc/openssl-0.9.8e/c-indentation.el /usr/share/doc/openssl-0.9.8e/openssl.txt /usr/share/doc/openssl-0.9.8e/openssl_button.gif /usr/share/doc/openssl-0.9.8e/openssl_button.html /usr/share/doc/openssl-0.9.8e/ssleay.txt /usr/bin/openssl /usr/lib/openssl /usr/lib/openssl/engines /usr/lib/openssl/engines/lib4758cca.so /usr/lib/openssl/engines/libaep.so /usr/lib/openssl/engines/libatalla.so /usr/lib/openssl/engines/libchil.so /usr/lib/openssl/engines/libcswift.so /usr/lib/openssl/engines/libgmp.so /usr/lib/openssl/engines/libnuron.so /usr/lib/openssl/engines/libsureware.so /usr/lib/openssl/engines/libubsec.so /usr/share/doc/openssl-0.9.8e /usr/share/doc/openssl-0.9.8e/CHANGES /usr/share/doc/openssl-0.9.8e/FAQ /usr/share/doc/openssl-0.9.8e/INSTALL /usr/share/doc/openssl-0.9.8e/LICENSE /usr/share/doc/openssl-0.9.8e/NEWS /usr/share/doc/openssl-0.9.8e/README /usr/share/doc/openssl-0.9.8e/README.FIPS /usr/share/doc/openssl-0.9.8e/c-indentation.el /usr/share/doc/openssl-0.9.8e/openssl.txt /usr/share/doc/openssl-0.9.8e/openssl_button.gif /usr/share/doc/openssl-0.9.8e/openssl_button.html /usr/share/doc/openssl-0.9.8e/ssleay.txt These are the places.. Thanks.
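
    The failing step is nginx's build, and the "cd /usr/bin/openssl ... Not a directory" line suggests ./configure was run with --with-openssl=/usr/bin/openssl. That option expects the path to an unpacked OpenSSL source tree that nginx can compile itself, not the installed binary. Two usual ways out (paths and version below are illustrative, not prescriptive):

        # Option 1: build against the distribution's headers and skip --with-openssl entirely
        yum install openssl-devel
        cd /root/fuse-ssh/nginx-0.7.65
        ./configure --with-http_ssl_module
        make && make install

        # Option 2: point --with-openssl at an OpenSSL *source* directory
        ./configure --with-http_ssl_module --with-openssl=/usr/local/src/openssl-0.9.8n
        make && make install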

    Read the article

  • SIFR 3.0 - Font Size

    - by Nick
    I have been working with SIFR 3.0 for some time now and the font-size never seems to work correctly. I understand the most basic concepts behind SIFR. SIFR runs when you load the page. It does some calculations one the size of the HTML rendered font and then replaces it with a flash movie that is roughly equal to that size. Because of this, you want to style your HTML font to match the size of your SIFR font as close as possible. My problem always comes up when trying to style these two font sizes to match. Let's say I want to use a SIFR font of Helvetica Neue Lt at about 32px. The HTML equivalent is something like Arial Narrow at about 36px with some negative letter spacing. So here is what I do. In sifr.css I'll write: @media screen { .sIFR-active h1 { visibility: hidden; z-index: 0 !important; font-size: 36px; } } Great, that gets the default HTML font the size I need. Now I need to update the flash SIFR font size. So I go into sifr-config.js and write something like this: sIFR.replace(HelveticaNeueThinCond, { selector: 'h1', css: '.sIFR-root { color: #762123; font-size: 32px; line-height: 1em; }', transparent: true }); So right now everything is working great. That is until my h1 text wraps more than one line. For some reason, when the text wraps it only shows the first line. It seems calculates the height wrong. This is very weird because I ran some tests. I took "visibility: hidden" off of "sIFR-active h1" to make sure that the HTML rendered text was the right size. It is, it takes up two lines. However, when the Flash replaces this text it gives it a min-height of one line of text. Odd. The only way I could find to fix this wrapping problem was to remove "font-size: 32px;" from "sIFR.replace(HelveticaNeueThinCond" in sifr-config. The problem I run into then is that it inherits the font-size set in sifr.css. Now the problem is that my HTML text is bigger then the SIFR text. So occasional my HTML text will wrap to a new line before my SIFR text leaving a big white space. So, how do I set two different font-size (one for my HTML text and one for my SIFR) without losing the wrapping. The only time I have been able to use the successfully is when I have a SIFR font that is so similar to a web safe font that they can share the same font-size attribute. Thanks

    Read the article

  • How much abstraction is too much?

    - by Daniel Bingham
    In an Object Oriented Program: How much abstraction is too much? How much is just right? I have always been a nuts and bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And where in doubt, instead of adding a new layer to the hierarchy, I would try and fit something into the existing layers. However, recently I've been encountering more highly abstracted systems. Systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future with out much refactoring. It leaves you greater ability to add new functionality on top of the old with out doing nearly as much work to adjust the old. The two risks of this seem to be that you could get the layers you need wrong. In this case one would wind up still needing to do substantial refactoring to extend the code and would still have a ton of never used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right - it may still be worth it to try. The other risk I can think of is the risk of over doing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be time that is lost up front coming up with the layers. But much of that time still might be saved later when one can work with the abstracted code rather than more low level code. So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article

  • Using sem_t in a Qt Project

    - by thauburger
    Hi everyone, I'm working on a simulation in Qt (C++), and would like to make use of a Semaphore wrapper class I made for the sem_t type. Although I am including semaphore.h in my wrapper class, running qmake provides the following error: 'sem_t does not name a type' I believe this is a library/linking error, since I can compile the class without problems from the command line. I've read that you can specify external libraries to include during compilation. However, I'm a) not sure how to do this in the project file, and b) not sure which library to include in order to access semaphore.h. Any help would be greatly appreciated. Thanks, Tom Here's the wrapper class for reference: Semaphore.h #ifndef SEMAPHORE_H #define SEMAPHORE_H #include <semaphore.h> class Semaphore { public: Semaphore(int initialValue = 1); int getValue(); void wait(); void post(); private: sem_t mSemaphore; }; #endif Semaphore.cpp #include "Semaphore.h" Semaphore::Semaphore(int initialValue) { sem_init(&mSemaphore, 0, initialValue); } int Semaphore::getValue() { int value; sem_getvalue(&mSemaphore, &value); return value; } void Semaphore::wait() { sem_wait(&mSemaphore); } void Semaphore::post() { sem_post(&mSemaphore); } And, the QT Project File: TARGET = RestaurantSimulation TEMPLATE = app QT += SOURCES += main.cpp \ RestaurantGUI.cpp \ RestaurantSetup.cpp \ WidgetManager.cpp \ RestaurantView.cpp \ Table.cpp \ GUIFood.cpp \ GUIItem.cpp \ GUICustomer.cpp \ GUIWaiter.cpp \ Semaphore.cpp HEADERS += RestaurantGUI.h \ RestaurantSetup.h \ WidgetManager.h \ RestaurantView.h \ Table.h \ GUIFood.h \ GUIItem.h \ GUICustomer.h \ GUIWaiter.h \ Semaphore.h FORMS += RestaurantSetup.ui LIBS += Full Compiler Output: g++ -c -pipe -g -gdwarf-2 -arch i386 -Wall -W -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED - I/usr/local/Qt4.6/mkspecs/macx-g++ -I. - I/Library/Frameworks/QtCore.framework/Versions/4/Headers -I/usr/include/QtCore - I/Library/Frameworks/QtGui.framework/Versions/4/Headers -I/usr/include/QtGui - I/usr/include -I. -I. -F/Library/Frameworks -o main.o main.cpp In file included from RestaurantGUI.h:10, from main.cpp:2: Semaphore.h:14: error: 'sem_t' does not name a type make: *** [main.o] Error 1 make: Leaving directory `/Users/thauburger/Desktop/RestaurantSimulation' Exited with code 2. Error while building project RestaurantSimulation When executing build step 'Make'

    Read the article

  • Get the string representation of a jquery DOM object's entire HTML

    - by Scozzard
    Hi, I have had a bit of a look around and am having some difficulty solving a wee issue I am having. I basically have a string of HTML, I convert that to a JQuery DOM object so that I can easily remove all elements that have a certain class using JQuery's .remove(). I.e., var radHtml = editor.get_html(); var jqDom = $(radHtml); $(".thickbox", jqDom).remove(); $(".thickboxcontent", jqDom).remove(); editor.set_html(this.innerHTML); NOTE: The HTML is derived from content in a RADEditor text editor so there are no parent HTML tags, so can look as follows: <p>This is a header</p> <p>this is some content followed by a table </p> <a href="#TB_inline?height=350&amp;width=400&amp;inlineId=myOnPageContent0" class="thickbox">Test Thickbox</a> <div id="myOnPageContent0" class="thickboxcontent"> <table class="modal"> <thead> </thead> <tbody> <tr> <td>item</td> <td>result</td> </tr> <tr> <td>item 1</td> <td>1</td> </tr> <tr> <td>item 2</td> <td>2</td> </tr> <tr> <td>item 3</td> <td>3</td> </tr> </tbody> </table> </div> Here is what the jqDom.html() returns from the HTML above: "This is a header" I was wondering if there was an easy way to do this - have some html and remove all elements (in this case divs) that have a certain class (but leaving their contents). JQuery doesnt have to used, but I would like to. Manipulating the DOM object is fine - it is getting the full DOM object in its entirety as a string that I am having the problem with. Any help would be much appreicated. Thanks.
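
    Because the fragment has no single root element, one common trick is to append it to a detached wrapper <div>, do the surgery there, and read the wrapper's .html() to get the whole fragment back as a string. A sketch (uses .unwrap(), which needs jQuery 1.4+; keep .remove() only for elements whose contents should disappear too):

        var $wrapper = $('<div>').append(editor.get_html());
        $wrapper.find('a.thickbox').remove();                   // drop the trigger links entirely
        $wrapper.find('.thickboxcontent').contents().unwrap();  // keep the tables, drop the wrapping divs
        editor.set_html($wrapper.html());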

    Read the article

  • Creating a tiled map with blender

    - by JamesB
    I'm looking at creating map tiles based on a 3D model made in blender, The map is 16 x 16 in blender. I've got 4 different zoom levels and each tile is 100 x 100 pixels. The entire map at the most zoomed out level is 4 x 4 tiles constructing an image of 400 x 400. The most zoomed in level is 256 x 256 obviously constructing an image of 25600 x 25600 What I need is a script for blender that can create the tiles from the model. I've never written in python before so I've been trying to adapt a couple of the scripts which are already there. So far I've come up with a script, but it doesn't work very well. I'm having real difficulties getting the tiles to line up seamlessly. I'm not too concerned about changing the height of the camera as I can always create the same zoomed out tiles at 6400 x 6400 images and split the resulting images into the correct tiles. Here is what I've got so far... #!BPY """ Name: 'Export Map Tiles' Blender: '242' Group: 'Export' Tip: 'Export to Map' """ import Blender from Blender import Scene,sys from Blender.Scene import Render def init(): thumbsize = 200 CameraHeight = 4.4 YStart = -8 YMove = 4 XStart = -8 XMove = 4 ZoomLevel = 1 Path = "/Images/Map/" Blender.drawmap = [thumbsize,CameraHeight,YStart,YMove,XStart,XMove,ZoomLevel,Path] def show_prefs(): buttonthumbsize = Blender.Draw.Create(Blender.drawmap[0]); buttonCameraHeight = Blender.Draw.Create(Blender.drawmap[1]) buttonYStart = Blender.Draw.Create(Blender.drawmap[2]) buttonYMove = Blender.Draw.Create(Blender.drawmap[3]) buttonXStart = Blender.Draw.Create(Blender.drawmap[4]) buttonXMove = Blender.Draw.Create(Blender.drawmap[5]) buttonZoomLevel = Blender.Draw.Create(Blender.drawmap[6]) buttonPath = Blender.Draw.Create(Blender.drawmap[7]) block = [] block.append(("Image Size", buttonthumbsize, 0, 500)) block.append(("Camera Height", buttonCameraHeight, -0, 10)) block.append(("Y Start", buttonYStart, -10, 10)) block.append(("Y Move", buttonYMove, 0, 5)) block.append(("X Start", buttonXStart,-10, 10)) block.append(("X Move", buttonXMove, 0, 5)) block.append(("Zoom Level", buttonZoomLevel, 1, 10)) block.append(("Export Path", buttonPath,0,200,"The Path to save the tiles")) retval = Blender.Draw.PupBlock("Draw Map: Preferences" , block) if retval: Blender.drawmap[0] = buttonthumbsize.val Blender.drawmap[1] = buttonCameraHeight.val Blender.drawmap[2] = buttonYStart.val Blender.drawmap[3] = buttonYMove.val Blender.drawmap[4] = buttonXStart.val Blender.drawmap[5] = buttonXMove.val Blender.drawmap[6] = buttonZoomLevel.val Blender.drawmap[7] = buttonPath.val Export() def Export(): scn = Scene.GetCurrent() context = scn.getRenderingContext() def cutStr(str): #cut off path leaving name c = str.find("\\") while c != -1: c = c + 1 str = str[c:] c = str.find("\\") str = str[:-6] return str #variables from gui: thumbsize,CameraHeight,YStart,YMove,XStart,XMove,ZoomLevel,Path = Blender.drawmap XMove = XMove / ZoomLevel YMove = YMove / ZoomLevel Camera = Scene.GetCurrent().getCurrentCamera() Camera.LocZ = CameraHeight / ZoomLevel YStart = YStart + (YMove / 2) XStart = XStart + (XMove / 2) #Point it straight down Camera.RotX = 0 Camera.RotY = 0 Camera.RotZ = 0 TileCount = 4**ZoomLevel #Because the first thing we do is move the camera, start it off the map Camera.LocY = YStart - YMove for i in range(0,TileCount): Camera.LocY = Camera.LocY + YMove Camera.LocX = XStart - XMove for j in range(0,TileCount): Camera.LocX = Camera.LocX + XMove Render.EnableDispWin() context.extensions = True context.renderPath = Path #setting 
thumbsize context.imageSizeX(thumbsize) context.imageSizeY(thumbsize) #could be put into a gui. context.imageType = Render.PNG context.enableOversampling(0) #render context.render() #save image ZasString = '%s' %(int(ZoomLevel)) XasString = '%s' %(int(j+1)) YasString = '%s' %(int((3-i)+1)) context.saveRenderedImage("Z" + ZasString + "X" + XasString + "Y" + YasString) #close the windows Render.CloseRenderWindow() try: type(Blender.drawmap) except: #print 'initialize extern variables' init() show_prefs()

    Read the article

  • XSLT Document function returns empty result on Maven POM

    - by user328618
    Greetings! I want to extract some properties from different Maven POMs in a XSLT via the document function. The script itself works fine but the document function returns an empty result for the POM as long as I have the xmlns="http://maven.apache.org/POM/4.0.0" in the project tag. If I remove it, everything works fine. Any idea how the make this work while leaving the xmlns attribute where it belongs or why this doesn't work with the attribute in place? Here comes the relevant portion of my XSLT: <xsl:template match="abcs"> <xsl:variable name="artifactCoordinate" select="abc"/> <xsl:choose> <xsl:when test="document(concat($artifactCoordinate,'-pom.xml'))"> <abc> <ID><xsl:value-of select="$artifactCoordinate"/></ID> <xsl:copy-of select="document(concat($artifactCoordinate,'-pom.xml'))/project/properties"/> </abc> </xsl:when> <xsl:otherwise> <xsl:message terminate="yes"> Transformation failed: POM "<xsl:value-of select="concat($artifactCoordinate,'-pom.xml')"/>" doesn't exist. </xsl:message> </xsl:otherwise> </xsl:choose> And, for completeness, a POM extract with the "bad" attribute: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <!-- ... --> <properties> <proalpha.version>[5.2a]</proalpha.version> <proalpha.openedge.version>[10.1B]</proalpha.openedge.version> <proalpha.optimierer.version>[1.1]</proalpha.optimierer.version> <proalpha.sonic.version>[7.6.1]</proalpha.sonic.version> </properties> </project>
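
    The empty result is standard namespace behaviour rather than a document() quirk: unprefixed names in an XPath expression match elements in no namespace, while the POM's elements live in http://maven.apache.org/POM/4.0.0. One way out is to bind a prefix purely for use inside the XPath, leaving both documents untouched; a cut-down sketch:

        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:pom="http://maven.apache.org/POM/4.0.0"
            exclude-result-prefixes="pom">
          <xsl:template match="abcs">
            <abc>
              <ID><xsl:value-of select="abc"/></ID>
              <xsl:copy-of select="document(concat(abc,'-pom.xml'))/pom:project/pom:properties"/>
            </abc>
          </xsl:template>
        </xsl:stylesheet>

    Since the stylesheet already declares version="2.0", an alternative is to put xpath-default-namespace="http://maven.apache.org/POM/4.0.0" on the xsl:copy-of instruction itself, so only that expression is affected and the unprefixed path can stay as written.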

    Read the article

  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
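
    Whatever playback layer ends up being used, one pattern that removes the Play()/Sleep()/Stop() timing problem entirely is to render the whole tone/silence sequence into a single PCM buffer up front, so every element is exact to the sample. The sketch below is plain C# with no particular audio library assumed; frequency 0 stands for silence:

        // using System; using System.Collections.Generic;
        // 16-bit mono PCM; durations are exact to the sample at the given rate
        static void AppendElement(List<short> buffer, double freq, int ms, int sampleRate)
        {
            int count = sampleRate * ms / 1000;
            for (int i = 0; i < count; i++)
            {
                double v = freq > 0 ? Math.Sin(2 * Math.PI * freq * i / sampleRate) : 0.0;
                buffer.Add((short)(v * short.MaxValue * 0.8));  // 0.8 leaves headroom
            }
        }

    Appending (700 Hz, 40 ms) and then (0 Hz, 40 ms) gives a dit followed by an element space, both exactly 40 ms; the finished buffer is wrapped in a PCM/WAV stream once and handed to whichever output class survives the experiment.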

    Read the article

  • PyGTK: dynamic label wrapping

    - by detly
    It's a known bug/issue that a label in GTK will not dynamically resize when the parent changes. It's one of those really annoying small details, and I want to hack around it if possible. I followed the approach at 16 software, but as per the disclaimer you cannot then resize it smaller. So I attempted a trick mentioned in one of the comments (the set_size_request call in the signal callback), but this results in some sort of infinite loop (try it and see). Does anyone have any other ideas? (You can't block the signal just for the duration of the call, since as the print statements seem to indicate, the problem starts after the function is left.) The code is below. You can see what I mean if you run it and try to resize the window larger and then smaller. (If you want to see the original problem, comment out the line after "Connect to the size-allocate signal", run it, and resize the window bigger.) The Glade file ("example.glade"): <?xml version="1.0"?> <glade-interface> <!-- interface-requires gtk+ 2.16 --> <!-- interface-naming-policy project-wide --> <widget class="GtkWindow" id="window1"> <property name="visible">True</property> <signal name="destroy" handler="on_destroy"/> <child> <widget class="GtkLabel" id="label1"> <property name="visible">True</property> <property name="label" translatable="yes">In publishing and graphic design, lorem ipsum[p][1][2] is the name given to commonly used placeholder text (filler text) to demonstrate the graphic elements of a document or visual presentation, such as font, typography, and layout. The lorem ipsum text, which is typically a nonsensical list of semi-Latin words, is a hacked version of a Latin text by Cicero, with words/letters omitted and others inserted, but not proper Latin[1][2] (see below: History and discovery). The closest English translation would be "pain itself" (dolorem = pain, grief, misery, suffering; ipsum = itself).</property> <property name="wrap">True</property> </widget> </child> </widget> </glade-interface> The Python code: #!/usr/bin/python import pygtk import gobject import gtk.glade def wrapped_label_hack(gtklabel, allocation): print "In wrapped_label_hack" gtklabel.set_size_request(allocation.width, -1) # If you uncomment this, we get INFINITE LOOPING! # gtklabel.set_size_request(-1, -1) print "Leaving wrapped_label_hack" class ExampleGTK: def __init__(self, filename): self.tree = gtk.glade.XML(filename, "window1", "Example") self.id = "window1" self.tree.signal_autoconnect(self) # Connect to the size-allocate signal self.get_widget("label1").connect("size-allocate", wrapped_label_hack) def on_destroy(self, widget): self.close() def get_widget(self, id): return self.tree.get_widget(id) def close(self): window = self.get_widget(self.id) if window is not None: window.destroy() gtk.main_quit() if __name__ == "__main__": window = ExampleGTK("example.glade") gtk.main()

    Read the article

  • Java Nimbus LAF with transparent text fields

    - by Software Monkey
    I have an application that uses disabled JTextFields in several places which are intended to be transparent - allowing the background to show through instead of the text field's normal background. When running the new Nimbus LAF these fields are opaque (despite setting setOpaque(false)), and my UI is broken. It's as if the LAF is ignoring the opaque property. Setting a background color explicitly is both difficult in several places, and less than optimal due to background images actually doesn't work - it still paints it's LAF default background over the top, leaving a border-like appearance (the splash screen below has the background explicitly set to match the image). Any ideas on how I can get Nimbus to not paint the background for a JTextField? Note: I need a JTextField, rather than a JLabel, because I need the thread-safe setText(), and wrapping capability. Note: My fallback position is to continue using the system LAF, but Nimbus does look substantially better. See example images below. Conclusions The surprise at this behavior is due to a misinterpretation of what setOpaque() is meant to do - from the Nimbus bug report: This is a problem the the orginal design of Swing and how it has been confusing for years. The issue is setOpaque(false) has had a side effect in exiting LAFs which is that of hiding the background which is not really what it is ment for. It is ment to say that the component my have transparent parts and swing should paint the parent component behind it. It's unfortunate that the Nimbus components also appear not to honor setBackground(null) which would otherwise be the recommended way to stop the background painting. Setting a fully transparent background seems unintuitive to me. In my opinion, setOpaque()/isOpaque() is a faulty public API choice which should have been only: public boolean isFullyOpaque(); I say this, because isOpaque()==true is a contract with Swing that the component subclass will take responsibility for painting it's entire background - which means the parent can skip painting that region if it wants (which is an important performance enhancement). Something external cannot directly change this contract (legitimately), whose fulfillment may be coded into the component. So the opacity of the component should not have been settable using setOpaque(). Instead something like setBackground(null) should cause many components to "no long have a background" and therefore become not fully opaque. By way of example, in an ideal world most components should have an isOpaque() that looks like this: public boolean isOpaque() { return (background!=null); }
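
    For completeness, Nimbus also exposes its painters through per-component overrides, which is another way to stop a JTextField from painting any background at all. A sketch (assumes Java 8+ so javax.swing.Painter can be written as a lambda; the defaults keys follow Nimbus's published table, so treat them as to-be-verified on the target JRE):

        JTextField field = new JTextField("transparent");
        field.setOpaque(false);

        // replace Nimbus's background painters for this one component with a no-op
        UIDefaults overrides = new UIDefaults();
        Painter<JComponent> nothing = (g, c, w, h) -> { /* paint nothing */ };
        overrides.put("TextField[Enabled].backgroundPainter", nothing);
        overrides.put("TextField[Disabled].backgroundPainter", nothing);
        field.putClientProperty("Nimbus.Overrides", overrides);
        field.putClientProperty("Nimbus.Overrides.InheritDefaults", Boolean.TRUE);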

    Read the article

  • Seg Fault when using std::string on an embedded Linux platform

    - by Brad
    Hi, I have been working for a couple of days on a problem with my application running on an embedded Arm Linux platform. Unfortunately the platform precludes me from using any of the usual useful tools for finding the exact issue. When the same code is run on the PC running Linux, I get no such error. In the sample below, I can reliably reproduce the problem by uncommenting the string, list or vector lines. Leaving them commented results in the application running to completion. I expect that something is corrupting the heap, but I cannot see what? The program will run for a few seconds before giving a segmentation fault. The code is compiled using a arm-linux cross compiler: arm-linux-g++ -Wall -otest fault.cpp -ldl -lpthread arm-linux-strip test Any ideas greatly appreciated. #include <stdio.h> #include <vector> #include <list> #include <string> using namespace std; ///////////////////////////////////////////////////////////////////////////// class TestSeg { static pthread_mutex_t _logLock; public: TestSeg() { } ~TestSeg() { } static void* TestThread( void *arg ) { int i = 0; while ( i++ < 10000 ) { printf( "%d\n", i ); WriteBad( "Function" ); } pthread_exit( NULL ); } static void WriteBad( const char* sFunction ) { pthread_mutex_lock( &_logLock ); printf( "%s\n", sFunction ); //string sKiller; // <----------------------------------Bad //list<char> killer; // <----------------------------------Bad //vector<char> killer; // <----------------------------------Bad pthread_mutex_unlock( &_logLock ); return; } void RunTest() { int threads = 100; pthread_t _rx_thread[threads]; for ( int i = 0 ; i < threads ; i++ ) { pthread_create( &_rx_thread[i], NULL, TestThread, NULL ); } for ( int i = 0 ; i < threads ; i++ ) { pthread_join( _rx_thread[i], NULL ); } } }; pthread_mutex_t TestSeg::_logLock = PTHREAD_MUTEX_INITIALIZER; int main( int argc, char *argv[] ) { TestSeg seg; seg.RunTest(); pthread_exit( NULL ); }
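
    Before assuming heap corruption, it may be worth ruling out the build flags: the compile line links -lpthread but never passes -pthread, and on some cross toolchains that leaves the code built without the re-entrant definitions libstdc++/libc expect, which can show up exactly like this once several threads allocate strings or containers at once. A cheap first experiment:

        arm-linux-g++ -Wall -pthread -o test fault.cpp -ldl
        arm-linux-strip test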

    Read the article

  • Datagrid using usercontrol

    - by klawusel
    Hello I am fighting with this problem: I have a usercontrol which contains a textbox and a button (the button calls some functions to set the textbox's text), here is the xaml: <UserControl x:Class="UcSelect" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="Control1Name" <Grid x:Name="grid1" MaxHeight="25"> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition Width="25"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="25"/> </Grid.RowDefinitions> <TextBox x:Name="txSelect" Text="{Binding UcText, Mode=TwoWay}" /> <Button x:Name="pbSelect" Background="Red" Grid.Column="1" Click="pbSelect_Click">...</Button> </Grid> And here the code behind: Partial Public Class UcSelect Private Shared Sub textChangedCallBack(ByVal [property] As DependencyObject, ByVal args As DependencyPropertyChangedEventArgs) Dim UcSelectBox As UcSelect = DirectCast([property], UcSelect) End Sub Public Property UcText() As String Get Return GetValue(UcTextProperty) End Get Set(ByVal value As String) SetValue(UcTextProperty, value) End Set End Property Public Shared ReadOnly UcTextProperty As DependencyProperty = _ DependencyProperty.Register("UcText", _ GetType(String), GetType(UcSelect), _ New FrameworkPropertyMetadata(String.Empty, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, New PropertyChangedCallback(AddressOf textChangedCallBack))) Public Sub New() InitializeComponent() grid1.DataContext = Me End Sub Private Sub pbSelect_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) 'just demo UcText = UcText + "!" End Sub End Class The UserControl works fine when used as a single control in this way: <local:UcSelect Grid.Row="1" x:Name="ucSingle1" UcText="{Binding FirstName, Mode=TwoWay}"/> Now I wanted to use the control in a custom datagrid column. As I like to have binding support I choosed to derive from DataGridtextColumn instead of using a DataGridTemplateColumn, here is the derived column class: Public Class DerivedColumn Inherits DataGridTextColumn Protected Overloads Overrides Function GenerateElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement Dim oElement = MyBase.GenerateElement(oCell, oDataItem) Return oElement End Function Protected Overloads Overrides Function GenerateEditingElement(ByVal oCell As DataGridCell, ByVal oDataItem As Object) As FrameworkElement Dim oUc As New UcSelect Dim oBinding As Binding = CType(Me.Binding, Binding) oUc.SetBinding(UcSelect.UcTextProperty, oBinding) Return oUc End Function End Class The column is used in xaml in the following way: <local:DerivedColumn Header="Usercontrol" Binding="{Binding FirstName, Mode=TwoWay}"></local:DerivedColumn> If I start my program all seems to be fine, but changes I make in the custom column are not reflected in the object (property "FirstName"), the changes are simply rolled back when leaving the cell. I think there must be something wrong with my GenerateEditingElement code, but have no idea ... Any Help would really be appreciated Regards Klaus
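
    One hook worth trying (a sketch against the WPF DataGrid's editing pipeline, not a verified fix): DataGridColumn exposes CommitCellEdit, and a derived column with a custom editing element can push the UserControl's dependency property back into the row's data item there, instead of relying on the default commit logic that only understands the column's own text binding:

        Protected Overrides Function CommitCellEdit(ByVal editingElement As FrameworkElement) As Boolean
            ' editingElement is the UcSelect created in GenerateEditingElement
            Dim expr = editingElement.GetBindingExpression(UcSelect.UcTextProperty)
            If expr IsNot Nothing Then
                expr.UpdateSource()
            End If
            Return True
        End Function

    Whether this alone is enough depends on how the binding is set up in GenerateEditingElement, but it is the point at which the grid expects an editing element to hand its value back.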

    Read the article

  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify is there any possible memory leaks in following code. I have tried with .Net Memory Profiler and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks as I have confirmed this using Performance Monitors. but there are already dispose, clear, close calls are placed wherever possible please advise me accordingly. its a been urgent. public static string Encrypt(string plainText, string key) { //Set up the encryption objects byte[] encryptedBytes = null; using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key))) { byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText); using (ICryptoTransform ictE = acsp.CreateEncryptor()) { //Set up stream to contain the encryption using (MemoryStream msS = new MemoryStream()) { //Perform the encrpytion, storing output into the stream using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write)) { csS.Write(sourceBytes, 0, sourceBytes.Length); csS.FlushFinalBlock(); //sourceBytes are now encrypted as an array of secure bytes encryptedBytes = msS.ToArray(); //.ToArray() is important, don't mess with the buffer csS.Close(); } msS.Close(); } } acsp.Clear(); } //return the encrypted bytes as a BASE64 encoded string return Convert.ToBase64String(encryptedBytes); } private static AesCryptoServiceProvider GetProvider(byte[] key) { AesCryptoServiceProvider result = new AesCryptoServiceProvider(); result.BlockSize = 128; result.KeySize = 256; result.Mode = CipherMode.CBC; result.Padding = PaddingMode.PKCS7; result.GenerateIV(); result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; byte[] RealKey = GetKey(key, result); result.Key = RealKey; // result.IV = RealKey; return result; } private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p) { byte[] kRaw = suggestedKey; List<byte> kList = new List<byte>(); for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8) { kList.Add(kRaw[(i / 8) % kRaw.Length]); } byte[] k = kList.ToArray(); return k; }

    Read the article

  • How do I ADD an Attribute to the Root Element in XML using XSLT?

    - by kunjaan
    I want to match a root Element “FOO” and perform the transformation (add a version attribute) to it leaving the rest as it is. The Transformation I have so far looks like this: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://schemas.foo.com/fooNameSpace"> <xsl:template match="//FOO"> <xsl:choose> <xsl:when test="@version"> <xsl:apply-templates select="node()|@*" /> </xsl:when> <xsl:otherwise> <FOO> <xsl:attribute name="version">1</xsl:attribute> <xsl:apply-templates select="node()|@*" /> </FOO> </xsl:otherwise> </xsl:choose> </xsl:template> However this does not perform any transformation. It doesn't even detect the element. So I need to do add the namespace in order to make it work: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:fd="http://schemas.foo.com/fooNameSpace"> <xsl:template match="//fd:FOO"> … But this attaches a namespace attribute to the FOO element as well as other elements: <FOO xmlns:fd="http://schemas.foo.com/fooNameSpace" version="1" id="fooid"> <BAR xmlns="http://schemas.foo.com/fooNameSpace" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> Is there a way to say that the element is using the default namespace? Can we match and add elements in the default name space? Here is the original XML: <?xml version="1.0" encoding="UTF-8"?> <FOO xmlns="http://schemas.foo.com/fooNameSpace" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <BAR> <Attribute name="HEIGHT">2067</Attribute> </BAR> </FOO>
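
    For reference, the usual way to get both halves at once (matching elements that sit in the default namespace, and writing them back out without extra prefixes) is an identity transform plus xsl:copy: copying preserves the element's original namespace, and exclude-result-prefixes keeps fd out of the serialized result. A sketch:

        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:fd="http://schemas.foo.com/fooNameSpace"
            exclude-result-prefixes="fd">

          <!-- identity: copy everything unchanged -->
          <xsl:template match="@*|node()">
            <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
          </xsl:template>

          <!-- add version="1" only when FOO does not already carry one -->
          <xsl:template match="fd:FOO[not(@version)]">
            <xsl:copy>
              <xsl:attribute name="version">1</xsl:attribute>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>

        </xsl:stylesheet>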

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunatey, I am discovering, much to my chagrin, that there doesn't seem to be any one widely-used or even "generally accepted" standard for PL-SQL. That kind of puts a crimp on my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options: Add a feature that lets the user customize the standard and then reformat the code according to that standard. —OR— Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched. In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to a lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring." P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.

    Read the article

  • JQGrid and JQuery Autocomplete

    - by Neff
    When implementing JQGrid 4.3.0, Jquery 1.6.2, and JQuery UI 1.8.16 Ive come across an issue with the Inline edit. When the inline edit is activated, some of the elements get assigned an auto complete. When the inline edit is canceld or saved, the auto complete does not always go away (selecting text by double clicking it then hitting delete, then hitting escape to exit row edit). Leaving the auto complete controls in edit mode when the row is no longer considered in edit mode. Perhaps you can tell me if there is a problem with the initialization or if I you are aware of an event post-"afterrestorefunc" that the fields can be returned to their "original" state. Original state being displayed as data in the JQGrid row. I've tried removing the DOM after row close, .remove() and .empty(): ... "afterrestorefunc": function(){ $('.ui-autocomplete-input').remove(); } ... but that causes other issues, such as the jqgrid is not able to find the cell when serializing the row for data or edit, and requires a refresh of the page, not just jqgrid, to be able to once again see the data from that row. Auto complete functionality for the element is created on the double click of the row: function CreateCustomSearchElement(value, options, selectiontype) { ... var el; el = document.createElement("input"); ... $(el).autocomplete({ source: function (request, response) { $.ajax({ url: '<%=ResolveUrl("~/Services/AutoCompleteService.asmx/GetAutoCompleteResponse") %>', data: "{ 'prefixText': '" + request.term + "', 'contextKey': '" + options.name + "'}", dataType: "json", type: "POST", contentType: "application/json; charset=utf-8", success: function (data) { response($.map(data.d, function (item) { return { label: Trim(item), value: Trim(item), searchVal: Trim(item) } })) } }); }, select: function (e, item) { //Select is on the event of selection where the value and label have already been determined. }, minLength: 1, change: function (event, ui) { //if the active element was not the search button //... } }).keyup(function (e) { if (e.keyCode == 8 || e.keyCode == 46) { //If the user hits backspace or delete, check the value of the textbox before setting the searchValue //... } }).keydown(function (e) { //if keycode is enter key and there is a value, you need to validate the data through select or change(onblur) if (e.keyCode == '13' && ($(el).val())) { return false; } if (e.keyCode == '220') { return false } }); } Other Sources: http://www.trirand.com/jqgridwiki/doku.php?id=wiki:inline_editing http://api.jqueryui.com/autocomplete/ Update: I tried only creating the autocomplete when the element was focused, and removing it when onblur. That did not resolve the issue either. It seems to just need the autocomplete dropdown to be triggered.
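
    A less drastic cleanup than removing DOM nodes is to tear the widget down in place: jQuery UI's autocomplete has a destroy method that restores the original input, so jqGrid can still find and serialize the cell afterwards. A sketch for the inline-edit callbacks (rowid is the id jqGrid gives each row):

        "afterrestorefunc": function (rowid) {
            // destroy, don't remove: the cell's DOM stays where jqGrid expects it
            $('#' + rowid).find('.ui-autocomplete-input').autocomplete('destroy');
        },
        "aftersavefunc": function (rowid) {
            $('#' + rowid).find('.ui-autocomplete-input').autocomplete('destroy');
        }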

    Read the article

  • Are "EXC_BREAKPOINT (SIGTRAP)" exceptions caused by debugging breakpoints?

    - by Dennis
    I have a multithreaded app that is very stable on all my test machines and seems to be stable for almost every one of my users (based on no complaints of crashes). The app crashes frequently for one user, though, who was kind enough to send crash reports. All the crash reports (~10 consecutive reports) look essentially identical: Date/Time: 2010-04-06 11:44:56.106 -0700 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.CoreFoundation 0x90ab98d4 __CFBasicHashRehash + 3348 1 com.apple.CoreFoundation 0x90adf610 CFBasicHashRemoveValue + 1264 2 com.apple.CoreText 0x94e0069c TCFMutableSet::Intersect(__CFSet const*) const + 126 3 com.apple.CoreText 0x94dfe465 TDescriptorSource::CopyMandatoryMatchableRequest(__CFDictionary const*, __CFSet const*) + 115 4 com.apple.CoreText 0x94dfdda6 TDescriptorSource::CopyDescriptorsForRequest(__CFDictionary const*, __CFSet const*, long (*)(void const*, void const*, void*), void*, unsigned long) const + 40 5 com.apple.CoreText 0x94e00377 TDescriptor::CreateMatchingDescriptors(__CFSet const*, unsigned long) const + 135 6 com.apple.AppKit 0x961f5952 __NSFontFactoryWithName + 904 7 com.apple.AppKit 0x961f54f0 +[NSFont fontWithName:size:] + 39 (....more text follows) First, I spent a long time investigating [NSFont fontWithName:size:]. I figured that maybe the user's fonts were screwed up somehow, so that [NSFont fontWithName:size:] was requesting something non-existent and failing for that reason. I added a bunch of code using [[NSFontManager sharedFontManager] availableFontNamesWithTraits:NSItalicFontMask] to check for font availability in advance. Sadly, these changes didn't fix the problem. I've now noticed that I forgot to remove some debugging breakpoints, including _NSLockError, [NSException raise], and objc_exception_throw. However, the app was definitely built using "Release" as the active build configuration. I assume that using the "Release" configuration prevents setting of any breakpoints--but then again I am not sure exactly how breakpoints work or whether the program needs to be run from within gdb for breakpoints to have any effect. My questions are: could my having left the breakpoints set be the cause of the crashes observed by the user? If so, why would the breakpoints cause a problem only for this one user? If not, has anybody else had similar problems with [NSFont fontWithName:size:]? I will probably just try removing the breakpoints and sending back to the user, but I'm not sure how much currency I have left with that user. And I'd like to understand more generally whether leaving the breakpoints set could possibly cause a problem (when the app is built using "Release" configuration).

    Read the article

  • .NET Application broken on one PC, unhandleable exception

    - by Bobby
    Hello people. I have a .NET 2.0 application with nothing fancy in it. It worked until yesterday on every PC I installed or copied it to, no matter if 2.0, 3.0, 3.5 or 3.5 SP1 was installed, no matter if it was Win2000, XP or even Win7 (in total 100+ machines). Yesterday I did my normal installation procedure and wanted to start it one time to check if everything is working...and it wasn't. The program crashed hard leaving me with the uninformative "Do you wanna report this error?" dialog. The problem is an exception in the Main(String[] args) routine of my application. The event viewer is showing the following entry: Event Type: ErrorEvent Source: .NET Runtime 2.0 Error Reporting Event Category: None Event ID: 5000 Date: 05/05/2010 Time: 16:09:09 User: N/A Computer: myClientPC Description: EventType clr20r3, P1 apomenu.exe, P2 1.4.90.53, P3 4bdedea4, P4 system.configuration, P5 2.0.0.0, P6 4889de74, P7 1a6, P8 136, P9 ioibmurhynrxkw0zxkyrvfn0boyyufow, P10 NIL. Well...great information. After a lot of searching I finally was able to get further information about this exception (by adding a handler for UnhandledExceptions directly in My.MyApplication.New(), Application.Designer.vb): System.Configuration.ConfigurationErrorsException Configuration system failed to initialize at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey) at System.Configuration.ClientConfigurationSystem.PrepareClientConfigSystem(String sectionName) at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String sectionName) at System.Configuration.ConfigurationManager.GetSection(String sectionName) at System.Configuration.PrivilegedConfigurationManager.GetSection(String sectionName) at System.Net.Configuration.SettingsSectionInternal.get_Section() at System.Net.Sockets.Socket.InitializeSockets() at System.Net.Sockets.Socket.get_SupportsIPv4() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.get_HostName() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.RegisterChannel(Boolean SecureChannel) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at MyAppNameHere.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:Line 81. And at this point I'm stuck...I'm out of ideas. I'm not using any kind of configuration system from the framework (no reference to System.Configuration, and there never was a MyAppnameHere.exe.config generated or distributed, nor have I seen this error before). I also found a bug report at Microsoft (Google Cache) about this bug (in another context, though). But as it seems, they won't even look at it. Every help is greatly appreciated! Edit: I'm using Visual Studio 2008 Prof.. Crash happens in Release- and Debug-Build on the client machine. Debugging the application directly on this machine is out of question I fear, 300+ Miles and they only have two computers to work with.

    Read the article

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.

    Read the article

  • separating interface and implementation with normal functions

    - by ace
    this seems like it should be pretty simple, I'm probably leaving something simple out. this is the code I'm trying to run. it is 3 files, 2*cpp and 1*header. -------------lab6.h #ifndef LAB6_H_INCLUDED #define LAB6_H_INCLUDED int const arraySize = 10; int array1[arraySize]; int array2[arraySize]; void generateArray(int[], int ); void displayArray(int[], int[], int ); void reverseOrder(int [],int [], int); #endif // LAB6_H_INCLUDED -----------------lab6.cpp #include <iostream> using std::cout; using std::endl; #include <cstdlib> using std::rand; using std::srand; #include <ctime> using std::time; #include <iomanip> using std::setw; #include "lab6.h" void generateArray(int array1[], int arraySize) { srand(time(0)); for (int i=0; i<10; i++) { array1[i]=(rand()%10); } } void displayArray(int array1[], int array2[], int arraySize) { cout<<endl<<"Array 1"<<endl; for (int i=0; i<arraySize; i++) { cout<<array1[i]<<", "; } cout<<endl<<"Array 2"<<endl; for (int i=0; i<arraySize; i++) { cout<<array2[i]<<", "; } } void reverseOrder(int array1[],int array2[], int arraySize) { for (int i=0, j=arraySize-1; i<arraySize;j--, i++) { array2[j] = array1[i]; } } ------------and finally main.cpp #include "lab6.h" int main() { generateArray(array1, arraySize); reverseOrder(array1, array2, arraySize); displayArray(array1, array2, arraySize); return 0; }
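
    If the failure is a linker complaint about multiple definitions, the likely cause is that lab6.h defines array1 and array2, so every .cpp that includes it gets its own copy. The conventional split (again only a sketch) is to declare in the header and define exactly once:

        // lab6.h: declarations only (arraySize can stay, const has internal linkage)
        extern int array1[arraySize];
        extern int array2[arraySize];

        // lab6.cpp (one translation unit only): the definitions
        int array1[arraySize];
        int array2[arraySize];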

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.

    Read the article

  • How do I prevent programmatically the "Program Compatibility Assistant" in Vista (and Windows 7) from appearing?

    - by Asaf
    I develop a C++ program which might use adobe flash, although it is not essential. I use CoCreateInstance to create the flash object, and if it fails, I know flash is not installed so I don't use it. However, in Vista (and I think Windows 7 as well), when flash is not installed, after leaving the application, the "Program Compatibility Assistant" pops up a message saying that "This program requires a missing Windows component" specifying the flash.ocx. Is there a way to prevent this message from appearing? I don't want to force any user to install flash (especially since it's the IE ActiveX, and FireFox users might not have it installed), and my application can operate well without the flash. Plus this message is really annoying when it appears after every run. I don't mean of course disabling the PCA on the user's machine, but programmatically disable this specific appearance on all machines. Any thoughts? Thanks [EDIT:] I followed Shay's lead (thanks), and did some more digging of my own. I added the following XML to the application's manifest: <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"> </requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> (see also: msdn.microsoft.com/en-us/library/bb756929.aspx) This solved the problem on Vista 64. To solve the same problem on Windows 7, I added the following: <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1"> <application> <!--The ID below indicates application support for Windows Vista --> <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/> <!--The ID below indicates application support for Windows 7 --> <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/> </application> </compatibility> (See also: blogs.msdn.com/yvesdolc/archive/2009/09/22/the-new-compatibility-section-in-the-application-manifest.aspx) Solved Windows 7. But for some reason, it still happens in Vista 32... I also tried editing the manifest of the specific DLL which causes the problem, but it had no effect. Only the executable's manifest itself affected the problem. So... Vista 32?

    Read the article
