Search Results

Search found 1781 results on 72 pages for 'magic c0d3r'.


  • Routing audio to Bluetooth Headset (non-A2DP) on Android

    - by Jayesh
    I have a non-A2DP single-ear BT headset (Plantronics 510) and would like to use it with my Android HTC Magic to listen to low-quality audio like podcasts/audio books. After much googling I found that only phone call audio can be routed to non-A2DP BT headsets. (I would like to know if you have found a ready solution to route all kinds of audio to non-A2DP BT headsets.) So I figured I could programmatically channel the audio into the stream that carries phone call audio, fooling the phone into carrying my mp3 audio to my BT headset. I wrote the following simple code:

        import android.content.*;
        import android.app.Activity;
        import android.os.Bundle;
        import android.media.*;
        import java.io.*;
        import android.util.Log;

        public class BTAudioActivity extends Activity {
            private static final String TAG = "BTAudioActivity";
            private MediaPlayer mPlayer = null;
            private AudioManager amanager = null;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
                amanager.setBluetoothScoOn(true);
                amanager.setMode(AudioManager.MODE_IN_CALL);
                mPlayer = new MediaPlayer();
                try {
                    mPlayer.setDataSource(new FileInputStream("/sdcard/sample.mp3").getFD());
                    mPlayer.setAudioStreamType(AudioManager.STREAM_VOICE_CALL);
                    mPlayer.prepare();
                    mPlayer.start();
                } catch (Exception e) {
                    Log.e(TAG, e.toString());
                }
            }

            @Override
            public void onDestroy() {
                mPlayer.stop();
                amanager.setMode(AudioManager.MODE_NORMAL);
                amanager.setBluetoothScoOn(false);
                super.onDestroy();
            }
        }

    As you can see, I tried combinations of various methods that I thought would fool the phone into believing my audio is a phone call: MediaPlayer's setAudioStreamType(STREAM_VOICE_CALL), AudioManager's setBluetoothScoOn(true), and AudioManager's setMode(MODE_IN_CALL). But none of them worked. If I remove the AudioManager calls from the above code, the audio plays from the speaker; if I put them back, the audio stops coming from the speakers but doesn't come through the BT headset either, so this might be a partial success. I have checked that the BT headset works fine with phone calls. There must be a reason Android doesn't support this, but I can't shake the feeling that it should be possible to programmatically reroute the audio. Any ideas?

    P.S. The above code needs the following permission:

        <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
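
    A note for readers on newer SDKs, not part of the original question: API level 8 (Android 2.2) added AudioManager.startBluetoothSco(), which explicitly brings up the SCO audio link that setBluetoothScoOn(true) assumes is already open. A minimal sketch of that approach, written to sit inside an Activity; it assumes API 8+ (so it may not help the HTC Magic above), and playOverVoiceCallStream() is a hypothetical helper standing in for the MediaPlayer setup shown in the question:

        final AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

        // Ask the system to bring up the SCO audio link to the headset.
        am.startBluetoothSco();

        // The link comes up asynchronously; wait for the connected-state broadcast.
        registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                int state = intent.getIntExtra(AudioManager.EXTRA_SCO_AUDIO_STATE, -1);
                if (state == AudioManager.SCO_AUDIO_STATE_CONNECTED) {
                    am.setBluetoothScoOn(true);    // route audio over the SCO link
                    playOverVoiceCallStream();     // hypothetical: play via STREAM_VOICE_CALL
                }
            }
        }, new IntentFilter(AudioManager.ACTION_SCO_AUDIO_STATE_CHANGED));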


  • Drawing a TextBox in an extended Glass Frame (C# w/o WPF)

    - by Lazlo
    I am trying to draw a TextBox on the extended glass frame of my form. I won't describe this technique, it's well known. Here's an example for those who haven't heard of it: http://www.danielmoth.com/Blog/Vista-Glass-In-C.aspx

    The thing is, it is complex to draw over this glass frame. Since black is treated as the zero-alpha color, anything black disappears. There are ways of countering this problem: complex GDI+ shapes are not affected by this alpha-ness. For example, this code can be used to draw a Label on glass (note: GraphicsPath is used instead of DrawString in order to get around the horrible ClearType problem):

        public class GlassLabel : Control
        {
            public GlassLabel()
            {
                this.BackColor = Color.Black;
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                GraphicsPath font = new GraphicsPath();
                font.AddString(
                    this.Text,
                    this.Font.FontFamily,
                    (int)this.Font.Style,
                    this.Font.Size,
                    Point.Empty,
                    StringFormat.GenericDefault);
                e.Graphics.SmoothingMode = SmoothingMode.HighQuality;
                e.Graphics.FillPath(new SolidBrush(this.ForeColor), font);
            }
        }

    Similarly, such an approach can be used to create a container on the glass area. Note the use of polygons instead of a rectangle: when using a rectangle, its black parts are treated as alpha.

        public class GlassPanel : Panel
        {
            public GlassPanel()
            {
                this.BackColor = Color.Black;
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                Point[] area = new Point[]
                {
                    new Point(0, 1),
                    new Point(1, 0),
                    new Point(this.Width - 2, 0),
                    new Point(this.Width - 1, 1),
                    new Point(this.Width - 1, this.Height - 2),
                    new Point(this.Width - 2, this.Height - 1),
                    new Point(1, this.Height - 1),
                    new Point(0, this.Height - 2)
                };
                Point[] inArea = new Point[]
                {
                    new Point(1, 1),
                    new Point(this.Width - 1, 1),
                    new Point(this.Width - 1, this.Height - 1),
                    new Point(this.Width - 1, this.Height - 1),
                    new Point(1, this.Height - 1)
                };
                e.Graphics.FillPolygon(new SolidBrush(Color.FromArgb(240, 240, 240)), inArea);
                e.Graphics.DrawPolygon(new Pen(Color.FromArgb(55, 0, 0, 0)), area);
                base.OnPaint(e);
            }
        }

    Now my problem is: how can I draw a TextBox? After lots of Googling, I came up with the following candidate solutions:

    1. Subclassing the TextBox's OnPaint method. This is possible, although I could not get it to work properly. It seems to involve painting some magic things I don't know how to do yet.
    2. Making my own custom TextBox, perhaps based on TextBoxBase. If anyone has good, valid and working examples, and thinks this could be a good overall solution, please tell me.
    3. Using BufferedPaintSetAlpha (http://msdn.microsoft.com/en-us/library/ms649805.aspx). The downside of this method may be that the corners of the textbox might look odd, but I can live with that. If anyone knows how to implement that method properly from a Graphics object, please tell me. I personally don't, but this seems the best solution so far.

    Thanks!


  • [R] Merge multiple data frames - Error in match.names(clabs, names(xi)) : names do not match previous names

    - by Jasmine
    Hi all. I'm getting some really bizarre behavior while trying to merge multiple data frames. Help! I need to merge a bunch of data frames by the columns 'RID' and 'VISCODE'. Here is an example of what they look like:

        d1 = data.frame(ID = sample(9, 1:100), RID = c(2, 5, 7, 9, 12),
                        VISCODE = rep('bl', 5), value1 = rep(16, 5))
        d2 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 7, 7, 7),
                        VISCODE = rep(c('bl', 'm06', 'm12'), 3), value2 = rep(100, 9))
        d3 = data.frame(ID = sample(9, 1:100), RID = c(2, 2, 2, 5, 5, 5, 9, 9, 9),
                        VISCODE = rep(c('bl', 'm06', 'm12'), 3), value3 = rep("a", 9),
                        values3.5 = rep("c", 9))
        d4 = data.frame(ID = sample(8, 1:100), RID = c(2, 2, 5, 5, 5, 7, 7, 7, 9),
                        VISCODE = c(c('bl', 'm12'), rep(c('bl', 'm06', 'm12'), 2), 'bl'),
                        value4 = rep("b", 9))
        dataList = list(d1, d2, d3, d4)

    I looked at the answers to the question titled "Merge several data.frames into one data.frame with a loop." I used the Reduce method suggested there as well as a loop I wrote:

        try1 = mymerge(dataList)
        try2 <- Reduce(function(x, y) merge(x, y, all = TRUE, by = c("RID", "VISCODE")),
                       dataList, accumulate = FALSE)

    where dataList is a list of data frames and mymerge is:

        mymerge = function(dataList) {
            L = length(dataList)
            mdat = dataList[[1]]
            for (i in 2:L) {
                mdat = merge(mdat, dataList[[i]],
                             by.x = c("RID", "VISCODE"), by.y = c("RID", "VISCODE"),
                             all = TRUE)
            }
            mdat
        }

    For my test data and subsets of my real data, both of these work fine and produce exactly the same results. However, when I use larger subsets of my data, they both break down and give me the following error:

        Error in match.names(clabs, names(xi)) : names do not match previous names

    The really weird thing is that using this works:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:47, ])

    and using this fails:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, ])

    As far as I can tell, there is nothing special about row 48 of faq. Likewise, using this works:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:47, ])

    and using this fails:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], pdx[1:48, ])

    Row 48 in faq and row 48 in pdx have the same values for RID and VISCODE, the same value for EXAMDATE (something I'm not matching on) and different values for ID (another thing I'm not matching on). Besides the matching RID and VISCODE, I don't see anything special about them, and they don't share any other variable names. This same scenario occurs elsewhere in the data without problems. To add icing on the complication cake, this doesn't even work:

        dataList = list(demog[1:50,], neurobat[1:50,], apoe[1:50,], mmse[1:50,], faq[1:48, 2:3])

    where columns 2 and 3 are "RID" and "VISCODE". 48 isn't even the magic number, because this works:

        dataList = list(demog[1:500,], neurobat[1:500,], apoe[1:500,], mmse[1:457,])

    while using mmse[1:458, ] fails. I can't seem to come up with test data that causes the problem. Has anyone had this problem before? Any better ideas on how to merge? Thanks for your help!

    Jasmine


  • UIScrollView image/photo viewer with paging enabled and zooming

    - by Mike Weller
    OK, I think it's time to make an official place on the internet for this problem: how to make a UIScrollView photo viewer with paging and zooming. Welcome, my fellow UIScrollView hackers.

    I have a UIScrollView with paging enabled, and I'm displaying UIImageViews like the built-in Photos app. (Does this sound familiar yet?) I found the following project on github: http://wiki.github.com/andreyvit/ScrollingMadness which shows how to implement zooming in a scroll view while paging is enabled. If anyone else tries this out: I actually had to remove the UIScrollView subclass and use the native class, otherwise it doesn't work. I think that's because of changes in the 3.0 SDK relating to how the scroll view intercepts touch events.

    The idea is to remove all the other views when you start zooming and move the current view to (0, 0) in the scroll view, updating the content size etc. Then when you zoom back to 1.0f it adds the other views back and puts everything back in order. You have to do this view moving, otherwise you can pan left through the whitespace left by the other views. Anyway, that project works perfectly in the simulator, but on the device there is some nasty movement of the view you are resizing, which looks like it's caused by the fact that we are changing the content size/offset etc. for the view being resized.

    I found one interesting note in the "Known Issues" of the 3.0 SDK release notes: "UIScrollView: After zooming, content inset is ignored and content is left in the wrong position." This sounds a lot like what is happening here: after zooming in, the view shifts offscreen because you have changed the offset etc.

    I've spent hours on this already and I'm slowly coming to the sad realization that this just isn't going to work. Three20's photo viewer is out of the question: it's too heavyweight and there is too much unnecessary UI and other behaviour. The built-in Photos app seems to do some magic. If you zoom in on an image and pan to the far edges, the current photo moves independently of the photo next to it, which isn't what you get when trying this with a standard UIScrollView. I've seen discussion about nesting UIScrollViews, but I really don't want to go there. Has anybody managed this with the standard UIScrollView (in a way that works in both the 2.2 and 3.0 SDKs)? I don't fancy rolling my own zoom + bounce + pan + paging code.


  • Why is ListBoxFor not selecting items, but ListBox is?

    - by Roger Rogers
    I have the following code in my view:

        <%= Html.ListBoxFor(c => c.Project.Categories,
            new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%>

        <%= Html.ListBox("MultiSelectList",
            new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%>

    The only difference is that the first helper is strongly typed (ListBoxFor), and it fails to show the selected items (1, 2), even though the items appear in the list, etc. The simpler ListBox works as expected. I'm obviously missing something here. I can use the second approach, but this is really bugging me and I'd like to figure it out. For reference, my model is:

        public class ProjectEditModel
        {
            public Project Project { get; set; }
            public IEnumerable<Project> Projects { get; set; }
            public IEnumerable<Client> Clients { get; set; }
            public IEnumerable<Category> Categories { get; set; }
            public IEnumerable<Tag> Tags { get; set; }
            public ProjectSlide SelectedSlide { get; set; }
        }

    Update: I just changed the ListBox name to Project.Categories (matching my model) and it now FAILS to select the items:

        <%= Html.ListBox("Project.Categories",
            new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%>

    I'm obviously not understanding the magic that is happening here.

    Update 2: OK, this is purely naming. For example, this works...

        <%= Html.ListBox("Project_Tags",
            new MultiSelectList(Model.Tags, "Id", "Name", Model.Project.Tags.Select(t => t.Id)))%>

    ...because the field name is Project_Tags, not Project.Tags. In fact, anything other than Tags or Project.Tags will work. I don't get why this would cause a problem (other than that it matches the entity name), and I'm not good enough at this to be able to dig in and find out.


  • MKMapView memory usage grows out of control with setRegion: calls

    - by Kurt
    Hi, I have a single MKMapView instance that I have programmatically added to a UIView. As part of the UI, the user can cycle through a list of addresses, and the map view is updated to show the correct map for each address as the user goes through them. I create the map view once and simply change what it displays with setRegion:animated:.

    The problem is that each time the map is changed to show a new address, the memory usage of my program increases by 200K-500K (as reported by Memory Monitor in Instruments). According to Object Allocations, a lot of 1.0K mallocs happen each time, and the Extended Detail pane for these 1.0K allocations shows that the responsible caller is convert_image_data and that this is the result of [MKMapTileView drawLayer:inContext:]. So it seems likely that the memory usage is due to MKMapView not freeing the memory it uses to redraw the map each time.

    In fact, when I don't display the map at all (by not even adding it as a subview of my main UIView) but still cycle through the addresses (which changes various UILabels and other displayed info), the memory usage for the app does NOT increase. If I add the map view but never update it with setRegion:, the memory also does NOT increase when changing to a new address. One more bit of info: if I go to a new address (and therefore ask the map to display the new address), the memory jumps as described above. However, if I go back to an address that was already displayed, the memory does not jump when the map redraws with the old address. Also, this happens on iPad (real device) with 3.2 and on iPhone (again, real device) with 3.1.2.

    Here's how I initialize the MKMapView (I only do this once):

        CGRect mapFrame;
        mapFrame.origin.y = 460;  // yes, magic numbers. just for testing.
        mapFrame.origin.x = 0;
        mapFrame.size.height = 500;
        mapFrame.size.width = 768;
        mapView = [[MKMapView alloc] initWithFrame:mapFrame];
        mapView.delegate = self;
        [self.view insertSubview:mapView atIndex:0];

    And in response to the user selecting an address, I set the map like so:

        MKCoordinateRegion region;
        MKCoordinateSpan span;
        span.latitudeDelta = kStreetMapSpan;   // 0.003
        span.longitudeDelta = kStreetMapSpan;  // 0.003
        region.center = address.coords;        // coords is a CLLocationCoordinate2D
        region.span = span;
        [mapView setRegion:region animated:NO];

    Any thoughts? I've scoured the net but haven't seen mention of this problem, and I've reached the limits of my Instruments knowledge. Thanks for any ideas.


  • CherryPy sessions for same domain, different port

    - by detly
    Consider the script below. It will launch two subprocesses, each one a CherryPy app (hit Ctrl+C or whatever the KeyboardInterrupt combo is on your system to end them both). If you run it with CP 3.0 (taking care to change the 3.0/3.1-specific lines in StartServer), then visit http://localhost:15002/ you see an empty dict. Then visit http://localhost:15002/set?val=10 followed by http://localhost:15002/ and you see the newly populated dict. Then visit http://localhost:15012/ and go back to http://localhost:15002/ and nothing has changed.

    If you try the same thing with CP 3.1 (remember the lines in StartServer!), when you get to the last step the dict is now empty. This happens on Windows and Debian, with Python 2.5 and 2.6. You can try all sorts of things (changing to file storage, separating the storage paths); the only difference it makes is that the sessions might get merged instead of erased.

    I've read another post about this as well, and there's a suggestion there to put the session tools config keys in the app config rather than the global config, but I don't think that's relevant to this usage, where the apps run independently. What do I do to get independent CherryPy applications to NOT interfere with each other?

    Note: I originally asked this on the CherryPy mailing list but haven't had a response yet, so I'm trying here. I hope that's okay.

        import os, os.path, socket, sys
        import subprocess
        import cgi

        import cherrypy

        HTTP_PORT = 15002
        HTTP_HOST = "127.0.0.1"

        site1conf = {
            'global' : {
                'server.socket_host'          : HTTP_HOST,
                'server.socket_port'          : HTTP_PORT,
                'tools.sessions.on'           : True,
        #        'tools.sessions.storage_type': 'file',
        #        'tools.sessions.storage_path': '1',
        #        'tools.sessions.storage_path': '.',
                'tools.sessions.timeout'      : 1440}}

        site2conf = {
            'global' : {
                'server.socket_host'          : HTTP_HOST,
                'server.socket_port'          : HTTP_PORT + 10,
                'tools.sessions.on'           : True,
        #        'tools.sessions.storage_type': 'file',
        #        'tools.sessions.storage_path': '2',
        #        'tools.sessions.storage_path': '.',
                'tools.sessions.timeout'      : 1440}}

        class Home(object):
            def __init__(self, key):
                self.key = key

            @cherrypy.expose
            def index(self):
                return """\
        <html>
        <body>Session: <br>%s
        </body>
        </html>
        """ % cgi.escape(str(dict(cherrypy.session)))

            @cherrypy.expose
            def set(self, val):
                cherrypy.session[self.key.upper()] = val
                return """\
        <html>
        <body>Set %s to %s</body>
        </html>""" % (cgi.escape(self.key), cgi.escape(val))

        def StartServer(conf, key):
            cherrypy.config.update(conf)
            print 'Starting server (%s)' % key
            cherrypy.tree.mount(Home(key), '/', {})

            # Start the web server.
            #### 3.0
            # cherrypy.server.quickstart()
            # cherrypy.engine.start()
            ####
            #### 3.1
            cherrypy.engine.start()
            cherrypy.engine.block()
            ####

        def Main():
            # Start the two webservers as subprocesses of this master process.
            proc1 = subprocess.Popen([sys.executable, os.path.abspath(__file__), "1"])
            proc2 = subprocess.Popen([sys.executable, os.path.abspath(__file__), "2"])
            proc1.wait()
            proc2.wait()

        if __name__ == "__main__":
            print sys.argv
            if len(sys.argv) == 1:
                # Master process
                Main()
            elif (int(sys.argv[1]) == 1):
                StartServer(site1conf, 'magic')
            elif (int(sys.argv[1]) == 2):
                StartServer(site2conf, 'science')
            else:
                sys.exit(1)


  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this:

    1. Render the scene to an FBO with shadow mapping from multiple locations
    2. Render the scene to the screen with shadow mapping
    3. ...black magic that I still have to implement...
    4. Combine the samples from step 1 with the image from step 2

    I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow-mapped pass is: render the scene to an FBO connected to a depth array texture from the POV of each light, then render the scene from the viewpoint and use vertex/fragment shaders to compare the depths.

    When I run my algorithm this way:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()

    the normal vectors in the screen pass appear to be incorrect (inverted, possibly). I'm pretty sure that's the issue, because my diffuse lighting calculation is incorrect, but the material colors are correct and the shadows appear in the correct places. So it seems like the only thing that could be the culprit is the normals. However, if I do:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()   // wrong here
        render from point to screen
        glutSwapBuffers()

    the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? It's from a BuGLe trace grepped for 'buffer', with a few edits to make it a little clearer. Thanks!

        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 })
        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 })
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glDrawBuffer(GL_NONE)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)

        // start render to FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0)
        [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0)

        // bind to the FBO attached to a depth tex array for shadows
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry

        // bind to the FBO I want the shadow mapped image rendered to
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        // draw geometry

        // draw to screen pass
        // again shadow mapping FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry

        // bind to the screen
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        // finished, swap buffers
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        // INCORRECT OUTPUT

        // second try at render to screen:
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        // draw geometry
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        // correct output


  • StructureMap Autowiring with two different instances of the same interface

    - by Lambda
    For the last two days, I tried my best to learn something about StructureMap, using an old project of mine as a concrete implementation example. I tried to simplify my question as much as possible. While I will post my examples in VB.NET, answers with examples in C# are also okay.

    The project includes an interface called IDatabase whose implementations connect to a database. The important part looks like this:

        Public Interface IDatabase
            Function Connect(ByVal ConnectionSettings As ConnectionSettings) As Boolean
            ReadOnly Property ConnectionOpen As Boolean
            '[... more functions...]
        End Interface

        Public Class MSSQLConnection
            Implements IDatabase

            Public Function Connect(ByVal ConnectionSettings As ConnectionSettings) As Boolean Implements IDatabase.Connect
                '[... Implementation ...]
            End Function

            '[... more implementations...]
        End Class

    ConnectionSettings is a structure that holds all the information needed to connect to a database. I want to open the database connection once and use it for every single connection in the project, so I register an instance in the ObjectFactory:

        Dim foo = ObjectFactory.GetInstance(Of MSSQLConnection)()
        Dim bar As ConnectionSettings
        foo.Connect(bar)
        ObjectFactory.Configure(Sub(x) x.For(Of IDatabase).Use(foo))

    Up to this point, everything works like a charm. Now I have classes that need an additional instance of IDatabase because they connect to a second database:

        Public Class ExampleClass
            Public Sub New(ByVal SameOldDatabase As IDatabase, ByVal NewDatabase As IDatabase)
                '[...] Magic happens here [...]
            End Sub
        End Class

    I want this second IDatabase to behave much like the first one: use a concrete, single instance, and connect it to a different database by invoking Connect with different ConnectionSettings. To be safe, both instances should behave the same way in practice. The problem is: while I'm pretty sure it's somehow possible (my initial idea was registering ExampleClass with alternative constructor arguments), I actually want to do it without registering ExampleClass. This probably involves more configuration, but I have no idea how to do it. So, basically, it comes down to this question: how do I configure the ObjectFactory so that autowiring always invokes the constructor with object Database1 for the first IDatabase parameter and object Database2 for the second one (if there is one)?


  • Implementing EAV pattern with Hibernate for User -> Settings relationship

    - by Trevor
    I'm trying to set up a simple EAV pattern in my web app using Java/Spring MVC and Hibernate. I can't seem to figure out the magic behind the Hibernate XML setup for this scenario. My database table SETUP has three columns:

    - user_id (FK)
    - setup_item
    - setup_value

    The table's composite key is made up of user_id | setup_item. Here's the Setup.java class:

        public class Setup implements CommonFormElements, Serializable {

            private Map data = new HashMap();
            private String saveAction;
            private Integer speciesNamingList;
            private User user;

            Logger log = LoggerFactory.getLogger(Setup.class);

            public String getSaveAction() {
                return saveAction;
            }

            public void setSaveAction(String action) {
                this.saveAction = action;
            }

            public User getUser() {
                return user;
            }

            public void setUser(User user) {
                this.user = user;
            }

            public Integer getSpeciesNamingList() {
                return speciesNamingList;
            }

            public void setSpeciesNamingList(Integer speciesNamingList) {
                this.speciesNamingList = speciesNamingList;
            }

            public Map getData() {
                return data;
            }

            public void setData(Map data) {
                this.data = data;
            }
        }

    My problem with the Hibernate setup is that I can't figure out how to express the fact that a foreign key plus the key of a map together form the composite key of the table; this is due to a lack of experience with Hibernate. Here's my initial attempt at the mapping:

        <composite-id>
          <key-many-to-one foreign-key="id" name="user" column="user_id" class="Business.User">
            <meta attribute="use-in-equals">true</meta>
          </key-many-to-one>
        </composite-id>

        <map lazy="false" name="data" table="setup">
          <key column="user_id" property-ref="user"/>
          <composite-map-key class="Command.Setup">
            <key-property name="data" column="setup_item" type="string"/>
          </composite-map-key>
          <element column="setup_value" not-null="true" type="string"/>
        </map>

    Any insight into how to properly map this common scenario would be most appreciated!
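
    For comparison, a map keyed by a column is straightforward to express with JPA 2.0 annotations, if annotations are an option in this app. The following sketch assumes the SETUP layout described above and a User entity owning the map; it is an alternative formulation, not a fix for the XML mapping itself:

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.*;

        // Sketch, assuming JPA 2.0 annotations are available alongside Hibernate.
        // SETUP(user_id, setup_item, setup_value) becomes a Map<String, String>,
        // with the (user_id, setup_item) pair implied by the collection table.
        @Entity
        public class User {

            @Id
            private Integer id;

            @ElementCollection
            @CollectionTable(name = "SETUP", joinColumns = @JoinColumn(name = "user_id"))
            @MapKeyColumn(name = "setup_item")  // map key comes from setup_item
            @Column(name = "setup_value")       // map value comes from setup_value
            private Map<String, String> data = new HashMap<String, String>();

            public Map<String, String> getData() {
                return data;
            }
        }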


  • Windows Forms application blocks after station lock

    - by Silviu
    We're having a serious issue at work. We've discovered that after the station where the client was running is locked/unlocked, the client is blocked: no repaint, so the UI thread is blocked by something.

    Looking at the call stack of the UI thread (thread 0) using WinDbg, we see that a UserPreferenceChanged event gets raised. It is marshalled to the UI through a WindowsFormsSynchronizationContext, using its controlToSend field, and gets blocked by a call into the marshalling control. The method called is MarshaledInvoke; it builds a

        ThreadMethodEntry entry = new ThreadMethodEntry(caller, method, args, synchronous, executionContext);

    This entry is supposed to do the magic. The call is synchronous, and because of that (still in the MarshaledInvoke method of the Control class) the wait call is reached:

        if (!entry.IsCompleted)
        {
            this.WaitForWaitHandle(entry.AsyncWaitHandle);
        }

    The last thing I can see on the stack is the WaitOne called on the previously mentioned AsyncWaitHandle. This is very annoying, because with only runtime frames on the call stack and none of our methods being invoked, we cannot really point to a bug in our code. I might be wrong, but I'm guessing that the marshalling control is not "marshalling" to the UI thread but to another one. I don't really know which one, because the other threads are being used by us and are blocked... maybe this is the issue. But none of the other threads is running a message loop.

    We had some issues in the past with marshalling controls to the right UI thread. That is because the first form that is constructed is a splash form, which is not the main form. We used to use the main form to marshal calls to the UI thread, but from time to time some calls would go to a non-UI thread and some grids would break with a big red X on them. I fixed this by creating a specific class:

        public class WindowsFormsSynchronizer
        {
            private static readonly WindowsFormsSynchronizationContext context =
                new WindowsFormsSynchronizationContext();

            // Methods follow that expose the same interface as the synchronization context.
        }

    This class gets built as one of the first objects in the first form being constructed. We've noticed another strange thing: looking at the heap, there are 7 WindowsFormsSynchronizationContext objects. Six of these have the same instance of controlToSend, and the other one has a different instance of controlToSend. This last one is the one that should marshal the calls to the UI. I don't have any other idea... maybe some of you guys have had this same issue?


  • imagick showing script URL instead of image

    - by Raz
    Hi, currently I'm trying to use imagick to generate some images, without saving them on the server, and then output them to the browser. My method of choice was ImageMagick with the imagick extension for PHP. I read the documentation, and I'm sure the package is installed on my machine (Windows XP, with XAMPP). From phpinfo:

        imagick module               enabled
        imagick module version       2.0.0-alpha
        imagick classes              Imagick, ImagickDraw, ImagickPixel, ImagickPixelIterator
        ImageMagick version          ImageMagick 6.3.3 04/21/07 Q16 http://www.imagemagick.org
        ImageMagick release date     04/21/07
        ImageMagick number of supported formats: 164
        ImageMagick supported formats: A, ART, AVI, AVS, B, BIE, BMP, BMP2, BMP3, C, CACHE, CAPTION, CIN, CIP, CLIP, CLIPBOARD, CMYK, CMYKA, CUR, CUT, DCM, DCX, DFONT, DPS, DPX, EMF, EPDF, EPI, EPS, EPS2, EPS3, EPSF, EPSI, EPT, EPT2, EPT3, FAX, FITS, FRACTAL, FTS, G, G3, GIF, GIF87, GRADIENT, GRAY, HISTOGRAM, HTM, HTML, ICB, ICO, ICON, INFO, JBG, JBIG, JNG, JP2, JPC, JPEG, JPG, JPX, K, LABEL, M, M2V, MAP, MAT, MATTE, MIFF, MNG, MONO, MPC, MPEG, MPG, MSL, MSVG, MTV, MVG, NULL, O, OTB, OTF, PAL, PALM, PAM, PATTERN, PBM, PCD, PCDS, PCL, PCT, PCX, PDB, PDF, PFA, PFB, PGM, PGX, PICON, PICT, PIX, PJPEG, PLASMA, PNG, PNG24, PNG32, PNG8, PNM, PPM, PREVIEW, PS, PS2, PS3, PSD, PTIF, PWP, R, RAS, RGB, RGBA, RGBO, RLA, RLE, SCR, SCT, SFW, SGI, SHTML, STEGANO, SUN, SVG, SVGZ, TEXT, TGA, THUMBNAIL, TIFF, TILE, TIM, TTC, TTF, TXT, UIL, UYVY, VDA, VICAR, VID, VIFF, VST, WBMP, WMF, WMFWIN32, WMZ, WPG, X, XBM, XC, XCF, XPM, XV, XWD, Y, YCbCr, YCbCrA, YUV

    This is from phpinfo, so I know I have it installed. The thing is, when I try to generate an image and save it, it works flawlessly, but when I try to output the image directly, I get the script URL instead of an image. This is the code used:

        $draw = new ImagickDraw();
        $draw->setFont('AnkeCalligraph.TTF');
        $draw->setFontSize(52);
        $draw->annotation(110, 110, "Hello World!");
        $draw->annotation(50, 220, "Hello World!");

        $canvas = new Imagick('./pictures/test_live.PNG');
        $canvas->drawImage($draw);
        $canvas->setImageFormat('png');

        header("Content-Type: image/png");
        echo $canvas;

    If I use writeImage, the file on the server is created with no problems. Does anyone have any ideas what I'm doing wrong?


  • Asynchronous subprocess on Windows

    - by Stigma
    First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me 'use threads with blocking', as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring.

    I have several applications, which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. The problem is, sometimes they choke, and as such I need to track their progress, which is output on STDERR:

        pA = subprocess.Popen(CommandA, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # ... some more processes make up the chain, but that is irrelevant to the problem
        pB = subprocess.Popen(CommandB, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=pA.stdout)

    Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option except as a last, last resort.) pA.communicate() is deadlock-safe, but since I need the information live, that is not an option either.

    Thus google brought me to this asynchronous subprocess snippet on ActiveState. All good at first, until I implemented it. Comparing the cmd.exe output of pA.exe | pB.exe, ignoring the fact that both output to the same window making for a mess, I see very instantaneous updates. However, when I implement the same thing using the above snippet and the read_some() function declared there, it takes over 10 seconds to notify updates of a single pipe. But when it does, it has updates leading all the way up to 40% progress, for example.

    Thus I did some more research and saw numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :)

    My platform is W7 64-bit with Python 2.6, the applications are 32-bit in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely if it is the only solution, as long as I can read from every stderr pipe asynchronously with immediate performance and no deadlocks. :)


  • Rails test case fixtures not loading

    - by Deano
    Rails appears to not be loading any fixtures for unit or functional tests. I have a simple products.yml that parses and appears correct:

        ruby:
          title:       Programming Ruby 1.9
          description: Ruby is the fastest growing and most exciting dynamic language out there. If you need to get working programs delivered fast, you should add Ruby to your toolbox.
          price:       49.50
          image_url:   ruby.png

    My controller functional test begins with:

        require 'test_helper'

        class ProductsControllerTest < ActionController::TestCase
          fixtures :products

          setup do
            @product = products(:one)
            @update = {
              :title       => 'Lorem Ipsum',
              :description => 'Wibbles are fun!',
              :image_url   => 'lorem.jpg',
              :price       => 19.95
            }
          end

    According to the book, Rails should "magically" load the fixtures (as my test_helper.rb has fixtures :all in it). I also added the explicit fixtures load (seen above). Yet Rails complains:

        user @ host ~/Dropbox/Rails/depot > rake test:functionals
        (in /Somewhere/Users/user/Dropbox/Rails/depot)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/functional/products_controller_test.rb"
        Loaded suite /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader
        Started
        EEEEEEE
        Finished in 0.062506 seconds.

        1) Error:
        test_should_create_product(ProductsControllerTest):
        NoMethodError: undefined method `products' for ProductsControllerTest:Class
            /test/functional/products_controller_test.rb:7

        2) Error:
        test_should_destroy_product(ProductsControllerTest):
        NoMethodError: undefined method `products' for ProductsControllerTest:Class
            /test/functional/products_controller_test.rb:7
        ...

    I did come across the other Rails test fixture question http://stackoverflow.com/questions/1547634/rails-unit-testing-doesnt-load-fixtures, but that leads to a plugin issue (something to do with the order of loading fixtures). BTW, I am developing on Mac OS X 10.6 with Rails 2.3.5 and Ruby 1.8.7, no additional plugins (beyond the base install).

    Any pointers on how to debug why the magic of Rails appears to be failing here? Is it a version problem? Can I trace code into the libraries and find the answer? There are so many "mixin" modules that I can't find where the fixtures method really lives.


  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework (Encog) to create, train and query the networks; this uses Java. The framework has persistence for saving a network to an XML file.

    What is the best way to store these files? I can see several potential approaches, but I need help choosing which is best:

    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the host would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit on saving to it.

    If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, and I think performance would be better, not to mention having no files lying around. However, the persistence code in the neural network framework I am using (Encog) needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if that is the correct way to do it.

    I would appreciate any ideas on the best way to implement any of the above 3 ideas, or any alternatives. Thanks!
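
    If option 3 is the goal, one route worth sketching: skip the file-based API entirely and rely on plain Java serialization, assuming the network class implements java.io.Serializable (an assumption to verify against the Encog version in use). The bytes can then live in a BLOB column and never touch the filesystem:

        import java.io.*;

        // Sketch: convert a Serializable network object to bytes for a BLOB
        // column and back. NetworkBlob is a hypothetical helper, not part of
        // Encog, and assumes the network class is Serializable.
        public final class NetworkBlob {

            public static byte[] toBytes(Serializable network) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(network);  // standard Java serialization, no temp file
                oos.close();
                return bos.toByteArray();  // hand this to JDBC as a BLOB
            }

            public static Object fromBytes(byte[] blob)
                    throws IOException, ClassNotFoundException {
                ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(blob));
                try {
                    return ois.readObject();  // cast back to the network type
                } finally {
                    ois.close();
                }
            }
        }

    With this arrangement, the Django side only ever stores and returns opaque bytes; only the Java process needs to understand them.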


  • Silverlight ControlTemplate and F#

    - by akaphenom
    Has anybody had any success incorporating a Silverlight ControlTemplate into an F# Silverlight application? I am trying to add transitions to the Navigation.Frame element, following along with a C# example: http://www.davidpoll.com/2009/07/19/silverlight-3-navigation-adding-transitions-to-the-frame-control

    The downloaded source uses the MSBUILD:Compile option on the template XAML, and the file is included as a "Page"; ILDASM doesn't show any object created for the XAML. In my project I included it as a "Resource" (same as I have done for my pages) and referenced it in App.xaml:

        <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     x:Class="Module1.MyApp">
          <Application.Resources>
            <ResourceDictionary>
              <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml"/>
              </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
          </Application.Resources>
        </Application>

    TransitioningFrame.xaml is as follows:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
                            xmlns:toolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Layout.Toolkit">
          <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame">
            <Border Background="{TemplateBinding Background}"
                    BorderBrush="{TemplateBinding BorderBrush}"
                    BorderThickness="{TemplateBinding BorderThickness}"
                    HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                    VerticalAlignment="{TemplateBinding VerticalContentAlignment}">
              <toolkit:TransitioningContentControl
                  Content="{TemplateBinding Content}"
                  Cursor="{TemplateBinding Cursor}"
                  Margin="{TemplateBinding Padding}"
                  HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                  VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
                  HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
                  VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"
                  Transition="DefaultTransition" />
            </Border>
          </ControlTemplate>
        </ResourceDictionary>

    My page objects all load their respective XAML with the following code:

        type Page1() as this =
            inherit UriUserControl("/FSSilverlightApp;component/Page1.xaml")
            do Application.LoadComponent(this, base.uri)

    and somewhere in app startup:

        let p1 = new Page1()

    I do not have a comparable piece for the ControlTemplate, though I was hoping the Application object and App.xaml would pull it in magically. (As an aside, the reliance on this magic has made setting up a 100% F# Silverlight application rather tricky, as nearly all the published articles I find are based around wizards and shortcuts; very little covers the actual plumbing. Ugh.) Any advice or thoughts on the subject are appreciated.


  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have produces two different output files depending on whether I run it under Windows PowerShell or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File; I believe PerlIO is doing some screwy stuff. It seems as if under cmd.exe a much more compact encoding is chosen (4.09 KB), compared to PowerShell, which generates a file nearly twice the size (8.19 KB).

    This script takes a shell script and generates a Windows batch file. It seems like the one generated under cmd.exe is just regular ASCII (1-byte characters), while the other one appears to be UTF-16 (first two bytes FF FE). Can someone verify and explain why PerlIO works differently under Windows PowerShell than under cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable; the UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32.

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use File::Spec;
        use IO::File;
        use IO::Dir;
        use feature ':5.10';

        my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' );
        my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!;

        my @lines = grep $_ !~ /^#.*/, <$fh>;
        my $file = join '', @lines;

        $file =~ s/ \\\n/ /gm;
        $file =~ tr/'\t/"/d;
        $file =~ s/ +/ /g;
        $file =~ s/\b"|"\b/"/g;

        my @singleLnFile = grep /ncftp|echo/, split $/, $file;
        s/\$PWD\///g for @singleLnFile;

        my $dh = IO::Dir->new( '.' );
        my @files = grep /\.pl$/, $dh->read;

        say 'echo off';
        say "perl $_" for @files;
        say for @singleLnFile;

        1;


  • How can I inject Javascript (including Prototype.js) in other sites without cluttering the global namespace?

    - by Daniel Magliola
    I'm currently on a project that is a big site that uses the Prototype library, and there is already a humongous amount of Javascript code. We're now working on a piece of code that will get "injected" into other people's sites (picture people adding a <script> tag in their sites), which will then run our code and add a bunch of DOM elements and functionality to their site. This will have new pieces of code, and will also reuse a lot of the code that we use on our main site.

    The problem I have is that it's of course not cool to just add a <script> that will include Prototype in people's pages. If we do that in a page that's already using ANY framework, we're guaranteed to screw everything up. jQuery gives us the option to "rename" the $ object, so it could handle this situation decently, except we're not using jQuery, so we'd have to migrate everything. Right now I'm contemplating a number of ugly choices, and I'm not sure which is best:

    1. Rewrite everything to use jQuery, with a renamed $ object everywhere.
    2. Create a "new" Prototype library with only the subset we'd be using in injected code, and rename $ to something else. Then again, I'd have to adapt the parts of my code that would be shared somehow.
    3. Don't use a library at all in injected code, to keep it as clean as possible, and rewrite the shared code to use no library at all. This would obviously degenerate into us creating our own Frankenstein of a library, which is probably the worst-case scenario ever.

    I'm wondering what you guys think I could do, and also whether there's some magic option that would solve all my problems. For example, do you think I could use something like Caja / Cajita to sandbox my own code and isolate it from the rest of the site, and have Prototype inside of there? Or am I completely missing the point with that?

    I also read once about a technique for bookmarklets, where you add your code like this:

        (function() { /* your code */ })();

    and then your code is all inside your anonymous function and you haven't touched the global namespace at all. Do you think I could make one file containing:

        (function() {
            /* Full code of the Prototype file here */

            /* All my code that will run in the "other" site */
            InitializeStuff_CreateDOMElements_AttachEventHandlers();
        })();

    Would that work? Would it accomplish the objective of not cluttering the global namespace, and of not killing the functionality on a site that uses jQuery, for example? Or is Prototype somehow too complex to isolate like that?

    (NOTE: I think I know that this would create closures everywhere and that's slower, but I don't care too much about performance; my code is not doing anything that complex.)


  • How can I make Swig correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java, and I was quite happy to see that Swig was able to handle the header file and generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this:

        DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse);

    The integer returned by this function is an error code in case it fails. The arguments are:

    - buff is a character buffer
    - len is the length of the data in the buffer
    - curse is another character buffer that contains the result of calling DustyVoodoo

    So, you can see where this is going: the result is actually coming back via the third argument. Also, len is confusing, since it may be the length of both buffers; they are always allocated as being the same size in calling code, but given what DustyVoodoo does, I don't think they need to be the same. To be safe, both buffers should be the same size in practice, say 512 chars.

    The C code generated for the binding is as follows:

        SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls,
                                                           jstring jarg1, jint jarg2, jstring jarg3) {
          jint jresult = 0;
          char *arg1 = (char *) 0;
          int arg2;
          char *arg3 = (char *) 0;
          int result;

          (void)jenv;
          (void)jcls;
          arg1 = 0;
          if (jarg1) {
            arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0);
            if (!arg1) return 0;
          }
          arg2 = (int)jarg2;
          arg3 = 0;
          if (jarg3) {
            arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0);
            if (!arg3) return 0;
          }
          result = (int)DustyVoodoo(arg1, arg2, arg3);
          jresult = (jint)result;
          if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1);
          if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3);
          return jresult;
        }

    It is correct for what it does; however, it misses the fact that curse is not just an input: it is altered by the function and should be returned as an output. It also does not know that the Java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masters in the house?


  • WebDriver: tests crash with Internet Explorer 7 with error "Modal dialog present"

    - by user1207450
    The following test is automated using Java and selenium-server-standalone-2.20.0.jar. The test crashes with this error:

        Page title is: cheese! - Google Search
        Starting browserTest
        2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - I/O exception (org.apache.http.NoHttpResponseException) caught when processing request: The target server failed to respond
        2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - Retrying request
        Exception in thread "main" org.openqa.selenium.UnhandledAlertException: Modal dialog present (WARNING: The server did not provide any stacktrace information)
        Command duration or timeout: 1.20 seconds
        Build info: version: '2.20.0', revision: '16008', time: '2012-02-27 19:03:04'
        System info: os.name: 'Windows XP', os.arch: 'x86', os.version: '5.1', java.version: '1.6.0_24'
        Driver info: driver.version: InternetExplorerDriver
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:170)
            at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:129)
            at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:438)
            at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:139)
            at org.openqa.selenium.ie.InternetExplorerDriver.setup(InternetExplorerDriver.java:91)
            at org.openqa.selenium.ie.InternetExplorerDriver.<init>(InternetExplorerDriver.java:48)
            at com.pwc.test.java.InternetExplorer7.browserTest(InternetExplorer7.java:34)
            at com.pwc.test.java.InternetExplorer7.main(InternetExplorer7.java:27)

    Test class:

        package com.pwc.test.java;

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebDriverBackedSelenium;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.htmlunit.HtmlUnitDriver;
        import org.openqa.selenium.ie.InternetExplorerDriver;

        import com.thoughtworks.selenium.Selenium;

        public class InternetExplorer7 {

            public static void main(String[] args) {
                WebDriver webDriver = new HtmlUnitDriver();
                webDriver.get("http://www.google.com");
                WebElement webElement = webDriver.findElement(By.name("q"));
                webElement.sendKeys("cheese!");
                webElement.submit();
                System.out.println("Page title is: " + webDriver.getTitle());
                browserTest();
            }

            public static void browserTest() {
                System.out.println("Starting browserTest");
                String baseURL = "http://www.mail.yahoo.com";
                WebDriver driver = new InternetExplorerDriver();
                driver.get(baseURL);
                Selenium selenium = new WebDriverBackedSelenium(driver, baseURL);
                selenium.windowMaximize();
                WebElement username = driver.findElement(By.id("username"));
                WebElement password = driver.findElement(By.id("passwd"));
                WebElement signInButton = driver.findElement(By.id(".save"));
                username.sendKeys("myusername");
                password.sendKeys("magic");
                signInButton.click();
                driver.close();
            }
        }

    I don't see any modal dialog when I launch the IE7/8 browser manually. What could be causing this?
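
    One avenue to try, though whether it applies here is an assumption: the IE driver of this era can refuse to start when IE's Protected Mode setting differs between security zones, and it ships a capability to bypass that check. A sketch:

        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.ie.InternetExplorerDriver;
        import org.openqa.selenium.remote.DesiredCapabilities;

        // Sketch: start the IE driver while ignoring mismatched Protected Mode
        // zone settings. Assumes Selenium 2.x; treating this as the cause of
        // the "Modal dialog present" failure above is an unverified guess.
        public class IeDriverFactory {
            public static WebDriver create() {
                DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
                caps.setCapability(
                    InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS,
                    true);
                return new InternetExplorerDriver(caps);
            }
        }

    It is also worth checking whether a proxy prompt or first-run dialog is already open in IE before the test starts.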


  • Translate Java class with static attributes and Annotation to Scala equivalent

    - by ifischer
    I'm currently trying to "translate" the following Java class to an equivalent Scala class. It's part of a Java EE 6 application, and I need it to use the JPA 2 metamodel:

        import javax.persistence.metamodel.SingularAttribute;
        import javax.persistence.metamodel.StaticMetamodel;

        @StaticMetamodel(Person.class)
        public class Person_ {
            public static volatile SingularAttribute<Person, String> name;
        }

    Disassembling the compiled class file reveals the following structure:

        > javap Person_.class
        public class model.Person_ extends java.lang.Object{
            public static volatile javax.persistence.metamodel.SingularAttribute name;
            public model.Person_();
        }

    So now I need an equivalent Scala file that has the same structure, as JPA depends on it: it resolves the attributes by reflection to make them accessible at runtime. So the main problem, I think, is that the attribute must be static, but the annotation has to be on the (Java) class (I guess). My first naive attempt to create a Scala equivalent is the following:

        @StaticMetamodel(classOf[Person])
        class Person_

        object Person_ {
          @volatile var name: SingularAttribute[Person, String] = _;
        }

    But the resulting classfile is far from the Java one, so it doesn't work. When trying to access the attributes at runtime, e.g. Person_.name, it resolves to null; I think JPA can't do the right reflection magic on the compiled classfile (the Java variant resolves to an instance of org.hibernate.ejb.metamodel.SingularAttributeImpl at runtime).

        > javap Person_.class
        public class model.Person_ extends java.lang.Object implements scala.ScalaObject{
            public static final void name_$eq(javax.persistence.metamodel.SingularAttribute);
            public static final javax.persistence.metamodel.SingularAttribute name();
            public model.Person_();
        }

        > javap Person_$.class
        public final class model.Person_$ extends java.lang.Object implements scala.ScalaObject{
            public static final model.Person_$ MODULE$;
            public static {};
            public javax.persistence.metamodel.SingularAttribute name();
            public void name_$eq(javax.persistence.metamodel.SingularAttribute);
        }

    So now what I'd like to know is whether it's possible at all to create a Scala equivalent of the Java class. It seems to me that it's absolutely not, but maybe there is a workaround or something (except just using Java; I want my app to be in Scala where possible). Any ideas, anyone? Thanks in advance!


  • Delphi LoadLibrary Failing to find DLL other directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        if FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, the cert is missing, so the connection fails.

    Obviously, we should have fully qualified the path to the DLL. Without going into a lot of detail: there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works, as does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find the DLL that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? That's the sort of "magic bullet" I'm looking for. I know I can:

    1. Modify the Windows path, probably in the installer. That's ugly.
    2. Add a second copy of the DLL directly into the unwantedstepchild folder. Also ugly.
    3. Delay the project while we code and test a proper fix. Unacceptable.
    4. Other?

    Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!


  • Giving event control back to XAML/callee to access check/uncheck

    - by dovholuk
    Hi all, I've scoured the Internet, but my search-fu must be failing me, if the answer is even out there; I just can't find the proper search terms to get me the answer I am looking for. So I turn to the experts here as a last resort to point me in the right direction.

    What I'm trying to do is create a composite control of a text box and a list box, but I want to allow the control's consumer to decide what to do when, say, a checkbox is checked/unchecked, and I just can't figure it out. Maybe I'm going about it all wrong. What I've done thus far:

    1. Created a custom control that extends ListBox.
    2. Exposed a custom DP named "Text" (for the text box, but that's not the important part).
    3. Crafted the generic.xaml so that the list items have a default ItemTemplate/DataTemplate.
    4. Inside the DataTemplate, tried to set the "Checked" or "Unchecked" events.
    5. Exposed "wrapper" events as DPs in the custom control that would get set via the template when instantiated.

    As soon as I try something like the following (inside generic.xaml):

        <DataTemplate>
          <...>
          <CheckBox Checked="{TemplateBinding MyCheckedDP}"/>
          <...>
        </DataTemplate>

    I get runtime exceptions, and the designer (VS2010) pukes out a LONG list of errors that are all very similar; nothing I do can make it work. I went so far as to try using the VisualTreeHelper, but no magic combination I could find would work, nor would it allow me to traverse the tree, because when the OnApplyTemplate method fires, the listbox items don't exist yet and aren't in the tree.

    So hopefully this all makes sense, but if not, please let me know and I'll edit the post for clarification. Thanks to all for any pointers or thoughts. Like I said, maybe I'm heading about it the wrong way from the start.

    EDIT (request for XAML):

    generic.xaml DataTemplate:

        <DataTemplate>
          <StackPanel Orientation="Horizontal">
            <local:FilterCheckbox x:Name="chk">
              <TextBlock Text="{Binding Path=Display}" />
            </local:FilterCheckbox>
          </StackPanel>
        </DataTemplate>

    usercontrol.xaml (invocation of the custom control):

        <local:MyControl FancyName="This is the fancy name"
                         ItemChecked="DoThisWhenACheckboxIsChecked"   <-- this is where the consumer "ties" to the checkbox events
                         ItemsSource="{Binding Source={StaticResource someDataSource}}" />


  • BMP image header doubts

    - by vikramtheone
    Hi guys, I'm doing a project where I have to make use of the pixel information of a BMP image, so I'm gathering the image information by reading the header of the input .bmp image. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it?

    The header information of my .bmp image is as follows (my test image is very tiny and grayscale):

    BMP file header:
    - File size: 1210
    - Offset information: 1078

    BMP information header:
    - Image header size: 40
    - Image size: 132
    - Image width: 9
    - Image height: 11
    - Image bits per pixel: 8

    So, from the .bmp header I see that the image size is 132 (bytes), but when I multiply the width and height it is only 99. How is such a thing possible? I'm confident of the 132 bytes, because when I subtract the offset value from the file size I get 132 (1210 - 1078 = 132), and also when I manually count the number of bytes (in a hex editor) from the point 1078, or 436h (the end of the offset field), there are exactly 132 bytes of pixel information.

    So why is there a disparity between the size field and (width x height)? My future implementations depend on the image width and height information, not on the image size information, so I have to understand thoroughly what's going on here. My understanding of the header is clearly wrong somewhere... I guess!!! Help!!!

    Regards, Vikram

    My bmp structures are as follows:

        typedef struct bmpfile_magic {
            short magic;
        } BMP_MAGIC_NUMBER;

        typedef struct bmpfile_header {
            uint32_t filesz;
            uint16_t creator1;
            uint16_t creator2;
            uint32_t bmp_offset;
        } BMP_FILE_HEADER;

        typedef struct {
            uint32_t header_sz;
            uint32_t width;
            uint32_t height;
            uint16_t nplanes;
            uint16_t bitspp;
            uint32_t compress_type;
            uint32_t bmp_bytesz;
            uint32_t hres;
            uint32_t vres;
            uint32_t ncolors;
            uint32_t nimpcolors;
        } BMP_INFO_HEADER;
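
    The numbers above are consistent with BMP row padding: every scanline is padded to a multiple of 4 bytes, so a 9-pixel row at 8 bits per pixel occupies 12 bytes rather than 9, and 12 x 11 = 132. A quick check of that arithmetic in Java (the stride formula is the standard one for BMPs; the class itself is just an illustration):

        // Each BMP scanline is padded up to a multiple of 4 bytes.
        public class BmpStride {

            static int stride(int widthPixels, int bitsPerPixel) {
                // round the row's bit count up to a whole number of 32-bit words
                return ((widthPixels * bitsPerPixel + 31) / 32) * 4;
            }

            public static void main(String[] args) {
                int stride = stride(9, 8);         // 9 px wide, 8 bits per pixel
                System.out.println(stride);        // 12 bytes per padded row
                System.out.println(stride * 11);   // 132, matching the size field
            }
        }

    So width x height (99) counts pixels, while the image-size field counts padded bytes; code that walks the pixel data needs the stride, not the raw width.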


  • Hotfixing Code running inside Web Container with Groovy

    - by raoulsson
    I have a webapp running that has a bug. I know how to fix it in the sources. However, I cannot redeploy the app, as I would have to take it offline to do so (at least not right now). I now want to fix the code "at runtime". Surgery on the living object, so to speak. The app is implemented in Java and is built on top of Seam. I added a Groovy console to the app prior to the last release (a way to run arbitrary code at runtime).

    The normal way of adding behaviour to a class with Groovy would be similar to this:

        String.metaClass.foo = { x -> x * x }
        println "anything".foo(3)

    This code adds the method foo to java.lang.String and prints 9. I can do the same thing with classes running inside my webapp container; new instances will thereafter show the same behaviour:

        com.my.package.SomeService.metaClass.foo = { x -> x * x }
        def someService = new com.my.package.SomeService()
        println someService.foo(3)

    Works as expected. All good so far. My problem is that the container, or rather the web framework, Seam in this case, has already instantiated and cached the classes that I would like to manipulate (that is, change their behaviour to reflect my bug fix). Ideally this code would work:

        com.my.package.SomeService.metaClass.foo = { x -> x * x }
        def x = org.jboss.seam.Component.getInstance(com.my.package.SomeService)
        println x.foo(3)

    However, the instantiation of SomeService has already happened, and there is no effect. Thus I need a way to make my changes "sticky". Is the Groovy magic gone after my script has run? Well, after logging out and in again, I can run this piece of code and get the expected result:

        def someService = new com.my.package.SomeService()
        println someService.foo(3)

    So the foo method is still around, and it looks like my change has been permanent. So I guess the question that remains is how to force Seam to re-instantiate all its components, and/or how to permanently make the change on all living instances?

