Search Results

Search found 17621 results on 705 pages for 'just my correct opinion'.


  • DriverManager always returns my custom driver regardless of the connection URL

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to enable interaction with my driver for all applications, instead of requiring the application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL driver, the DriverManager is returning an instance of my driver instead of the MySQL driver. I'd appreciate any tips on what could be causing this and what could be done to fix it! Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see.
    First, from MyDriver.java: public MyDriver() throws SQLException { DriverManager.registerDriver(this); } public Connection connect(String url, Properties info) throws SQLException { try { return new MyConnection(info); } catch (Exception e) { return null; } } public boolean acceptsURL(String url) throws SQLException { if (url.contains("jdbc:jgb://")) { return true; } return false; } It is my understanding that this acceptsURL function will dictate whether or not the DriverManager deems my driver a suitable fit for a given URL. Hence it should only be passing connections from my driver if the URL contains "jdbc:jgb://", right?
    Here's code from MyConnection.java: Connection c1 = null; Connection c2 = null; /** *Constructors */ public DDBSConnection (Properties info) throws SQLException, Exception { info.list(System.out); //included for testing Class.forName("com.mysql.jdbc.Driver").newInstance(); String url1 = "jdbc:mysql://server1.com/jgb"; String url2 = "jdbc:mysql://server2.com/jgb"; this.c1 = DriverManager.getConnection( url1, info.getProperty("username"), info.getProperty("password")); this.c2 = DriverManager.getConnection( url2, info.getProperty("username"), info.getProperty("password")); }
    And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the mysql URLs instead of the desired MySQL driver/connection. FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
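
    The JDBC contract is that a Driver's connect() should return null for a URL it does not handle; DriverManager simply asks every registered driver in turn, so a wrapper whose connect() always returns a MyConnection will also answer the jdbc:mysql:// URLs opened inside MyConnection itself. As a hedged sketch (not the poster's actual code - it assumes the MyConnection class from the question is available and keeps the jdbc:jgb:// prefix), a connect() that declines foreign URLs looks roughly like this:

        import java.sql.Connection;
        import java.sql.Driver;
        import java.sql.DriverManager;
        import java.sql.DriverPropertyInfo;
        import java.sql.SQLException;
        import java.util.Properties;

        public class MyDriver implements Driver {
            private static final String URL_PREFIX = "jdbc:jgb://";

            static {
                // Register a single instance when the class loads, rather than in the constructor.
                try {
                    DriverManager.registerDriver(new MyDriver());
                } catch (SQLException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }

            public boolean acceptsURL(String url) throws SQLException {
                return url != null && url.startsWith(URL_PREFIX);
            }

            public Connection connect(String url, Properties info) throws SQLException {
                // Returning null tells DriverManager to keep trying other drivers, so the
                // jdbc:mysql:// URLs used inside MyConnection reach the real MySQL driver
                // instead of recursing back into this wrapper.
                if (!acceptsURL(url)) {
                    return null;
                }
                return new MyConnection(info); // the wrapper connection class from the question
            }

            public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
                return new DriverPropertyInfo[0];
            }

            public int getMajorVersion() { return 1; }
            public int getMinorVersion() { return 0; }
            public boolean jdbcCompliant() { return false; }
        }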

    Read the article

  • DataAnnotation attributes buddy class strangeness - ASP.NET MVC

    - by JK
    Given this POCO class that was automatically generated by an EntityFramework T4 template (has not and can not be manually edited in any way): public partial class Customer { [Required] [StringLength(20, ErrorMessage = "Customer Number - Please enter no more than 20 characters.")] [DisplayName("Customer Number")] public virtual string CustomerNumber { get;set; } [Required] [StringLength(10, ErrorMessage = "ACNumber - Please enter no more than 10 characters.")] [DisplayName("ACNumber")] public virtual string ACNumber{ get;set; } } Note that "ACNumber" is a badly named database field, so the autogenerator is unable to generate the correct display name and error message which should be "Account Number". So we manually create this buddy class to add custom attributes that could not be automatically generated: [MetadataType(typeof(CustomerAnnotations))] public partial class Customer { } public class CustomerAnnotations { [NumberCode] // This line does not work public virtual string CustomerNumber { get;set; } [StringLength(10, ErrorMessage = "Account Number - Please enter no more than 10 characters.")] [DisplayName("Account Number")] public virtual string ACNumber { get;set; } } Where [NumberCode] is a simple regex based attribute that allows only digits and hyphens: [AttributeUsage(AttributeTargets.Property)] public class NumberCodeAttribute: RegularExpressionAttribute { private const string REGX = @"^[0-9-]+$"; public NumberCodeAttribute() : base(REGX) { } } NOW, when I load the page, the DisplayName attribute works correctly - it shows the display name from the buddy class not the generated class. The StringLength attribute does not work correctly - it shows the error message from the generated class ("ACNumber" instead of "Account Number"). BUT the [NumberCode] attribute in the buddy class does not even get applied to the AccountNumber property: foreach (ValidationAttribute attrib in prop.Attributes.OfType<ValidationAttribute>()) { // This collection correctly contains all the [Required], [StringLength] attributes // BUT does not contain the [NumberCode] attribute ApplyValidation(generator, attrib); } Why does the prop.Attributes.OfType<ValidationAttribute>() collection not contain the [NumberCode] attribute? NumberCode inherits RegularExpressionAttribute which inherits ValidationAttribute so it should be there. If I manually move the [NumberCode] attribute to the autogenerated class, then it is included in the prop.Attributes.OfType<ValidationAttribute>() collection. So what I don't understand is why this particular attribute does not work in when in the buddy class, when other attributes in the buddy class do work. And why this attribute works in the autogenerated class, but not in the buddy. Any ideas? Also why does DisplayName get overriden by the buddy, when StringLength does not?

    Read the article

  • Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?

    - by BigJoe714
    I know this is probably possible using Streams, but I wasn't sure the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with InputStream input as an option. import java.io.*; import java.util.zip.GZIPOutputStream; import com.amazonaws.auth.PropertiesCredentials; import com.amazonaws.services.s3.AmazonS3; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.PutObjectRequest; public class FileStore { public static void Save(String data) throws IOException { File file = File.createTempFile("filemaster-", ".htm"); file.deleteOnExit(); Writer writer = new OutputStreamWriter(new FileOutputStream(file)); writer.write(data); writer.flush(); writer.close(); String zippedFilename = gzipFile(file.getAbsolutePath()); File zippedFile = new File(zippedFilename); zippedFile.deleteOnExit(); AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials( new FileInputStream("AwsCredentials.properties"))); String bucketName = "mybucket"; String key = "test/" + zippedFile.getName(); s3.putObject(new PutObjectRequest(bucketName, key, zippedFile)); } public static String gzipFile(String filename) throws IOException { try { // Create the GZIP output stream String outFilename = filename + ".gz"; GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename)); // Open the input file FileInputStream in = new FileInputStream(filename); // Transfer bytes from the input file to the GZIP output stream byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } in.close(); // Complete the GZIP file out.finish(); out.close(); return outFilename; } catch (IOException e) { throw e; } } }
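
    For what it's worth, one way to keep the whole thing in memory is to gzip into a ByteArrayOutputStream and hand S3 a ByteArrayInputStream together with an ObjectMetadata that carries the content length. This is only a sketch against the same SDK classes the question already imports; the bucket name, key and UTF-8 charset are assumptions, not taken from the original code:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.util.zip.GZIPOutputStream;
        import com.amazonaws.auth.PropertiesCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.ObjectMetadata;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class FileStore {
            public static void save(String data) throws IOException {
                // Gzip the string entirely in memory.
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                Writer writer = new OutputStreamWriter(new GZIPOutputStream(bytes), "UTF-8");
                writer.write(data);
                writer.close(); // closing the writer also finishes the gzip stream

                byte[] zipped = bytes.toByteArray();

                AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(
                        new FileInputStream("AwsCredentials.properties")));

                // With a raw InputStream, S3 wants the content length up front.
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentLength(zipped.length);
                metadata.setContentEncoding("gzip");

                s3.putObject(new PutObjectRequest("mybucket", "test/data.htm.gz",
                        new ByteArrayInputStream(zipped), metadata));
            }
        }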

    Read the article

  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level: I hack on my new feature for a while I notice a typo in a comment I change it Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes I keep working on the code I realize I need to flesh out a few utility functions I do so I put that change in its own pile Steps 2, 3, and 4 each repeat throughout the day I finish the new feature and put the changes for that feature in a pile I push nice patches upstream: One with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated Since I'm both lazy and a perfectionist, I want to be able to do some things out of order: I might correct a typo but forget to put it in the comment fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll then pull out the comment fixes at that point. I might forget to separate things altogether until the very end. But I might /also/ have committed some of the piles along the way (sorry, the metaphor is breaking down …). This is all rather like just using Eclipse changesets with SVN, only I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact …), and with Git I can have my full discombobulated change history, experimental branches and all, but still make a nice, neat patch. I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map into Git, though, is that a “bin” can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a “pile,” and what I want is to have more than one somehow (effectively). I can think of a few ways to approximate what I want (maybe creative use of stash? Intricate stash-checkout-merge dances?), but my grasp on Git isn't solid enough to be sure of how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: How do I build this thing with these tools?

    Read the article

  • Custom event dispatch location

    - by Martino Wullems
    Hello, I've been looking into custom events (and listeners) for quite some time, but never succeeded in making one. There are so many different methods (extending the Event class, but also extending the EventDispatcher class), very confusing! I want to settle this once and for all and learn the appropriate technique.
    package{ import flash.events.Event; public class CustomEvent extends Event{ public static const TEST:String = 'test'; //what exactly is the purpose of the value in the string? public var data:Object; public function CustomEvent(type:String, bubbles:Boolean = false, cancelable:Boolean = false, data:Object = null):void { this.data = data; super(); } } }
    As far as I know, a custom class where you set the requirements for the event to be dispatched has to be made: package { import flash.display.MovieClip; public class TestClass extends MovieClip { public function TestClass():void { if (ConditionForHoldToComplete == true) { dispatchEvent(new Event(CustomEvent.TEST)); } } } } I'm not sure if this is correct, but it should be something along the lines of this.
    Now what I want is something like a MouseEvent, which can be applied to a target and does not require a specific class. It would have to work something like this: package com.op_pad._events{ import flash.events.MouseEvent; import flash.utils.Timer; import flash.events.TimerEvent; import flash.events.EventDispatcher; import flash.events.Event; public class HoldEvent extends Event { public static const HOLD_COMPLETE:String = "hold completed"; var timer:Timer; public function SpriteEvent(type:String, bubbles:Boolean=true, cancelable:Boolean=false) { super( type, bubbles, cancelable ); timer = new Timer(1000, 1); //somehow find the target where is event is placed upon -> target.addEventlistener target.addEventListener(MouseEvent.MOUSE_DOWN, startTimer); target.addEventListener(MouseEvent.MOUSE_UP, stopTimer); } public override function clone():Event { return new SpriteEvent(type, bubbles, cancelable); } public override function toString():String { return formatToString("MovieEvent", "type", "bubbles", "cancelable", "eventPhase"); } ////////////////////////////////// ///// c o n d i t i o n s ///// ////////////////////////////////// private function startTimer(e:MouseEvent):void { timer.start(); timer.addEventListener(TimerEvent.TIMER_COMPLETE, complete); } private function stopTimer(e:MouseEvent):void { timer.stop() } public function complete(e:TimerEvent):void { dispatchEvent(new HoldEvent(HoldEvent.HOLD_COMPLETE)); } } }
    This obviously won't work, but it should give you an idea of what I want to achieve. This should be possible, because MouseEvent can be applied to just about everything. The main problem is that I don't know where I should set the requirements for the event to be dispatched so that I can apply it to movieclips and sprites. Thanks in advance

    Read the article

  • How to force VS 2010 to skip "builds" of projects which haven't changed?

    - by Ladislav Mrnka
    Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code. Rebuilding the whole solution takes several minutes. That's fine. If I want to rebuild the solution, I expect that it will really take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s if there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time. That 35s delay is a pain in the ass.
    Yes, I can improve the time by not using Build Solution but only Build Project (Shift+F6). If I run a project build on the particular test project I'm currently working on, it takes "only" 8+s. It requires me to run the project build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, and rerunning a test usually costs only that 8+s compilation. My current coding kata is: don't touch Ctrl+Shift+B. The test project build will take 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" dependencies - in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch other projects.
    Is there a way to simply tell VS to build only projects where changes were made, plus projects which depend on the changed ones (preferably the latter as a separate build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way ... I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute). To make this even more frustrating, I'm working in a team where most of the developers worked on Java projects prior to joining this one, so you can imagine how pissed off they are when they must use VS, in contrast to full incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions - especially in a product like VS 2010 Ultimate, which costs several thousand dollars. I really don't want to get answers like: make a separate solution, unload projects you don't need, etc. I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.

    Read the article

  • working on lists in python

    - by owca
    I'm trying to make a small modification to django lfs project, that will allow me to deactivate products with no stocks. Unfortunatelly I'm just beginning to learn python, so I have big trouble with its syntax. That's what I'm trying to do. I'm using method 'has_variants' which returns true if product has any. Then I'm building a list from variants for this product. Next for every product in this list (I've called it 'set') I check it's stock and set bool variable 'inactive' to true if product has no stocks and to false if there are any. Finally if 'inactive' is false I'm setting self.active to 0. Code fails in line with: set[] = s How to correct it ? def deactivate(self): """If there are no stocks, deactivate the product. Used in last step of checkout. """ if self.has_variants(): for s in self.variants.filter(active=True): set[] = s for var in set: if var.get_stock_amount() == 0: inactive = True else: inactive = False else: if self.get_stock_amount() == 0: inactive = True if inactive: self.active = False return 0 error log : Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__i nit__.py", line 362, in execute_manager utility.execute() File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__i nit__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/purplecow/rails/purpledev/site-packages/django/core/management/bas e.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/home/purplecow/rails/purpledev/site-packages/django/core/management/bas e.py", line 213, in execute translation.activate('en-us') File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/_ _init__.py", line 73, in activate return real_activate(language) File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/_ _init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/t rans_real.py", line 205, in activate _active[currentThread()] = translation(language) File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/t rans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/t rans_real.py", line 180, in _fetch app = import_module(appname) File "/home/purplecow/rails/purpledev/site-packages/django/utils/importlib.py" , line 35, in import_module __import__(name) File "/home/purplecow/rails/purpledev/lfs/caching/__init__.py", line 1, in <mo dule> from listeners import * File "/home/purplecow/rails/purpledev/lfs/caching/listeners.py", line 10, in < module> from lfs.cart.models import Cart File "/home/purplecow/rails/purpledev/lfs/cart/models.py", line 8, in <module> from lfs.catalog.models import Product File "/home/purplecow/rails/purpledev/lfs/catalog/__init__.py", line 1, in <mo dule> from listeners import * File "/home/purplecow/rails/purpledev/lfs/catalog/listeners.py", line 5, in <m odule> from lfs.catalog.models import PropertyGroup File "/home/purplecow/rails/purpledev/lfs/catalog/models.py", line 589 set[] = s ^ SyntaxError: invalid syntax

    Read the article

  • GAE Datastore: persisting referenced objects

    - by David
    Two "I'm sorries" to begin with: 1) I've looked for a solution (here, and elsewhere), and couldn't find the answer. 2) English is not my mother tongue, so I may have some typos and the sort - please ignore them. To the point: I am trying to persist Java objects to the GAE datastore. I am not sure as to how to persist object having ("non-trivial") referenced object. That is, assume I have the following. public class Father { String name; int age; Vector<Child> offsprings; //this is what I call "non-trivial" reference //ctor, getters, setters... } public class Child { String name; int age; Father father; //this is what I call "non-trivial" reference //ctor, getters, setters... } The name field is unique in each type domain, and is considered a Primary-Key. In order to persist the "trivial" (String, int) fields, all I need is to add the correct annotation. So far so good. However, I don't understand how should I persist the home-brewed (Child, Father) types referenced. Should I: Convert each such reference to hold the Primary-Key (a name String, in this example) instead of the "actual" object. So, Vector<Child> offsprings; becomes Vector<String> offspringsNames; ? If that is the case, how do I handle the object at run-time? Do I just query for the Primary-Key from Class.getName, to retrieve the refrenced objects? Convert each such reference to hold the actual Key provided to me by the Datastore upon the proper put() operation? So, Vector<Child> offsprings; becomes Vector<Key> offspringsHashKeys; ? I would very much appreciate all kinds of comments. I have read ALL the offical relevant GAE docs/example. Throughout, they always persist "trivial" references, natively supported by the Datastore (e.g. in the Guestbook example, only Strings, and Longs). Many thanks, David
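
    As a sketch of the second option (storing datastore Keys rather than the objects themselves), and assuming the JDO interface from the GAE docs rather than JPA or the low-level API, the Father side could look roughly like this - the class and field names follow the question, everything else is illustrative:

        import java.util.ArrayList;
        import java.util.List;

        import javax.jdo.annotations.IdentityType;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class Father {
            @PrimaryKey
            private Key key;                 // built from the unique name, so it doubles as the primary key

            @Persistent
            private String name;

            @Persistent
            private int age;

            // "Unowned" relationship: keep the children's keys, not the Child objects.
            @Persistent
            private List<Key> offspringKeys = new ArrayList<Key>();

            public Father(String name, int age) {
                this.key = KeyFactory.createKey(Father.class.getSimpleName(), name);
                this.name = name;
                this.age = age;
            }

            public void addOffspring(Key childKey) {
                offspringKeys.add(childKey);
            }

            public List<Key> getOffspringKeys() {
                return offspringKeys;
            }
        }

    At run time the referenced Child objects are then fetched explicitly, e.g. with pm.getObjectById(Child.class, childKey) for each stored key; the alternative is an owned one-to-many relationship, where the datastore maintains the child keys for you.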

    Read the article

  • Clicking issue in Flash and XML

    - by ortho
    Hi, I have a PHP backend that serves an XML page with data for Flash to consume. Flash takes it and creates text fields dynamically based on this information. I have a few items in a menu on top, and when I click one of them, data is taken from PHP and everything is displayed in a scrolling area in Flash. The problem is that if I click too fast between menu items, I get a buggy layout. The text (only the text) becomes part of the layout and is displayed no matter what menu item I am currently in, and only refreshing the page helps.
    var myXML:XML = new XML(); myXML.ignoreWhite=true; myXML.load("/getBud1.php"); myXML.onLoad = function(success){ if (success){ var myNode = this.firstChild.childNodes; var myTxt:Array = Array(0); for (var i:Number = 0; i<myNode.length; i++) { myTxt[i] = "text"+i+"content"; createTextField(myTxt[i],i+1,65,3.5,150, 20); var pole = eval(myTxt[i]); pole.embedFonts = true; var styl:TextFormat = new TextFormat(); styl.font = "ArialFont"; pole.setNewTextFormat(styl); pole.text = String(myNode[i].childNodes[1].firstChild.nodeValue); pole.wordWrap = true; pole.autoSize = "left"; if(i > 0) { var a:Number = eval(myTxt[i-1])._height + eval(myTxt[i-1])._y + 3; pole._y = a; } attachMovie("kropka2", "test"+i+"th", i+1000); eval("test"+i+"th")._y = pole._y + 5; eval("test"+i+"th")._x = 52; } } }
    I tried to load the info and create the text fields from the top frame and then refer to the correct place by an instance-name string, e.g. budData.dataHolder.holder.createTextField, but then when I change between menu items the text disappears completely until I refresh the page. Please help.

    Read the article

  • php switch statement error on int = 0

    - by Jagdeep Singh
    I am having a problem with a PHP switch case. When I set $number = 0 it should run the very first case, but this code returns 10-20K, which is in the second case. I checked the comparison operators and tested them in an if/else; there they return correct values, but here the first case does not run for $number = 0. Why is this happening? Does PHP consider 0 as false, or is something wrong in the code? Link to the codepad paste: http://codepad.org/2glDh39K. Here is the code as well:
    <?php $number = 0; switch ($number) { case ($number <= 10000): echo "0-10K"; break; case ($number > 10000 && $number <= 20000): echo "10-20K"; break; case ($number > 20000 && $number <= 30000): echo "20-30K"; break; case ($number > 30000 && $number <= 40000): echo "30-40K"; break; case ($number > 40000 && $number <= 50000): echo "40-50K"; break; case ($number > 50000 && $number <= 60000): echo "50-60K"; break; case ($number > 60000 && $number <= 70000): echo "60-70K"; break; case ($number > 70000 && $number <= 80000): echo "70-80K"; break; case ($number > 80000 && $number <= 90000): echo "80-90K"; break; case ($number > 90000): echo "90K+"; break; default: //default echo "N/A"; break; } ?>

    Read the article

  • Why does SFINAE not apply to this?

    - by Simon Buchan
    I'm writing some simple point code while trying out Visual Studio 10 (Beta 2), and I've hit this code where I would expect SFINAE to kick in, but it seems not to: template<typename T> struct point { T x, y; point(T x, T y) : x(x), y(y) {} }; template<typename T, typename U> struct op_div { typedef decltype(T() / U()) type; }; template<typename T, typename U> point<typename op_div<T, U>::type> operator/(point<T> const& l, point<U> const& r) { return point<typename op_div<T, U>::type>(l.x / r.x, l.y / r.y); } template<typename T, typename U> point<typename op_div<T, U>::type> operator/(point<T> const& l, U const& r) { return point<typename op_div<T, U>::type>(l.x / r, l.y / r); } int main() { point<int>(0, 1) / point<float>(2, 3); } This gives error C2512: 'point<T>::point' : no appropriate default constructor available Given that it is a beta, I did a quick sanity check with the online comeau compiler, and it agrees with an identical error, so it seems this behavior is correct, but I can't see why. In this case some workarounds are to simply inline the decltype(T() / U()), to give the point class a default constructor, or to use decltype on the full result expression, but I got this error while trying to simplify an error I was getting with a version of op_div that did not require a default constructor*, so I would rather fix my understanding of C++ rather than to just do what works. Thanks! *: the original: template<typename T, typename U> struct op_div { static T t(); static U u(); typedef decltype(t() / u()) type; }; Which gives error C2784: 'point<op_div<T,U>::type> operator /(const point<T> &,const U &)' : could not deduce template argument for 'const point<T> &' from 'int', and also for the point<T> / point<U> overload.

    Read the article

  • Shift+Tab not working in TreeView control

    - by Christian
    I cannot get backwards navigation using Shift+Tab to work in a TreeView that contains TextBoxs, forward navigation using Tab works fine and jump from TextBox to TextBox inside the TreeView. Anytime Shift+Tab is used when one of the TextBoxes inside the TreeView, then the focus is move to the previous control outside the TreeView, instead of the previous control inside the TreeView. Also its only Shift+Tab navigation that are not working correctly, Ctrl+Shift+Tab work as expected and in the correct order. Any suggestions to what I'm doing wrong? Example code: <Window x:Class="TestTabTreeView.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <Window.Resources> <Style TargetType="TreeViewItem"> <Setter Property="KeyboardNavigation.TabNavigation" Value="Continue" /> </Style> </Window.Resources> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <TextBox Text="First Line" Grid.Row="0" /> <TreeView Grid.Row="1" KeyboardNavigation.TabNavigation="Continue" IsTabStop="False"> <TreeViewItem IsExpanded="True"><TreeViewItem.Header><TextBox Text="Popular Words"/></TreeViewItem.Header> <TreeViewItem><TreeViewItem.Header><TextBox Text="Foo"/></TreeViewItem.Header></TreeViewItem> <TreeViewItem><TreeViewItem.Header><TextBox Text="Bar"/></TreeViewItem.Header></TreeViewItem> <TreeViewItem><TreeViewItem.Header><TextBox Text="Hello"/></TreeViewItem.Header></TreeViewItem> </TreeViewItem> <TreeViewItem IsExpanded="True"><TreeViewItem.Header><TextBox Text="Unpopular Words"/></TreeViewItem.Header> <TreeViewItem><TreeViewItem.Header><TextBox Text="Work"/></TreeViewItem.Header></TreeViewItem> <TreeViewItem><TreeViewItem.Header><TextBox Text="Duplication"/></TreeViewItem.Header></TreeViewItem> </TreeViewItem> </TreeView> <TextBox Text="Last Line" Grid.Row="2" /> </Grid>

    Read the article

  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help... Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.phx.batterylogger" android:versionCode="1" android:versionName="1.0" android:installLocation="internalOnly"> <uses-sdk android:minSdkVersion="8" /> <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" /> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.BATTERY_STATS" /> <application android:icon="@drawable/icon" android:label="@string/app_name"> <service android:name=".BatteryLogger"/> <receiver android:name=".StartupIntentReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> </intent-filter> </receiver> </application> </manifest> BroadcastReceiver for Startup: package com.phx.batterylogger; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; public class StartupIntentReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { Intent serviceIntent = new Intent(context, BatteryLogger.class); context.startService(serviceIntent); } } UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service"); to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver. I think I'm deploying the APK and testing correctly, just running Debug in Eclipse and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then to test, I reboot the tablet and then look at the logs in DDMS and check the Running Services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.
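
    Two things that are cheap to rule out: add a log line as the first statement of onReceive() to see whether the broadcast arrives at all (on Android 3.1+ a newly installed app sits in the "stopped" state and will not receive BOOT_COMPLETED until it has been launched manually once), and make sure the IntentService has the public no-argument constructor the system needs to instantiate it. A minimal sketch of the service, with the class name taken from the manifest above and the rest illustrative:

        import android.app.IntentService;
        import android.content.Intent;
        import android.util.Log;

        public class BatteryLogger extends IntentService {

            private static final String TAG = "BatteryLogger";

            // Required: the system instantiates the service reflectively, so it must
            // have a public constructor that takes no arguments.
            public BatteryLogger() {
                super("BatteryLogger");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                Log.v(TAG, "Started from boot broadcast, logging battery stats...");
                // ... battery logging work goes here ...
            }
        }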

    Read the article

  • Multiple Connection Types for one Designer Generated TableAdapter

    - by Tim
    I have a Windows Forms application with a DataSet (.xsd) that is currently set to connect to a Sql Ce database. Compact Edition is being used so the users can use this application in the field without an internet connection, and then sync their data at day's end. I have been given a new project to create a supplemental web interface for displaying some of the same reports as the Windows Forms application so certain users can obtain reports without installing the Windows app. What I've done so far is create a new Web Project and added it to my current Solution. I have split both the reports (.rdlc) and DataSets out of the Windows Forms project into their own projects so they can be accessed by both the Windows and Web applications. So far, this is working fine. Here's my dilemma: As I said before, the DataSets are currently set up to connect to a local Sql Ce database file. This is correct for the Windows app, but for the Web application I would like to use these same TableAdapters and queries to connect to the Sql Server 2005 database. I have found that the designer generated, strongly-typed TableAdapter classes have a ConnectionModifier property that allows you to make the TableAdapter's Connection public. This exposes the Connection property and allows me to set it, however it is strongly-typed as a SqlCeConnection, whereas I would like to set it to a SqlConnection for my Web project. I'm assuming the DataSet Designer strongly-types the Connection, Command, and DataAdapter objects based on the Provider of the ConnectionString as indicated in the app.config file. Is there any way I can use some generic provider so that the DataSet Designer will use object types that can connect to both a Sql Ce database file AND the actual Sql Server 2005 database? I know that SqlCeConnection and SqlConnection both inherit from DbConnection, which implements IDbConnection. Relatively, the same goes for SqlCeCommand/SqlCommand:DbCommand:IDbCommand. It would be nice if I could just figure out a way for the designer to use the Interface types rather than the strong types, but I'm hesitant that that is possible. I hope my problem and question are clear. Any help is much appreciated. Let me know if there's anything I can clarify.

    Read the article

  • Finite State Machine Spellchecker

    - by Durell
    I would love to have a debugged copy of the finite state machine code below. I tried debugging but could not, all the machine has to do is to spell check the word "and",an equivalent program using case is welcomed. #include<cstdlib> #include<stdio.h> #include<string.h> #include<iostream> #include<string> using namespace std; char in_str; int n; void spell_check() { char data[256]; int i; FILE *in_file; in_file=fopen("C:\\Users\\mytorinna\\Desktop\\a.txt","r+"); while (!feof(in_file)) { for(i=0;i<256;i++) { fscanf(in_file,"%c",in_str); data[i]=in_str; } //n = strlen(in_str); //start(data); cout<<data; } } void start(char data) { // char next_char; //int i = 0; // for(i=0;i<256;i++) // if (n == 0) { if(data[i]="a") { state_A(); exit; } else { cout<<"I am comming"; } // cout<<"This is an empty string"; // exit();//do something here to terminate the program } } void state_A(int i) { if(in_str[i] == 'n') { i++; if(i<n) state_AN(i); else error(); } else error(); } void state_AN(int i) { if(in_str[i] == 'd') { if(i == n-1) cout<<" Your keyword spelling is correct"; else cout<<"Wrong keyword spelling"; } } int main() { spell_check(); system("pause"); return 0; }

    Read the article

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c: /* * If ".." must be changed (ie the directory gets a new * parent) then the source directory must not be in the * directory heirarchy above the target, as this would * orphan everything below the source directory. Also * the user must have write permission in the source so * as to be able to change "..". We must repeat the call * to namei, as the parent directory is unlocked by the * call to checkpath(). */ if (oldparent != dp->i_number) newparent = dp->i_number; if (doingdirectory && newparent) { VOP_LOCK(fndp->ni_vp); error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred); VOP_UNLOCK(fndp->ni_vp); So clearly this check was added intentionally. My question is - why? Is this behavior supposed to be intuitive? The effect of this is that one cannot move a directory (located in a directory that one can write) that one cannot write to another directory that one can write to atomically. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically. % cd /tmp % mkdir stackoverflow-question % cd stackoverflow-question % mkdir directory-1 % mkdir directory-2 % mkdir directory-1/directory-i-cant-write % echo "foo" > directory-1/directory-i-cant-write/contents % chmod 000 directory-1/directory-i-cant-write/contents % chmod 000 directory-1/directory-i-cant-write % mv directory-1/directory-i-cant-write directory-2 mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied We now have a directory I can't write with contents I can't read that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions. (Left as an exercise to the reader) . and .. are special cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change .." which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?

    Read the article

  • Jquery Galleryview

    - by kwek-kwek
    I am working on a site that requires a jquery Galleryview now I have everything set up but my images are not showing. here is my header code: <script type="text/javascript" src="=/js/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/js/jquery.easing.1.3.js"></script> <script type="text/javascript" src="/js/jquery.timers-1.1.2.js"></script> <script type="text/javascript" src="js/jquery.galleryview-1.1.js"></script> <script> $('#photos').galleryView({ panel_width: 506, panel_height: 275, transition_speed: 1500, transition_interval: 5000, nav_theme: 'dark', border: '1px solid white', pause_on_hover: true }); </script> My gallery code: <div id="photos" class="galleryview"> <div class="panel"> <img src="<?php bloginfo('template_url'); ?>/images/condo-one/1.jpg" alt="image1" /> </div> <div class="panel"> <img src="<?php bloginfo('template_url'); ?>/images/condo-one/2.jpg" alt="image1" /> </div> <div class="panel"> <img src="<?php bloginfo('template_url'); ?>/images/condo-one/3.jpg" alt="image1" /> </div> <div class="panel"> <img src="<?php bloginfo('template_url'); ?>/images/condo-one/4.jpg" /> </div> <div class="panel"> <img src="<?php bloginfo('template_url'); ?>/images/condo-one/5.jpg" /> </div> </div> The problem is that the images are not showing but it's the correct path view the site demo here please help.

    Read the article

  • SurfaceView for Camera Preview won't get destroyed when pressing Power-Button

    - by for3st
    I want to implement a camera preview. For that I have a custom View CameraView extends ViewGroup that in the constructor programatically creates an surfaceView. I have the following components (higly simplified for beverity): ScannerFragment.java public View onCreateView(..) { //inflate view and get cameraView } public void onResume() { //open camera -> set rotation -> startPreview (in a thread) -> //set preview callback -> start decoding worker } public void onPause() { // stop decoding worker -> stop Preview -> release camera } CameraView.java extends ViewGroup public void setUpCalledInConstructor(Context context) { //create a surfaceview and add it to this viewgroup -> //get SurfaceHolder and set callback } /* SurfaceHolder.Callback */ public void surfaceCreated(SurfaceHolder holder) { camera.setPreviewDisplay(holder); } public void surfaceDestroyed(SurfaceHolder holder) { //NOTHING is done here } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { camera.getParameters().setPreviewSize(previewSize.width, previewSize.height); } fragment_scanner.xml <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent"> <com.myapp.camera.CameraView android:id="@+id/cameraPreview" android:layout_width="match_parent" android:layout_height="match_parent"/> </RelativeLayout> I think I have set the lifecycle correct (getting resources onResume(), releasing it onPause() roughly said) and the following works just fine: pressing home and returning pressing Taskswitcher and returning rotation But one thing doesn't work and that is when I press the power-button on the device and then return to the camera-preview. The result is: the preview is stuck with the image that was last captured before button was pressed. If I rotate it works fine again, since it will get through the lifecycle. After some research I found out that this is probably due to the fact that surfaceView won't get destroyed when the power-button is pressed, i.e. SurfaceHolder.Callback.surfaceDestroyed(SurfaceHolder holder) won't be called. And in fact when I compare the (very verbose) log output of the home-button-case and the power-button-case it's the same except that 'surfaceDestroyed' won't get called. So far I found no solution whatsoever to work around it. I purposely avoid any resource cleaning code in my surfaceDestroyed(), but this does not help. My idea was to manually destroy the surfaceView like asked in this question but this seems not possible. I also tested other applications with surfaceViews/cameras and they don't seem to have this issue. So I would appreciate any hints or tips on that.
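
    Since surfaceDestroyed() never fires when the screen merely powers off, surfaceCreated() will not fire when it comes back either, so nothing re-attaches the freshly re-opened camera to the still-living surface. One workaround is to remember whether the surface is alive and restart the preview from onResume() when it is. The sketch below is illustrative only - the class and method names are not from the question's code:

        import android.hardware.Camera;
        import android.view.SurfaceHolder;

        import java.io.IOException;

        // Sketch of the surface-tracking idea, assuming the fragment re-opens the
        // camera in onResume() and hands it to this helper.
        public class PreviewController implements SurfaceHolder.Callback {

            private Camera camera;
            private boolean hasSurface = false;
            private SurfaceHolder holder;

            public void surfaceCreated(SurfaceHolder holder) {
                this.holder = holder;
                hasSurface = true;
                startPreview();
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                this.holder = holder;
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                hasSurface = false;
            }

            // Call from the fragment's onResume() after Camera.open(). When the screen
            // was only switched off, the surface is still alive, surfaceCreated() will
            // not fire again, and the preview has to be restarted here.
            public void onCameraOpened(Camera camera) {
                this.camera = camera;
                if (hasSurface) {
                    startPreview();
                }
            }

            private void startPreview() {
                if (camera == null || holder == null) {
                    return;
                }
                try {
                    camera.setPreviewDisplay(holder);
                    camera.startPreview();
                } catch (IOException e) {
                    // Log and give up; the fragment can decide how to recover.
                }
            }
        }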

    Read the article

  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify here (without too much detail, these tools have been covered in other questions) my understanding of the options: NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language. Inno Setup - Free tools for setup executables. Various binary compression schemes. Pascal scripting engine. WIX - Free toolset to generate MSI binaries. XML definitions language. WIX ClickThrough - Additional tools for packaging, web download and auto update detection (now part of WIX core). InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language. Wise - Commercial development environment for installation packaging. Generates MSI binaries. ClickOnce - Visual Studio supported framework for publishing applications to a webserver, with automatic detection of updates. No support for custom installation requirements (INI files, registry etc ...). Packages setup as an MSI binary. Install Aware - Commercial development environment for installation. Generates MSI binaries. Automatic Update framwork (Web Update). If I have missed any, please let me know. And found some useful discussions of these technologies on StackOverflow: Best Simple Install System Best choice for Windows installers Alternatives to ClickOnce I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packing installations and providing a framework for developers to access the run time environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates, I haven't really found that consensus. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but they each have considerable limitations (please correct me if I mis-represent any of these). I would be interested in a framework that provided some of the following: Third party hosting/management of updates. Access to client environment (INI files, registry, etc..). User registration/activation. Feedback/Error reporting This is leaving me with the strong impression that the best way to approach the web deployment problem is through a custom built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications: FileZilla - HTTP request to update.filezilla-project.org to check for updates, downloads an NSIS binary (I think) and then shuts down to run the install.

    Read the article

  • log4j performance

    - by Bob
    Hi, I'm developing a web app, and I'd like to log some information to help me improve and observe the app. (I'm using Tomcat6) First I thought I would use StringBuilders, append the logs to them and a task would persist them into the database like every 2 minutes. Because I was worried about the out-of-the-box logging system's performance. Then I made some test. Especially with log4j. Here is my code: Main.java public static void main(String[] args) { Thread[] threads = new Thread[LoggerThread.threadsNumber]; for(int i = 0; i < LoggerThread.threadsNumber; ++i){ threads[i] = new Thread(new LoggerThread("name - " + i)); } LoggerThread.startTimestamp = System.currentTimeMillis(); for(int i = 0; i < LoggerThread.threadsNumber; ++i){ threads[i].start(); } LoggerThread.java public class LoggerThread implements Runnable{ public static int threadsNumber = 10; public static long startTimestamp; private static int counter = 0; private String name; public LoggerThread(String name) { this.name = name; } private Logger log = Logger.getLogger(this.getClass()); @Override public void run() { for(int i=0; i<10000; ++i){ log.info(name + ": " + i); if(i == 9999){ int c = increaseCounter(); if(c == threadsNumber){ System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTimestamp)); } } } } private synchronized int increaseCounter(){ return ++counter; } } } log4j.properties log4j.logger.main.LoggerThread=debug, f log4j.appender.f=org.apache.log4j.RollingFileAppender log4j.appender.f.layout=org.apache.log4j.PatternLayout log4j.appender.f.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n log4j.appender.f.File=c:/logs/logging.log log4j.appender.f.MaxFileSize=15000KB log4j.appender.f.MaxBackupIndex=50 I think this is a very common configuration for log4j. First I used log4j 1.2.14 then I realized there was a newer version, so I switched to 1.2.16 Here are the figures (all in millisec) LoggerThread.threadsNumber = 10 1.2.14: 4235, 4267, 4328, 4282 1.2.16: 2780, 2781, 2797, 2781 LoggerThread.threadsNumber = 100 1.2.14: 41312, 41014, 42251 1.2.16: 25606, 25729, 25922 I think this is very fast. Don't forget that: in every cycle the run method not just log into the file, it has to concatenate strings (name + ": " + i), and check an if test (i == 9999). When threadsNumber is 10, there are 100.000 loggings and if tests and concatenations. When it is 100, there are 1.000.000 loggings and if tests and concatenations. (I've read somewhere JVM uses StringBuilder's append for concatenation, not simple concatenation). Did I missed something? Am I doing something wrong? Did I forget any factor that could decrease the performance? If these figures are correct I think, I don't have to worry about log4j's performance even if I heavily log, do I?
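
    Those numbers already look healthy; the two knobs that usually move them further are buffered file I/O on the appender and guarding calls whose message is expensive to build. Below is a sketch of the same appender configured programmatically with buffering turned on - the buffer size is an arbitrary example, and whether buffering is acceptable depends on how much log output you can afford to lose on a crash:

        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.RollingFileAppender;

        public class Log4jTuning {
            public static void main(String[] args) {
                RollingFileAppender f = new RollingFileAppender();
                f.setLayout(new PatternLayout("%d{ABSOLUTE} %5p %c{1}:%L - %m%n"));
                f.setFile("c:/logs/logging.log");
                f.setAppend(true);
                f.setMaxFileSize("15000KB");
                f.setMaxBackupIndex(50);
                // Buffered I/O batches disk writes instead of flushing every event.
                f.setBufferedIO(true);
                f.setBufferSize(8 * 1024);
                f.activateOptions();
                Logger.getRootLogger().addAppender(f);

                Logger log = Logger.getLogger(Log4jTuning.class);
                // Guarding a level avoids the string concatenation when that level is off.
                if (log.isInfoEnabled()) {
                    log.info("buffered appender in place");
                }
            }
        }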

    Read the article

  • Can parser combination be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F# and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it: import Control.Applicative import Text.Parsec hiding ((<|>)) expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-') term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/') fact = read <$> many1 digit <|> char '(' *> expr <* char ')' eval :: String -> Int eval = either (error . show) id . parse expr "" . filter (/= ' ') main :: IO () main = do file <- readFile "expr" putStr $ show $ eval file putStr "\n" and here's my self-contained precedence climbing parser in F#: let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs)) and shift oop f op (P(g, xs)) = let h, xs = loop (op, g, xs) loop (oop, f h, xs) and loop = function | ' ' as oop, f, ('+' | '-' as op)::P(g, xs) | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) | oop, f, ('^' as op)::P(g, xs) -> let h, xs = loop (op, g, xs) let op = match op with | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/) | '^' -> pown loop (oop, op f h, xs) | _, f, xs -> f, xs and (|P|) = function | '-'::P(f, xs) -> let f, xs = loop ('~', f, xs) P(-f, xs) | '('::Expr(f, ')'::xs) -> P(f, xs) | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs) My impression is that even state-of-the-art parser combinators waste a lot of time back tracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance or is it necessary to use code generation?

    Read the article

  • How to have member variables and public methods in a jQuery plugin?

    - by user169867
    I'm trying to create a jQuery plugin that will create something like an autoCompleteBox but with custom features. How do I store member variables for each matching jQuery element? For example I'll need to store a timerID for each. I'd also like to store references to some of the DOM elements that make up the control. I'd like to be able to make a public method that works something like: $("#myCtrl").autoCompleteEx.addItem("1"); But in the implementation of addItem() how can I access the member variables for that particular object like its timerID or whatever? Below is what I have so far... Thanks for any help or suggestions! (function($) { //Attach this new method to jQuery $.fn.autoCompleteEx = function(options) { //Merge Given Options W/ Defaults, But Don't Alter Either var opts = $.extend({}, $.fn.autoCompleteEx.defaults, options); //Iterate over the current set of matched elements return this.each(function() { var acx = $(this); //Get JQuery Version Of Element (Should Be Div) //Give Div Correct Class & Add <ul> w/ input item to it acx.addClass("autoCompleteEx"); acx.html("<ul><li class=\"input\"><input type=\"text\"/></li></ul>"); //Grab Input As JQ Object var input = $("input", acx); //Wireup Div acx.click(function() { input.focus().val( input.val() ); }); //Wireup Input input.keydown(function(e) { var kc = e.keyCode; if(kc == 13) //Enter { } else if(kc == 27) //Esc { } else { //Resize TextArea To Input var width = 50 + (_txtArea.val().length*10); _txtArea.css("width", width+"px"); } }); }); //End Each JQ Element }; //End autoCompleteEx() //Private Functions function junk() { }; //Public Functions $.fn.autoCompleteEx.addItem = function(id,txt) { var x = this; var y = 0; }; //Default Settings $.fn.autoCompleteEx.defaults = { minChars: 2, delay: 300, maxItems: 1 }; //End Of Closure })(jQuery);

    Read the article

  • Flip clock showing time issue (animations involved)

    - by Hwang
    I'm creating a flip clock (the kind you always see at airports), but I can't seem to get the time to show at the correct timing. Only after the flip do you see the number change, but I want it to change before it flips. Currently I'm not sure whether it is an animation problem, or whether I have to do something else in the script. I've uploaded the FLA so that you can have a look at how I set up the flipping animation: http://www.mediafire.com/?nzmymjgtntz
    Below is the AS3 code: package { import flash.display.MovieClip; import flash.events.TimerEvent; import flash.utils.Timer; public class flipClock extends MovieClip { private var clock:clockMC=new clockMC(); //seconds private var secTop1=clock.second.top1.digit; private var secTop2=clock.second.top2.digit; private var secBot1=clock.second.bot1.digit; private var secBot2=clock.second.bot2.digit; private var seconds:Number; private var minutes:Number; private var hours:Number; private var days:Number; public function flipClock():void { decrease(); addChild(clock); } private function decrease():void { var countdownTimer:Timer=new Timer(1000); //Adding an event listener to the timer object countdownTimer.addEventListener(TimerEvent.TIMER,updateTime); //Initializing timer object countdownTimer.start(); } private function updateTime(event:TimerEvent):void { decreasTimerFunction(); updateSecond(); //updateMinute(); } private function updateSecond():void { clock.second.play(); secTop1.text=num2; secTop2.text=num1; if (num1<10) { num1="0"+num1; } if (num2<10) { num2="0"+num2; } if (num1==60) { num1=0; } if (num2==60) { num2=0; } secTop1.text=num1; secTop2.text=num2; //secBot1.text=num1; //secBot2.text=num2; } private function decreasTimerFunction():void { //Create a date object for Christmas Morning var endTime:Date=new Date(2010,4,26,0,0,0,0); //Current date object var now:Date=new Date(); // Set the difference between the two date and times in milliseconds var timeDiff:Number=endTime.getTime()-now.getTime(); seconds=Math.floor(timeDiff/1000); minutes=Math.floor(seconds/60); hours=Math.floor(minutes/60); days=Math.floor(hours/24); // Set the remainder of the division vars above hours%=24; minutes%=60; seconds%=60; } } }

    Read the article

  • subclassing QList and operator+ overloading

    - by Milen
    I would like to be able to add two QList objects. For example: QList<int> b; b.append(10); b.append(20); b.append(30); QList<int> c; c.append(1); c.append(2); c.append(3); QList<int> d; d = b + c; For this reason, I decided to subclass the QList and to overload the operator+. Here is my code: class List : public QList<int> { public: List() : QList<int>() {} // Add QList + QList friend List operator+(const List& a1, const List& a2); }; List operator+(const List& a1, const List& a2) { List myList; myList.append(a1[0] + a2[0]); myList.append(a1[1] + a2[1]); myList.append(a1[2] + a2[2]); return myList; } int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); List b; b.append(10); b.append(20); b.append(30); List c; c.append(1); c.append(2); c.append(3); List d; d = b + c; List::iterator i; for(i = d.begin(); i != d.end(); ++i) qDebug() << *i; return a.exec(); } , the result is correct but I am not sure whether this is a good approach. I would like to ask whether there is better solution?

    Read the article

  • JNI - GetObjectField returns NULL

    - by Daniel
    I'm currently working on Mangler's Android implementation. I have a java class that looks like so: public class VentriloEventData { public short type; public class _pcm { public int length; public short send_type; public int rate; public byte channels; }; _pcm pcm; } The signature for my pcm object: $ javap -s -p VentriloEventData ... org.mangler.VentriloEventData$_pcm pcm; Signature: Lorg/mangler/VentriloEventData$_pcm; I am implementing a native JNI function called getevent, which will write to the fields in an instance of the VentriloEventData class. For what it's worth, it's defined and called in Java like so: public static native int getevent(VentriloEventData data); VentriloEventData data = new VentriloEventData(); getevent(data); And my JNI implementation of getevent: JNIEXPORT jint JNICALL Java_org_mangler_VentriloInterface_getevent(JNIEnv* env, jobject obj, jobject eventdata) { v3_event *ev = v3_get_event(V3_BLOCK); if(ev != NULL) { jclass event_class = (*env)->GetObjectClass(env, eventdata); // Event type. jfieldID type_field = (*env)->GetFieldID(env, event_class, "type", "S"); (*env)->SetShortField( env, eventdata, type_field, 1234 ); // Get PCM class. jfieldID pcm_field = (*env)->GetFieldID(env, event_class, "pcm", "Lorg/mangler/VentriloEventData$_pcm;"); jobject pcm = (*env)->GetObjectField( env, eventdata, pcm_field ); jclass pcm_class = (*env)->GetObjectClass(env, pcm); // Set PCM fields. jfieldID pcm_length_field = (*env)->GetFieldID(env, pcm_class, "length", "I"); (*env)->SetIntField( env, pcm, pcm_length_field, 1337 ); free(ev); } return 0; } The code above works fine for writing into the type field (that is not wrapped by the _pcm class). Once getevent is called, data.type is verified to be 1234 at the java side :) My problem is that the assertion "pcm != NULL" will fail. Note that pcm_field != NULL, which probably indicates that the signature to that field is correct... so there must be something wrong with my call to GetObjectField. It looks fine though if I compare it to the official JNI docs. Been bashing my head on this problem for the past 2 hours and I'm getting a little desperate.. hoping a different perspective will help me out on this one.
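
    One thing worth checking on the Java side: as posted, the pcm field is declared but never assigned, so GetObjectField would correctly return NULL because the field really is null before the native call runs (the other route is to allocate the object from native code with FindClass/GetMethodID/NewObject and then SetObjectField). A sketch of the Java-side fix, assuming nothing else assigns pcm first:

        public class VentriloEventData {
            public short type;

            public class _pcm {
                public int length;
                public short send_type;
                public int rate;
                public byte channels;
            }

            // Initialize the field so the native code finds a live object instead of null.
            _pcm pcm = new _pcm();
        }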

    Read the article
