Search Results

Search found 9182 results on 368 pages for 'import hooks'.

  • Reflecting over classes in .NET produces methods only differing by a modifier

    - by mrjoltcola
    I'm a bit boggled by something; I hope the CLR gearheads can help. Apparently my gears aren't big enough. I have a reflector utility that generates assembly stubs for Cola for .NET, and I find classes have methods that only differ by a modifier, such as virtual. Example below, from Oracle.DataAccess.dll, method GetType(): class OracleTypeException : System.SystemException { virtual string ToString (); virtual System.Exception GetBaseException (); virtual void set_Source (string value); virtual void GetObjectData (System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingContext context); virtual System.Type GetType (); // here virtual bool Equals (object obj); virtual int32 GetHashCode (); System.Type GetType (); // and here } What is this? I have not been able to reproduce this with C#, and it causes trouble for Cola as it thinks GetType() is a redefinition, since the signature is identical. My method reflector starts like this: static void DisplayMethod(MethodInfo m) { if ( // Filter out things Cola cannot yet import, like generics, pointers, etc. m.IsGenericMethodDefinition || m.ContainsGenericParameters || m.ReturnType.IsGenericType || !m.ReturnType.IsPublic || m.ReturnType.IsArray || m.ReturnType.IsPointer || m.ReturnType.IsByRef || m.ReturnType.IsMarshalByRef || m.ReturnType.IsImport ) return; // generate stub signature // [snipped] }

  • Using XPath on String in Android (JAVA)

    - by Rav
    I am looking for some examples of using XPath in Android, or for anyone who can share their experiences. I have been struggling to make head or tail of this problem :-( I have a string that contains a standard XML file. I believe I need to convert that into an XML document. I have found this code which I think will do the trick: public static Document stringToDom(String xmlSource) throws SAXException, ParserConfigurationException, IOException { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = factory.newDocumentBuilder(); return builder.parse(new InputSource(new StringReader(xmlSource))); } Next steps: Assuming the code above is OK, I need to apply XPath to get values from cat: "/animal/mammal/feline/cat" I have looked at the dev doc here: http://developer.android.com/reference/javax/xml/xpath/XPath.html and also looked online, but I am not sure where to start! I have tried to use the following code: XPathFactory xPathFactory = XPathFactory.newInstance(); // To get an instance of the XPathFactory object itself. XPath xPath = xPathFactory.newXPath(); // Create an instance of XPath from the factory class. String expression = "SomeXPathExpression"; XPathExpression xPathExpression = xPath.compile(expression); // Compile the expression to get an XPathExpression object. Object result = xPathExpression.evaluate(xmlDocument); // Evaluate the expression against the XML Document to get the result. But I get "Cannot be resolved". Eclipse doesn't seem to be able to fix this import. I tried manually entering: javax.xml.xpath.XPath But this did not work. Does anyone know any good source code that I can utilise for the Android 1.5 platform?

  • Making pygtksourceview work in windows

    - by Dani
    So, I'm trying to get the gtksourceview Python bindings to work under Windows (I'm developing a cross-platform GTK application that shows code, so gtksourceview seemed like a natural choice). I have pygtk installed and working (I followed the instructions in http://www.pygtk.org/downloads.html). I tried the instructions in http://projects.gnome.org/gtksourceview/ for gtksourceview. Here is what I did: Downloaded and extracted the latest gtksourceview Windows binaries from: http://ftp.gnome.org/pub/gnome/binaries/win32/gtksourceview/2.10/gtksourceview-2.10.0.zip The website said gtksourceview needs libxml, so I downloaded and extracted the latest libxml Windows binaries from: http://xmlsoft.org/sources/win32/libxml2-2.7.6.win32.zip Added the folders containing the dll files to the PATH (on my computer they were c:\opt\gtksourceview\bin; C:\opt\libxml2-2.7.6.win32\bin) Installed pygtksourceview with the Windows installer: http://ftp.gnome.org/pub/gnome/binaries/win32/pygtksourceview/2.10/pygtksourceview-2.10.0.win32-py2.6.exe Renamed the file libxml2.dll to libxml2-2.dll (after running depends on the gtksourceview dll) Now, the gtksourceview widget seems to work, until I try to set the code's language. When I do that, Python crashes. Here is how I crash it in the console (the simplest way I could come up with): >>>import gtksourceview2 >>>lang = gtksourceview2.language_manager_get_default().get_language('cpp') >>>lang.get_style_ids() I'm hoping I'm not the first person to use gtksourceview in Python on Windows. Any ideas what I should try?

  • Loading Native library to external Package in Eclipse not working. Is it a bug?

    - by TacB0sS
    I was about to report a bug to Eclipse, but I thought I would give this a chance here first: If I add an external package, the application cannot find the referenced native library, except in the case specified below: If my workspace consists of a single project, and I import an external package 'EX_package.jar' from a folder outside of the project folder, I can assign a folder to the native library location via: mouse over package - right click - properties - Native Library - Enter your folder. This does not work. At runtime the application does not load the library; System.mapLibraryName(Path) also does not work. Furthermore, if I create a User Library, add the package to it and define a folder for the native library, it still does not work. If it works for you then I have a major bug, since it does not work on my computer. I tested this in every combination I could think of, including adding the path to the Windows PATH parameter, and so many other ways I can't even start to remember; nothing worked. I played with this for hours and had a colleague try to assist me, but we both came up empty. Furthermore, if I have a main project that is dependent on a few other projects in my workspace, and they all need to use the same 'EX_package.jar', I MUST supply a HARD COPY INTO EACH OF THEM; it will ONLY (I can't stress the ONLYNESS, I got freaked out by this) work if I have a hard copy of the package in ALL of the project folders that the main project has a dependency on, and ONLY if I configure the Native path in each of them!! This also didn't do the trick. Please tell me there is a solution to this, this drives me nuts... Thanks, Adam Zehavi.

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable say z1 which is a 1-D array containing floating point numbers to a text file so that I can import into Matlab or GNUPlot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is: void create_out_file(const char *file_name, const long double *z1, size_t z_size){ FILE *out; size_t i; if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){ fprintf(stderr, "***> Open error on output file %s", file_name); exit(-1); } for(i = 0; i < z_size; i++) fprintf(out, "%.16Le\n", z1[i]); fclose(out); } I have three questions: Are binary files really more compact than text files?; If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file say dodo.dat should contain the values of array z1 with one floating number per line. I have %.16Le up in my code but I think that %.15Le is right as I have 15 precision digits with long double. I have put a dot (.) in the width position as I believe that this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example with %.16Le, I can have an output like 1.0047914240730432e-002 which gives me 16 precision digits and the width of the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice? Thanks a lot...

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX) but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced: The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set. At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24 While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1: #requires -version 2.0 param([string[]]$PreCacheList) if ((!$SCRIPT:helpCache) -or $RefreshCache) { $SCRIPT:helpCache = @{} } This is pretty straightforward code; if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that calling $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error because the variable hasn't been defined yet. Ideally, we could use a Test-Variable cmdlet but such a thing doesn't exist! I thought about looking in the variable: provider but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?

  • heterogeneous comparisons in python3

    - by Matt Anderson
    I'm 99+% still using python 2.x, but I'm trying to think ahead to the day when I switch. So, I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in python3.x -- instead of some consistent (but arbitrary) result we raise TypeError instead. I see the logic in that, and even mostly think it's a good thing. Consistency and refusing to guess is a virtue. But what if you essentially want the python2.x behavior? What's the best way to go about getting it? For fun (more or less) I was recently implementing a Skip List, a data structure that keeps its elements sorted. I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure. The python2.x way of comparing makes this really convenient -- you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't. Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separates from your strs. I came up with the following: import operator def hetero_sort_key(obj): cls = type(obj) return (cls.__name__+'_'+cls.__module__, obj) def make_hetero_comparitor(fn): def comparator(a, b): try: return fn(a, b) except TypeError: return fn(hetero_sort_key(a), hetero_sort_key(b)) return comparator hetero_lt = make_hetero_comparitor(operator.lt) hetero_gt = make_hetero_comparitor(operator.gt) hetero_le = make_hetero_comparitor(operator.le) hetero_ge = make_hetero_comparitor(operator.ge) Is there a better way? I suspect one could construct a corner case that this would screw up -- a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted). I don't know how likely it is that you'd run into this in practice.
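
    A minimal sketch (my own illustration, assuming Python 3) of how the same key idea gives sorted() a consistent ordering over mixed types without touching the comparison operators:

        def hetero_sort_key(obj):
            cls = type(obj)
            return (cls.__name__ + '_' + cls.__module__, obj)

        items = [3, 'apple', 1.5, 'banana', 2]
        try:
            sorted(items)                                # raises TypeError on Python 3
        except TypeError as exc:
            print('mixed-type sort fails:', exc)

        # group by type name first, then compare within each group
        print(sorted(items, key=hetero_sort_key))        # [1.5, 2, 3, 'apple', 'banana']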

  • Comparing dicts and updating a list of results

    - by lmnt
    Hello, I have a list of dicts and I want to compare each dict in that list with a dict in a resulting list, add it to the result list if it's not there, and if it's there, update a counter associated with that dict. At first I wanted to use the solution described at http://stackoverflow.com/questions/1692388/python-list-of-dict-if-exists-increment-a-dict-value-if-not-append-a-new-dict but I got an error because one dict cannot be used as a key in another dict. So the data structure I opted for is a list where each entry is a dict and an int: r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]] The original dataset (that should be compared to the resulting dataset) is a list of dicts: d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'} d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'} d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} o = [d1, d2, d3, d4] The result should be: r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2], [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1], [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]] What is the best way to accomplish this? I have a few code examples, but none is really good and most are not working correctly. Thanks for any input on this! UPDATE: The final code, after Tamás's comments, is: from collections import namedtuple, defaultdict DataClass = namedtuple("DataClass", "src dst cmd") d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2') d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1') d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') ds = d1, d2, d3, d4 r = defaultdict(int) for d in ds: r[d] += 1 print "list to compare" for d in ds: print d print "result after merge" for k, v in r.iteritems(): print("%s: %s" % (k, v))
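
    For what it's worth, a small alternative sketch of my own (reusing the plain dicts d1-d4 and the list o from above): a dict is not hashable, but a frozenset of its items is, so it can key a Counter directly without switching to namedtuples:

        from collections import Counter

        # count identical dicts by hashing their (key, value) pairs
        counts = Counter(frozenset(d.items()) for d in o)
        r = [[dict(items), count] for items, count in counts.items()]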

  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, where the server and all clients are being run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7. Below is my very simple server example, which simply returns 'Hello world' for any GET request. from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer class MyHandler(BaseHTTPRequestHandler): def do_GET(self): print("Just received a GET request") self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() self.wfile.write('Hello world') return def log_request(self, code=None, size=None): print('Request') def log_message(self, format, *args): print('Message') if __name__ == "__main__": try: server = HTTPServer(('localhost', 80), MyHandler) print('Started http server') server.serve_forever() except KeyboardInterrupt: print('^C received, shutting down server') server.socket.close() Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem. Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
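
    One low-cost thing to try, sketched below under an assumption about the cause rather than a confirmed diagnosis: on some Windows machines 'localhost' is resolved via IPv6 first and only falls back after a delay, so addressing the server by the loopback IP on both sides avoids the hostname lookup entirely.

        from BaseHTTPServer import HTTPServer

        # server side: reuse MyHandler from the question, but bind to the IPv4 loopback
        server = HTTPServer(('127.0.0.1', 80), MyHandler)
        server.serve_forever()

        # client side (run in a separate process): request by IP as well, so no name lookup happens
        import urllib2
        print urllib2.urlopen('http://127.0.0.1/').read()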

  • Type or namespace could not be found

    - by Jason Shultz
    I'm using LINQ to SQL to connect my database to my home page. I created my datacontext (named businessModel.dbml). In it I have two tables named Category and Business. In the home controller I reference the model and attempt to return the table to the view: var dataContext = new businessModelDataContext(); var business = from b in dataContext.Businesses select b; ViewData["WelcomeMessage"] = "Welcome to Jerome, Arizona!"; ViewData["MottoMessage"] = "Largest Ghost Town in America!"; return View(business); and in the view I have this: <%@ Import Namespace="WelcomeToJerome.Models" %> and <% foreach (business b in (IEnumerable)ViewData.Model) { %> <li><%= b.Title %></li> <% } %> Yet, in the view business is cursed with the red underline and says that The type or namespace name 'business' could not be found (are you missing a using directive or an assembly reference?) What am I doing wrong? This has had me stumped all afternoon. Link to all the code in pastebin: http://pastebin.com/es4RnS2q

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use; which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this? Methodology 1: - Determine the number of times each word occurs (using the library Collections) - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists. - Therefore the speed will be determined by the length of the variable delims from collections import Counter delims = ['there','there\'s','theres','they','they\'re'] # the above will end up being a really long list! word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] return word_freq.most_common() Methodology 2: - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. Eg; For the string "There's a test" check each word to see if it contains "there" and delete it if it does. delims = ['this','at','them'] # words that can't be plural partial_delims = ['there','they',] # words that could occur in many forms word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] # really slow for delim in set(partial_delims): for word in word_freq: if word.find(delim) != -1: del word_freq[word] return word_freq.most_common()
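
    A small sketch of the first approach with the common words held in a set (the stop-word list below is a stand-in, not the real long list), which filters once while counting instead of deleting keys afterwards:

        from collections import Counter

        STOPWORDS = {'the', 'at', 'there', "there's", 'theres', 'their', 'they', "they're"}

        def tag_counts(recipe_str):
            # strip simple punctuation, then drop stop words before counting
            words = (w.strip('.,!?;:') for w in recipe_str.lower().split())
            return Counter(w for w in words if w not in STOPWORDS).most_common()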

  • Windows cmd encoding change causes Python crash.

    - by Alex
    First I change the Windows CMD encoding to utf-8 and run the Python interpreter: chcp 65001 python Then I try to print a unicode string inside it, and when I do this Python crashes in a peculiar way (I just get a cmd prompt in the same window). >>> import sys >>> print u'ëèæîð'.encode(sys.stdin.encoding) Any ideas why it happens and how to make it work? UPD: sys.stdin.encoding returns 'cp65001' UPD2: It just came to me that the issue might be connected with the fact that utf-8 uses a multi-byte character set (kcwu made a good point on that). I tried running the whole example with 'windows-1250' and got 'ëeaî?'. Windows-1250 uses a single-byte character set, so it worked for those characters it understands. However I still have no idea how to make 'utf-8' work here. UPD3: Oh, I found out it is a known Python bug. I guess what happens is that Python copies the cmd encoding as 'cp65001' to sys.stdin.encoding and tries to apply it to all the input. Since it fails to understand 'cp65001' it crashes on any input that contains non-ascii characters.

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning: typedef union { int intval; float floatval; } IntFloat; unsigned int float_as_int(float fval) { IntFloat intf; intf.floatval = fval; return intf.intval; } // Stores an int of data in whatever storage we're using void StoreInt(unsigned int data, unsigned int address); void StoreFPVal(float data, unsigned int address) { StoreInt(float_as_int(data), address); } I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array: import struct hex(struct.unpack("I", struct.pack("f", float_value))[0]) ...and so my array of defaults has these indecipherable values like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 0x3C83126F, // Some default float value, 0.005 } (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005 } ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
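
    Since old C offers no obvious portable way to reinterpret a float literal as an integer constant, one workaround (a sketch of my own that simply extends the Python snippet already shown, with a made-up default table and header name) is to generate the initializer as a build step:

        import struct

        def float_word(value):
            # reinterpret the 32-bit float's bits as an unsigned int, as in the question
            return '0x%08X' % struct.unpack('I', struct.pack('f', value))[0]

        defaults = [('int', 1), ('float', 0.005)]          # hypothetical default table
        with open('defaults_generated.h', 'w') as header:
            header.write('const unsigned int DEFAULTS[] = {\n')
            for kind, value in defaults:
                word = '0x%08X' % value if kind == 'int' else float_word(value)
                header.write('    %s, // %r\n' % (word, value))
            header.write('};\n')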

  • Generating SQL change scripts in SSMS 2008

    - by Munish Goyal
    I have gone through many related SO threads and got some basic info. I have already generated the DB diagram. After that, I am unable to find a button/option to generate SQL scripts (create) for all the tables in the diagram. The "Generate script" button is disabled, even on clicking the table in the diagram. However, I enabled the auto-generate option in tools-designer. But what to do with previous diagrams? I just want an easy way to auto-generate such scripts (create/alter), and it would be good if I could get auto-generated stored procs for inserts/selects/updates etc. EDIT: I could generate scripts for DB objects. Now: 1. How to import a DB diagram from another DB? 2. How to generate (and manage their changes, integrated with VS source control) routine stored procs like insert, update and select? OK, let me ask another way: can experts give guidance on the usual flow of creating/altering tables (across releases), creating stored procs (are stored procs the best way to go?) and their change management using SSMS design tools and minimal effort?

  • How to compare two variables from a Java class in Jess and execute a rule?

    - by user3417084
    I'm a beginner in Jess. I'm trying to compare two variables from a Java class in Jess and trying to execute a rule. I have imported cTNumber and measuredCurrent (both are integers) from a Java class called CurrentSignal. Similarly, I imported vTNumber and measuredVoltage from a Java class DERSignal. Now I want to make a rule such that if cTNumber is equal to vTNumber then multiply measuredCurrent and measuredVoltage (both are doubles) to calculate power. I'm trying it this way.... (import signals.*) (deftemplate CurrentSignal (declare (from-class CurrentSignal))) (deftemplate DERSignal (declare (from-class DERSignal))) (defglobal ?*CTnumber* = 0) (defglobal ?*VTnumber* = 0) (defglobal ?*VTnumberDER* = 0) (defglobal ?*measuredCurrent* = 0) (defglobal ?*measuredVoltage* = 0) (defglobal ?*measuredVoltageDER* = 0) (defrule Get-CT-Number (CurrentSignal (cTNumber ?m)) (CurrentSignal (measuredCurrent ?c)) => (bind ?*measuredCurrent* ?c) (printout t "Measured Current : " ?*measuredCurrent*" Amps"crlf) (bind ?*CTnumber* ?m) (printout t ?*CTnumber* crlf) ) (defrule Get-DER-Number (DERSignal (vTNumber ?o)) (DERSignal (measuredVoltage ?V)) => (bind ?*measuredVoltageDER* ?V) (printout t "Measured Voltage : " ?*measuredVoltageDER* " V" crlf) (bind ?*VTnumberDER* ?o) (printout t ?*VTnumberDER* crlf) ) (defrule Power-Calculation-DER-signal "Power calculation of DER Bay" (test (= ?*CTnumber* ?*VTnumberDER* )) => (printout t "Total Generation : " (* ?*measuredCurrent* ?*measuredVoltageDER*) crlf) ) But the Total Generation is showing 0, although when I calculate it in Java it gives a number. Can anyone please help me solve this problem? Thank you.

  • wxpython: adding panel to wx.Frame disables/conflicts with wx.Frame's OnPaint?!

    - by sdaau
    Hi all, I just encountered this strange situation: I found an example where wx.Frame's OnPaint is overridden and a circle is drawn. Funnily, as soon as I add even a single panel to the frame, the circle is not drawn anymore - in fact, OnPaint is not called at all anymore! Can anyone explain to me whether this is the expected behavior, and how to correctly handle a wx.Frame's OnPaint if the wx.Frame has child panels? A small code example is below. Thanks in advance for any answers, Cheers! The code: #!/usr/bin/env python # http://www.linuxquestions.org/questions/programming-9/wxwidgets-wxpython-drawing-problems-with-onpaint-event-703946/ import wx class MainWindow(wx.Frame): def __init__(self, parent, title, size=wx.DefaultSize): wx.Frame.__init__(self, parent, wx.ID_ANY, title, wx.DefaultPosition, size) self.circles = list() self.displaceX = 30 self.displaceY = 30 circlePos = (self.displaceX, self.displaceY) self.circles.append(circlePos) ## uncommenting either only first, or both of ## the commands below, causes OnPaint *not* to be called anymore! #~ self.panel = wx.Panel(self, wx.ID_ANY) #~ self.mpanelA = wx.Panel(self.panel, -1, size=(200,50)) self.Bind(wx.EVT_PAINT, self.OnPaint) def OnPaint(self, e): print "OnPaint called" dc = wx.PaintDC(self) dc.SetPen(wx.Pen(wx.BLUE)) dc.SetBrush(wx.Brush(wx.BLUE)) # Go through the list of circles to draw all of them for circle in self.circles: dc.DrawCircle(circle[0], circle[1], 10) def main(): app = wx.App() win = MainWindow(None, "Draw delayed circles", size=(620,460)) win.Show() app.MainLoop() if __name__ == "__main__": main()
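
    For what it's worth, a sketch of one workaround, based on my assumption that the child panel covers the frame's client area and receives the paint events itself: bind EVT_PAINT to the panel and draw on it rather than on the frame.

        import wx

        class MainWindow(wx.Frame):
            def __init__(self, parent, title, size=wx.DefaultSize):
                wx.Frame.__init__(self, parent, wx.ID_ANY, title, wx.DefaultPosition, size)
                self.circles = [(30, 30)]
                self.panel = wx.Panel(self, wx.ID_ANY)
                self.panel.Bind(wx.EVT_PAINT, self.OnPaint)   # bind to the panel, not the frame

            def OnPaint(self, e):
                dc = wx.PaintDC(self.panel)                   # paint DC for the panel
                dc.SetPen(wx.Pen(wx.BLUE))
                dc.SetBrush(wx.Brush(wx.BLUE))
                for x, y in self.circles:
                    dc.DrawCircle(x, y, 10)

        if __name__ == '__main__':
            app = wx.App()
            MainWindow(None, 'Draw circles on the panel', size=(620, 460)).Show()
            app.MainLoop()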

  • preparing SMS app for Android KitKat

    - by Pmsc
    In agreement with the recent post from Android Developers (http://android-developers.blogspot.pt/2013/10/getting-your-sms-apps-ready-for-kitkat.html), I was trying to prepare my app for the new Android version, but encountered a problem with the part where they suggest creating a dialog to let the user set the app as the default application to handle SMSs: Android Developers Post public class ComposeSmsActivity extends Activity { @Override protected void onResume() { super.onResume(); final String myPackageName = getPackageName(); if (!Telephony.Sms.getDefaultSmsPackage(this).equals(myPackageName)) { // App is not default. // Show the "not currently set as the default SMS app" interface View viewGroup = findViewById(R.id.not_default_app); viewGroup.setVisibility(View.VISIBLE); // Set up a button that allows the user to change the default SMS app Button button = (Button) findViewById(R.id.change_default_app); button.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { Intent intent = new Intent(Telephony.Sms.Intents.ACTION_CHANGE_DEFAULT); intent.putExtra(Telephony.Sms.Intents.EXTRA_PACKAGE_NAME, myPackageName); startActivity(intent); } }); } else { // App is the default. // Hide the "not currently set as the default SMS app" interface View viewGroup = findViewById(R.id.not_default_app); viewGroup.setVisibility(View.GONE); } } } The code itself is pretty much straightforward, but I'm unable to access Telephony.Sms.getDefaultSmsPackage because it says that Telephony cannot be resolved, and I can't find any import or declaration that would fix that. Can anyone please help? Thanks in advance.

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.

  • strange chi-square result using scikit_learn with feature matrix

    - by user963386
    I am using scikit learn to calculate the basic chi-square statistics(sklearn.feature_selection.chi2(X, y)): def chi_square(feat,target): """ """ from sklearn.feature_selection import chi2 ch,pval = chi2(feat,target) return ch,pval chisq,p = chi_square(feat_mat,target_sc) print(chisq) print("**********************") print(p) I have 1500 samples,45 features,4 classes. The input is a feature matrix with 1500x45 and a target array with 1500 components. The feature matrix is not sparse. When I run the program and I print the arrray "chisq" with 45 components, I can see that the component 13 has a negative value and p = 1. How is it possible? Or what does it mean or what is the big mistake that I am doing? I am attaching the printouts of chisq and p: [ 9.17099260e-01 3.77439701e+00 5.35004211e+01 2.17843312e+03 4.27047184e+04 2.23204883e+01 6.49985540e-01 2.02132664e-01 1.57324454e-03 2.16322638e-01 1.85592258e+00 5.70455805e+00 1.34911126e-02 -1.71834753e+01 1.05112366e+00 3.07383691e-01 5.55694752e-02 7.52801686e-01 9.74807972e-01 9.30619466e-02 4.52669897e-02 1.08348058e-01 9.88146259e-03 2.26292358e-01 5.08579194e-02 4.46232554e-02 1.22740419e-02 6.84545170e-02 6.71339545e-03 1.33252061e-02 1.69296016e-02 3.81318236e-02 4.74945604e-02 1.59313146e-01 9.73037448e-03 9.95771327e-03 6.93777954e-02 3.87738690e-02 1.53693158e-01 9.24603716e-04 1.22473138e-01 2.73347277e-01 1.69060817e-02 1.10868365e-02 8.62029628e+00] ********************** [ 8.21299526e-01 2.86878266e-01 1.43400668e-11 0.00000000e+00 0.00000000e+00 5.59436980e-05 8.84899894e-01 9.77244281e-01 9.99983411e-01 9.74912223e-01 6.02841813e-01 1.26903019e-01 9.99584918e-01 1.00000000e+00 7.88884155e-01 9.58633878e-01 9.96573548e-01 8.60719653e-01 8.07347364e-01 9.92656816e-01 9.97473024e-01 9.90817144e-01 9.99739526e-01 9.73237195e-01 9.96995722e-01 9.97526259e-01 9.99639669e-01 9.95333185e-01 9.99853998e-01 9.99592531e-01 9.99417113e-01 9.98042114e-01 9.97286030e-01 9.83873717e-01 9.99745466e-01 9.99736512e-01 9.95239765e-01 9.97992843e-01 9.84693908e-01 9.99992525e-01 9.89010468e-01 9.64960636e-01 9.99418323e-01 9.99690553e-01 3.47893682e-02]
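
    One thing worth checking, as an assumption on my part rather than a verified diagnosis: scikit-learn's chi2 is defined for non-negative feature values such as counts or frequencies, so a negative statistic usually points at negative entries in the feature matrix. A sketch of the check and a rescaling workaround, reusing feat_mat and target_sc from above:

        import numpy as np
        from sklearn.preprocessing import MinMaxScaler
        from sklearn.feature_selection import chi2

        print((np.asarray(feat_mat) < 0).any())                  # True would explain the negative chi2
        feat_scaled = MinMaxScaler().fit_transform(feat_mat)     # map each feature into [0, 1]
        chisq, p = chi2(feat_scaled, target_sc)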

  • how to update UI controls in cocoa application from background thread

    - by AmitSri
    The following is the .m code: #import "ThreadLabAppDelegate.h" @interface ThreadLabAppDelegate() - (void)processStart; - (void)processCompleted; @end @implementation ThreadLabAppDelegate @synthesize isProcessStarted; - (void)awakeFromNib { //Set levelindicator's maximum value [levelIndicator setMaxValue:1000]; } - (void)dealloc { //Never called while debugging ???? [super dealloc]; } - (IBAction)startProcess:(id)sender { //Set process flag to true self.isProcessStarted=YES; //Start Animation [spinIndicator startAnimation:nil]; //perform selector in background thread [self performSelectorInBackground:@selector(processStart) withObject:nil]; } - (IBAction)stopProcess:(id)sender { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } - (void)processStart { int counter = 0; while (counter != 1000) { NSLog(@"Counter : %d",counter); //Sleep background thread to reduce CPU usage [NSThread sleepForTimeInterval:0.01]; //set the level indicator value to showing progress [levelIndicator setIntValue:counter]; //increment counter counter++; } //Notify main thread for process completed [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO]; } - (void)processCompleted { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } @end I need to clarify the following things as per the above code. How do I interrupt/cancel the processStart while loop from a UI control? I also need to show the counter value in the main UI, which I suppose I should do with performSelectorOnMainThread and by passing an argument. Just want to know, is there any other way to do that? When my app starts it shows 1 thread in Activity Monitor, but when I start processStart() in a background thread it creates two new threads, which makes 3 threads in total until the loop finishes. After completing the loop I can see 2 threads. So my understanding is that 2 threads were created when I called performSelectorInBackground, but what about the third thread - where did it get created? What if the thread count increases on every call of the selector? How do I control that, or is my implementation bad for this kind of requirement? Thanks

  • Access 2007 and Special/Unicode Characters in SQL

    - by blockcipher
    I have a small Access 2007 database into which I need to import data from an existing spreadsheet and put it into our new relational model. For the most part this seems to work pretty well. Part of the process is attempting to see if a record already exists in a target table using SQL. For example, if I extract book information out of the current row in the spreadsheet, it may contain a title and abstract. I use SQL to get the ID of a matching record, if it exists. This works fine except when I have data that's in a non-English language. In this case, it seems that there is some punctuation that is causing me problems. At least I think it's punctuation, as I do have some fields that do not have punctuation and are non-English that do not give me any problems. Is there a built-in function that can escape these characters? Currently I have a small function that will escape the single quote character, but that isn't enough. Or, is there a list of Unicode characters that can interfere with how SQL wants data quoted? Thanks in advance.

  • Testing ActionFilterAttributes with MSpec

    - by Tomas Lycken
    I'm currently trying to grasp MSpec, mainly to learn new ways of (T/B)DD to be able to make an educated decision on which technology to use. Previously, I've mostly (read: only) used the built-in MSTest framework with Moq, so BDD is quite new for me. I'm writing an ASP.NET MVC app, and I want to implement PRG. Last time I did this, I used action filters to export and import ModelState via TempData, so that I could return a RedirectResult and the validation errors would still be there when the user got the view. I tested that scenario by verifying two things: a) That the ExportModelStateAttribute I had written was applied (among tests for my controller) b) That the attribute worked (among tests for action filter attributes) However, in BDD I've understood I should be even more concerned with behavior, and even less with implementation. This means I should probably just verify that the model state is in tempdata when the action has finished executing - not necessarily that it's done via an attribute. To further complicate things, attributes are not run when calling the action directly in the test, so I can't just call the action and see if the job's been done. How should I spec/test this in MSpec?

  • Downloading a picture via urllib and python.

    - by Mike
    So I'm trying to make a Python script that downloads webcomics and puts them in a folder on my desktop. I've found a few programs on here that do something similar, but nothing quite like what I need. The one that I found most similar is right here (http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images). I tried using this code: >>> import urllib >>> image = urllib.URLopener() >>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg","00000001.jpg") ('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>) I then searched my computer for a file "00000001.jpg", but all I found was the cached picture of it. I'm not even sure it saved the file to my computer. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at '00000000.jpg', and increment the '00000000' up to the largest number, which I would have to somehow determine. Any recommendations on the best way to do this or how to download the file correctly? Thanks!
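
    A small sketch of my own (assuming Python 2 as in the question, with a made-up range of ten comics) using urllib.urlretrieve, which writes the file straight to disk, plus zero-padded formatting instead of string splitting:

        import urllib

        for n in range(1, 11):                     # hypothetical: first ten comics
            name = '%08d.jpg' % n                  # 00000001.jpg, 00000002.jpg, ...
            urllib.urlretrieve('http://www.gunnerkrigg.com//comics/' + name, name)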

  • django: How to make one form from multiple models containing foreignkeys

    - by Tim
    I am trying to make a form on one page that uses multiple models. The models reference each other. I am having trouble getting the form to validate because I can't figure out how to get the ids of two of the models used in the form into the form to validate it. I used a hidden key in the template but I can't figure out how to make it work in the views. My code is below: views: def the_view(request, a_id,): if request.method == 'POST': b_form= BForm(request.POST) c_form =CForm(request.POST) print "post" if b_form.is_valid() and c_form.is_valid(): print "valid" b_form.save() c_form.save() return HttpResponseRedirect(reverse('myproj.pro.views.this_page')) else: b_form= BForm() c_form = CForm() b_ide = B.objects.get(pk=request.b_id) id_of_a = A.objects.get(pk=a_id) return render_to_response('myproj/a/c.html', {'b_form':b_form, 'c_form':c_form, 'id_of_a':id_of_a, 'b_id':b_ide }) models class A(models.Model): name = models.CharField(max_length=256, null=True, blank=True) classe = models.CharField(max_length=256, null=True, blank=True) def __str__(self): return self.name class B(models.Model): aid = models.ForeignKey(A, null=True, blank=True) number = models.IntegerField(max_length=1000) other_number = models.IntegerField(max_length=1000) class C(models.Model): bid = models.ForeignKey(B, null=False, blank=False) field_name = models.CharField(max_length=15) field_value = models.CharField(max_length=256, null=True, blank=True) forms from mappamundi.mappa.models import A, B, C class BForm(forms.ModelForm): class Meta: model = B exclude = ('aid',) class CForm(forms.ModelForm): class Meta: model = C exclude = ('bid',) B has a foreign key reference to A, and C has a foreign key reference to B. Since the models are related, I want to have the forms for them on one page, with one submit button. Since I need to fill out fields for the forms for B and C and I don't want to select the id of B from a drop-down list, I need to somehow get the id of B into the form so it will validate. I have a hidden field in the template; I just need to figure out how to do it in the views.
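
    For what it's worth, a sketch of one common pattern (my own, reusing the models and forms above): save the parent object first, then attach its instance to the child using commit=False, so no hidden id fields are needed in the template.

        if request.method == 'POST':
            b_form = BForm(request.POST)
            c_form = CForm(request.POST)
            if b_form.is_valid() and c_form.is_valid():
                b = b_form.save(commit=False)
                b.aid = A.objects.get(pk=a_id)   # the foreign key excluded from BForm
                b.save()
                c = c_form.save(commit=False)
                c.bid = b                        # link C to the B just created
                c.save()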

  • MS Query returns data inside itself but does not export it to Excel

    - by kappa
    Hi, I'm having a strange problem with Excel and MS Query: I'm using MS Query to run a T-SQL query against a Microsoft SQL Server 2000 and return the results to Excel. To do this, I open Excel, go to Data - Import external data - New database query, select my data source, paste the SQL script in MS Query and click File - Return data to Microsoft Office Excel, leaving all the query options at their defaults. This works fine for many other Excel files, but this time, although MS Query shows the correct data when I paste the SQL script, after returning to Excel all I get is the query name in the upper left cell, with no data returned. I fear the cause could be the SQL script, as it contains some advanced features like union all, UDFs and variables. Here's the script: declare @date smalldatetime set @date = dateadd(day, datediff(day, 0, getdate()), 0) select [date], sum([hours]) as [hours] from ( select [date], [hours] from [server].[dbo].[udf] (84, '2010-01-01', @date) union all select [date], [hours] from [server].[dbo].[udf] (89, '2010-01-01', @date) union all select [date], [hours] from [server].[dbo].[udf] (93, '2010-01-01', @date) ) as [a] group by [date] order by [date] asc I can't get rid of the UDFs, as they internally do advanced groupings involving cursors and temporary tables, nor can I remove the variable, as the UDF won't accept dateadd(day, datediff(day, 0, getdate()), 0) as a parameter. Any ideas? Thanks in advance, Andrea.
