Search Results

Search found 8258 results on 331 pages for 'debug symbols'.


  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light on why my app is crashing with the following error on iPhone OS 2.2.1?

    dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
    Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
    Expected in: /System/Library/Frameworks/Foundation.framework/Foundation

    I have weak-linked CoreData.framework, and have the Base SDK set to 3.0 and the Deployment Target set to SDK 2.2. The app already uses other 3.0 features when available, and I did not have any problems with those. But apparently the backward-compatibility methods used for the other features do not work with Core Data. The app crashes before the app delegate's applicationDidFinishLaunching gets called. Here's the debugger log:

    [Session started at 2010-05-25 20:17:03 -0400.]
    GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009)
    Copyright 2004 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys001
    Loading program into debugger…
    sharedlibrary apply-load-rules all
    warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
    warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
    Program loaded.
    target remote-mobile /tmp/.XcodeGDBRemote-12038-42
    Switching to remote-macosx protocol
    mem 0x1000 0x3fffffff cache
    mem 0x40000000 0xffffffff none
    mem 0x00000000 0x0fff none
    run
    Running…
    [Switching to thread 10755]
    [Switching to thread 10755]
    Re-enabling shared library breakpoint 1
    Re-enabling shared library breakpoint 2
    Re-enabling shared library breakpoint 3
    Re-enabling shared library breakpoint 4
    Re-enabling shared library breakpoint 5
    (gdb) continue
    warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found).
    dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
    Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
    Expected in: /System/Library/Frameworks/Foundation.framework/Foundation
    (gdb)

    Read the article

  • xcode 5.1: libCordova.a architecture problems

    - by inorganik
    Yesterday (3/10/14), when iOS 7.1 was released, I also upgraded to Xcode 5.1 and found that my PhoneGap/Cordova project would no longer compile to my iPhone 5s. I also upgraded Cordova to the most recent release: v 3.4.0-0.1.3. I have read many different solutions on SO that relate to changing active architectures and building only active architectures, and none of them work. So here's what I've tried and the errors I get. Initially I got the error:

    missing required architecture arm64 in file <long file path omitted> libCordova.a
    Undefined symbols for architecture arm64

    So I tried the following. I selected the CordovaLib sub-project in my project, and in both the project and target, I went to Build Settings under Architectures and made sure that arm64 was not included in any of the Debug or Release architectures. At this time Build Active Architecture Only is set to "Yes". That resulted in the following error:

    file was built for archive which is not the architecture being linked (armv7): <long file path omitted> libCordova.a
    Undefined symbols for architecture armv7

    Setting Build Active Architecture Only to "No", the error again becomes:

    missing required architecture arm64 in file <long file path omitted> libCordova.a
    Undefined symbols for architecture arm64

    I'm not sure what else to try. The project's architecture settings only include the key "Base SDK", which is set to iOS 7.1. The project's target does not have architecture settings. Anyway, I'm fairly certain the problem lies with the embedded CordovaLib sub-project. What can I do to make this thing compile to my device successfully?
    Update: same issue on Apache's Jira: https://issues.apache.org/jira/browse/CB-6223

    Read the article

  • Netbeans GUI building on pre-defined code

    - by deliriumtremens
    I am supposed to edit some code for an assignment; the instructor gave us the framework and wants us to implement code for it. I load the project into NetBeans and can't figure out how I'm supposed to edit the Swing components. I don't see how to edit source vs. design.

    import javax.swing.*;
    import java.util.*;
    import java.io.*;

    public class CurrencyConverterGUI extends javax.swing.JFrame {
        /**************************************************************************************************************
        insert your code here - most of this will be generated by NetBeans, however, you must write code for the
        event listeners and handlers for the two ComboBoxes, the two TextBoxes, and the Button. Please note you must
        also populate the ComboBoxes with the currency symbols (which are contained in the KeyList attribute of
        CurrencyConverter CC)
        ***************************************************************************************************************/
        private CurrencyConverter CC;
        private javax.swing.JTextField Currency1Field;
        private javax.swing.JComboBox Currency1List;
        private javax.swing.JTextField Currency2Field;
        private javax.swing.JComboBox Currency2List;
        private javax.swing.JButton jButton1;
        private javax.swing.JPanel jPanel1;
    }

    class CurrencyConverter {
        private HashMap HM; // contains the Currency symbols and conversion rates
        private ArrayList KeyList; // contains the list of currency symbols

        public CurrencyConverter() {
            /**************************************************
            Instantiate HM and KeyList and load data into them.
            Do this by reading the data from the Rates.txt file
            ***************************************************/
        }

        public double convert(String FromCurrency, String ToCurrency, double amount) {
            /***************************************************************************
            Will return the converted currency value. For example, to convert 100 USD to GBP, FromCurrency is USD,
            ToCurrency is GBP and amount is 100. The rate specified in the file represents the amount of each
            currency which is equivalent to one Euro (EUR). Therefore, 1 Euro is equivalent to 1.35 USD.
            Use the rate specified for USD to convert to equivalent GBP: amount / USD_rate * GBP_rate
            ****************************************************************************/
        }

        public ArrayList getKeys() {
            // return KeyList
        }
    }

    This is what we were given, but I can't do anything with it inside the GUI editor. (I can't even get to the GUI editor.) I have been staring at this for about an hour. Any ideas?

    Read the article

  • Any way to turn off quips in OOWeb?

    - by Misha Koshelev
    http://ooweb.sourceforge.net/tutorial.html Not really a question, but I can't seem to stop writing stuff like this. Maybe someone will find it useful. I know rewriting an HTTP server is not the way to turn off the quips ;) /* Copyright 2010 Misha Koshelev. All Rights Reserved. */ package com.mksoft.common; import java.io.BufferedReader; import java.io.InputStreamReader; import java.io.IOException; import java.io.PrintWriter; import java.io.UnsupportedEncodingException; import java.net.URLDecoder; import java.text.SimpleDateFormat; import java.util.Date; import java.util.LinkedHashMap; import java.net.ServerSocket; import java.net.Socket; /** * Simple HTTP Server. * * @author Misha Koshelev */ public class HttpServer extends Thread { /* * Constants */ /** * 404 Not Found Result */ protected final static String result404NotFound="<html><head><title>404 Not Found</title></head><body bgcolor='#ffffff'><h1>404 Not Found</h1></body></html>"; /* * Variables */ /** * Port on which HTTP server handles requests. */ protected int port; public int getPort() { return port; } public void setPort(int _port) { port=_port; } /* * Constructors */ public HttpServer(int _port) { setPort(_port); } /* * Helpers */ /** * Errors */ protected void error(String message) { System.err.println(message); System.err.flush(); } /** * Debugging */ protected boolean debugOutput=true; protected void debug(String message) { if (debugOutput) { error(message); } } /** * Lock object */ private Object lock=new Object(); /** * Should we quit? */ protected boolean doQuit=false; /** * Are we done? */ protected boolean areWeDone=false; /** * Process POST request headers */ protected String processPostRequest(String url,LinkedHashMap<String,String> headers,String inputLine) { debug("HttpServer.processPostRequest: url=\""+url); if (debugOutput) { for (String key: headers.keySet()) { debug("HttpServer.processPostRequest: headers."+key+"=\""+headers.get(key)+"\""); } } debug("HttpServer.processPostRequest: inputLine=\""+inputLine+"\""); try { inputLine=new URLDecoder().decode(inputLine,"UTF-8"); } catch (UnsupportedEncodingException uee) { uee.printStackTrace(); } String[] keyValues=inputLine.split("&"); LinkedHashMap<String,String> post=new LinkedHashMap<String,String>(); for (int i=0;i<keyValues.length;i++) { String keyValue=keyValues[i]; int equals=keyValue.indexOf('='); String key=keyValue.substring(0,equals); String value=keyValue.substring(equals+1); post.put(key,value); } return post(url,headers,post); } /** * Server loop (here for exception handling purposes) */ protected void serverLoop() throws IOException { /* Start server socket */ ServerSocket serverSocket=null; try { serverSocket=new ServerSocket(getPort()); } catch (IOException ioe) { ioe.printStackTrace(); System.exit(1); } Socket clientSocket=null; while (true) { /* Quit if necessary */ if (doQuit) { break; } /* Accept incoming connections */ try { clientSocket=serverSocket.accept(); } catch (IOException ioe) { ioe.printStackTrace(); System.exit(1); } /* Read request */ BufferedReader in=null; String inputLine=null; String firstLine=null; String blankLine=null; LinkedHashMap<String,String> headers=new LinkedHashMap<String,String>(); try { in=new BufferedReader(new InputStreamReader(clientSocket.getInputStream())); while (true) { if (blankLine==null) { inputLine=in.readLine(); } else { /* POST request, read Content-length bytes */ int contentLength=new Integer(headers.get("Content-Length")).intValue(); StringBuilder sb=new StringBuilder(contentLength); for (int 
i=0;i<contentLength;i++) { sb.append((char)in.read()); } inputLine=sb.toString(); break; } if (firstLine==null) { firstLine=inputLine; } else if (blankLine==null) { if (inputLine.equals("")) { if (firstLine.startsWith("GET ")) { break; } blankLine=inputLine; } else { int colon=inputLine.indexOf(": "); String key=inputLine.substring(0,colon); String value=inputLine.substring(colon+2); headers.put(key,value); } } } } catch (IOException ioe) { ioe.printStackTrace(); } /* Process request */ String result=null; firstLine=firstLine.replaceAll(" HTTP/.*",""); if (firstLine.startsWith("GET ")) { result=get(firstLine.replaceFirst("GET ",""),headers); } else if (firstLine.startsWith("POST ")) { result=processPostRequest(firstLine.replaceFirst("POST ",""),headers,inputLine); } else { error("HttpServer.ServerLoop: Unhandled request \""+firstLine+"\""); } debug("HttpServer.ServerLoop: result=\""+result+"\""); /* Send response */ PrintWriter out=null; try { out=new PrintWriter(clientSocket.getOutputStream(),true); } catch (IOException ioe) { ioe.printStackTrace(); } if (result!=null) { out.println("HTTP/1.1 200 OK"); } else { out.println("HTTP/1.0 404 Not Found"); result=result404NotFound; } Date now=new Date(); out.println("Date: "+new SimpleDateFormat("EEE, d MMM yyyy HH:mm:ss z").format(now)); out.println("Content-Type: text/html; charset=UTF-8"); out.println("Content-Length: "+result.length()); out.println(""); out.print(result); /* Clean up */ out.close(); if (in!=null) { in.close(); } clientSocket.close(); } serverSocket.close(); areWeDone=true; synchronized(lock) { lock.notifyAll(); } } /* * Methods */ /** * Run server on port specified in constructor. */ public void run() { try { serverLoop(); } catch (IOException ioe) { ioe.printStackTrace(); System.exit(1); } } /** * Process GET request (should be overwritten). */ public String get(String url,LinkedHashMap<String,String> headers) { debug("HttpServer.get: url=\""+url+"\""); if (debugOutput) { for (String key: headers.keySet()) { debug("HttpServer.get: headers."+key+"=\""+headers.get(key)+"\""); } } if (url.equals("/")) { return "<html><head><title>HttpServer GET Test Page</title></head>\r\n"+ "<body bgcolor='#ffffff'>\r\n"+ "<center><h1>HttpServer GET Test Page</h1></center>\r\n"+ "<hr />\r\n"+ "<center><table>\r\n"+ "<form method='post' action='/'>\r\n"+ "<tr><td align=right>Test 1:</td>\r\n"+ " <td><input type='text' name='text 1' value='test me !!! !@#$'></td></tr>\r\n"+ "<tr><td align=right>Test 2:</td>\r\n"+ " <td><input type='text' name='text 2' value='type smthng'></td></tr>\r\n"+ "<tr><td>&nbsp;</td>\r\n"+ " <td align=right><input type='submit' value='Submit'></td></tr>\r\n"+ "</form>\r\n"+ "</table></center>\r\n"+ "<hr />\r\n"+ "<center><a href='/quit'>Shutdown Server</a></center>\r\n"+ "</html>"; } else if (url.equals("/quit")) { quit(); return ""; } else { return null; } } /** * Process POST request (should be overwritten). 
*/ public String post(String url,LinkedHashMap<String,String> headers,LinkedHashMap<String,String> post) { debug("HttpServer.post: url=\""+url+"\""); if (debugOutput) { for (String key: headers.keySet()) { debug("HttpServer.post: headers."+key+"=\""+headers.get(key)+"\""); } } if (url.equals("/")) { String result="<html><head><title>HttpServer Post Test Page</title></head>\r\n"+ "<body bgcolor='#ffffff'>\r\n"+ "<center><h1>HttpServer Post Test Page</h1></center>\r\n"+ "<hr />\r\n"+ "<center><table>\r\n"+ "<tr><th>Key</th><th>Value</th></tr>\r\n"; for (String key: post.keySet()) { result+="<tr><td align=right>"+key+"</td><td align=left>"+post.get(key)+"</td></tr>\r\n"; } result+="</table></center>\r\n"+ "</html>"; return result; } else { return null; } } /** * Wait for server to quit. */ public void waitForCompletion() { while (areWeDone==false) { synchronized(lock) { try { lock.wait(); } catch (InterruptedException ie) { } } } } /** * Shutdown server. */ public void quit() { doQuit=true; } }

    Read the article

  • My iPhone app ran fine in the simulator but crashed in an on-device (iPod touch 3.1.2) test, and I got the following

    - by Mickey Shine
    I was running myapp on an iPod touch and I noticed it was missing some libraries. Is that the reason?

    [Session started at 2010-03-19 15:57:04 +0800.]
    GNU gdb 6.3.50-20050815 (Apple version gdb-1128) (Fri Dec 18 10:08:53 UTC 2009)
    Copyright 2004 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys007
    Loading program into debugger…
    Program loaded.
    target remote-mobile /tmp/.XcodeGDBRemote-237-78
    Switching to remote-macosx protocol
    mem 0x1000 0x3fffffff cache
    mem 0x40000000 0xffffffff none
    mem 0x00000000 0x0fff none
    run
    Running…
    [Switching to thread 11779]
    [Switching to thread 11779]
    sharedlibrary apply-load-rules all
    (gdb) continue
    warning: Unable to read symbols for "/Library/MobileSubstrate/MobileSubstrate.dylib" (file not found).
    2010-03-19 15:57:18.892 myapp[2338:207] MS:Notice: Installing: com.yourcompany.myapp [myapp] (478.52)
    2010-03-19 15:57:19.145 myapp[2338:207] MS:Notice: Loading: /Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib
    warning: Unable to read symbols for "/Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib" (file not found).
    warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libsubstrate.dylib" (file not found).
    MS:Warning: message not found [myappAppDelegate applicationWillResignActive:]
    MS:Warning: message not found [myappAppDelegate applicationDidBecomeActive:]
    2010-03-19 15:57:19.550 myapp[2338:207] in FirstViewController
    2010-03-19 15:57:20.344 myapp[2338:207] in load table view
    2010-03-19 15:57:20.478 myapp[2338:207] in loading splash view
    2010-03-19 15:57:22.793 myapp[2338:207] in set interface
    Program received signal: “0”.
    warning: check_safe_call: could not restore current frame

    Read the article

  • string manipulations, oops, how do I replace parts of a string

    - by Joe Gibson
    I am very new to Python. Could someone explain how I can manipulate a string like this? This function receives three inputs:

    complete_fmla: a string with digits and symbols but no hyphens ('-') or spaces.
    partial_fmla: a combination of hyphens and possibly some digits or symbols, where the digits and symbols that are in it (other than hyphens) are in the same position as in the complete_fmla.
    symbol: one character

    The output that should be returned is: If the symbol is not in the complete formula, or if the symbol is already in the partial formula, the function should return the same formula as the input partial_fmla. If the symbol is in the complete_fmla and not in the partial formula, the function should return the partial_fmla with the symbol substituting the hyphens in the positions where the symbol occurs, for all the occurrences of the symbol in the complete_fmla. For example:

    generate_next_fmla (‘abcdeeaa’, ‘- - - - - - - - ’, ‘d’) should return ‘- - - d - - - -’
    generate_next_fmla (‘abcdeeaa’, ‘- - - d - - - - ’, ‘e’) should return ‘- - - d e e - -’
    generate_next_fmla (‘abcdeeaa’, ‘- - - d e e - - ’, ‘a’) should return ‘a - - d e e a a’

    Basically, I'm working with the definition: def generate_next_fmla (complete_fmla, partial_fmla, symbol): Do I turn them into lists and then append? Also, should I find out the index number for the symbol in the complete_fmla so that I know where to append it in the string with hyphens?
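
    A minimal Python sketch of what such a function could look like, based only on the spec above (the generate_next_fmla name and its arguments come from the question; the assumption that partial_fmla has the same length as complete_fmla, with '-' in every hidden position, is mine):

    def generate_next_fmla(complete_fmla, partial_fmla, symbol):
        # Case 1: the symbol is absent from the complete formula, or is already
        # revealed in the partial formula -> return the partial formula unchanged.
        if symbol not in complete_fmla or symbol in partial_fmla:
            return partial_fmla
        # Case 2: reveal the symbol at every position where complete_fmla has it.
        # Strings are immutable, so work on a list of characters and rejoin.
        chars = list(partial_fmla)
        for i, ch in enumerate(complete_fmla):
            if ch == symbol:
                chars[i] = symbol
        return ''.join(chars)

    # Example (spaces omitted, one character per position):
    # generate_next_fmla('abcdeeaa', '--------', 'd')  returns  '---d----'
    # generate_next_fmla('abcdeeaa', '---d----', 'e')  returns  '---dee--'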

    Read the article

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate a non-biased integer from 1-k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time. In particular, I would be interested in a 'generative' algorithm. That is, I would be interested in an algorithm that has O(1) initial overhead, and then produces a new element for each slot in the list, taking O(1) time per slot. If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary): the time complexity is worse than O(n). the algorithm is biased with regards to the final positions of the symbols. the algorithm is not generative. I should add that this problem appears to be the same as the problem of randomly sorting the integers from 1-k, since we can sort the list of integers from 1-k and then for each integer i in the new list, we can produce the symbol ai.
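
    For what it's worth, the classic Fisher–Yates (Knuth) shuffle appears to satisfy these constraints, and writing it as a generator gives the "generative" behavior described: O(1) setup, then one unbiased random integer and O(1) work per slot, with every symbol equally likely to land in any position. A rough Python sketch, where random.randrange stands in for the assumed O(1) unbiased integer source (note it permutes its argument in place, so pass a copy if the original order matters):

    import random

    def random_order(items):
        # Fisher-Yates shuffle as a generator: slot i receives its final value
        # at step i and is never touched again, so each element is produced in
        # O(1) time per slot, and every permutation is equally likely.
        n = len(items)
        for i in range(n):
            j = random.randrange(i, n)       # uniform over the not-yet-placed elements
            items[i], items[j] = items[j], items[i]
            yield items[i]

    symbols = ['a1', 'a2', 'a3', 'a4', 'a5']
    print(list(random_order(symbols)))       # e.g. ['a3', 'a1', 'a5', 'a2', 'a4']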

    Read the article

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.) So I use g++ and ar to create a static library (.a); this file contains all the symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes an unnecessarily long time, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process. When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-library symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking, but use a static library. If my Google searches are correct in suggesting this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.

    Read the article

  • Add something to a symbol in a dynamically loaded swf (ActionScript 3)

    - by user1468671
    I have a program written in Flash Builder with the Flex 4.6 SDK, and a swf movie with some symbols inside. Those symbols move around the stage. What I need is to load that swf in my program, replace one of those symbols with my bitmap, and show the whole swf in flashContainer. Here is my code so far:

    var swfLoader:Loader = new Loader();
    var bgUrl:URLRequest = new URLRequest("testMovie.swf");
    swfLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(event: Event) : void {
        var movie: MovieClip = event.target.content;
        var headClass: Class = movie.loaderInfo.applicationDomain.getDefinition("headSymbol") as Class;
        var head:MovieClip = new headClass() as MovieClip;
        head.addChild(bmp);
        flashContainer.source = movie;
    });

    But the flashContainer still shows the old movie. If I do flashContainer.source = head; then only the head with my bmp appears. I need help. (Sorry for my bad English.)

    Read the article

  • Hiding instantiated templates in shared library created with g++

    - by jchl
    I have a file that contains the following:

    #include <map>
    class A {};
    void doSomething() { std::map<int, A> m; }

    When compiled into a shared library with g++, the library contains dynamic symbols for all the methods of std::map<int, A>. Since A is private to this file, there is no possibility that std::map will be instantiated in any other shared library with the same parameters, so I'd like to make the template instantiation hidden (for some of the reasons described in this document). I thought I should be able to do this by adding an explicit instantiation of the template class and marking it as hidden, like so:

    #include <map>
    class A {};
    template class __attribute__((visibility ("hidden"))) std::map<int, A>;
    void doSomething() { std::map<int, A> m; }

    However, this has no effect: the symbols are still all exported. I even tried compiling with -fvisibility=hidden, but this also has no effect on the visibility of the methods of std::map<int, A> (although it does hide doSomething). The document I linked to above describes the use of export maps to restrict visibility, but that seems very tedious. Is there a way to do what I want in g++ (other than using export maps)? If so, what is it? If not, is there a good reason why these symbols must always be exported, or is this just an omission in g++?

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script runs after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
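
    For illustration only, here is a rough Python sketch of the kind of glue-generation step that prototype describes (the real tool was a perl script whose details are not shown here; the assumption that function symbols appear as "name;" lines and data symbols as the stylized DATA comments above is mine):

    import re

    def generate_glue(mapfile_path, arch, out_c_path):
        """Emit C glue with empty functions and sized data for one mapfile.

        Illustrative assumptions: global function symbols appear one per line as
        "name;", and data symbols are described by stylized comments of the form
        "# DATA(<arch list>) <name> <size>" as in the example above.
        """
        data_sizes = {}   # symbol name -> size in bytes, for the requested arch
        functions = []
        data_re = re.compile(r'^#\s*DATA\(([^)]*)\)\s+(\S+)\s+(0x[0-9a-fA-F]+|\d+)')
        with open(mapfile_path) as f:
            for line in f:
                line = line.strip()
                m = data_re.match(line)
                if m:
                    arches, name, size = m.groups()
                    if arch in [a.strip() for a in arches.split(',')]:
                        data_sizes[name] = int(size, 0)
                elif re.match(r'^\w+;$', line):
                    name = line.rstrip(';')
                    if name not in data_sizes:
                        functions.append(name)
        with open(out_c_path, 'w') as out:
            for name, size in data_sizes.items():
                # Data must be emitted with its real size so copy relocations still work.
                out.write(f'char {name}[{size}];\n')
            for name in functions:
                out.write(f'void {name}(void) {{ }}\n')

    # The generated .c file would then be compiled and linked into a stub shared
    # object (and the glue deleted), roughly as the prototype description says.
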
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
         -ztext -zdefs -Bdirect ...
    real        0.019708910
    user        0.010101680
    sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
    MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here are the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmentation a step further and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will create a dialog like below. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever: Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure and New Relic will now automatically start monitoring the application. Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab on the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today (a minimal sketch of creating a partitioned queue from code also appears just before the summary below). Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available: The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button above I can set up a rule to send myself an email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback. 
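To round out the Service Bus improvements above, below is a minimal sketch of what creating and using a partitioned queue looks like from code instead of from the portal's custom create wizard. Treat it as illustrative rather than definitive: it assumes the Service Bus .NET SDK (the Microsoft.ServiceBus NuGet package, version 2.2 or later, which is where the EnablePartitioning flag appears), and the connection string and queue name shown are placeholders you would replace with your own values.

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class PartitionedQueueSample
    {
        static void Main()
        {
            // Placeholder connection string; use the one from your own Service Bus namespace.
            string connectionString =
                "Endpoint=sb://YOUR-NAMESPACE.servicebus.windows.net/;" +
                "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=YOUR-KEY";

            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            // Describe the queue and opt in to partitioning (the code equivalent of checking
            // the "Enable Partitioning" option in the portal's custom create wizard).
            var queueDescription = new QueueDescription("orders") { EnablePartitioning = true };

            if (!namespaceManager.QueueExists(queueDescription.Path))
            {
                namespaceManager.CreateQueue(queueDescription);
            }

            // Sending works exactly the same way as it does for a non-partitioned queue.
            var client = QueueClient.CreateFromConnectionString(connectionString, queueDescription.Path);
            client.Send(new BrokeredMessage("Hello from a partitioned queue"));
            client.Close();
        }
    }

TopicDescription exposes the same EnablePartitioning property, so the identical pattern applies when creating partitioned topics.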
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Android "java.lang.noclassdeffounderror" exception

    - by wpbnewbie
Hello, I have an Android web service client application. I am trying to use the standard Java WS library support. I have stripped the application down to the minimum, as shown below, to try and isolate the issue. Below is the application: package fau.edu.cse; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class ClassMap extends Activity { TextView displayObject; @Override public void onCreate(Bundle savedInstanceState) { // Build Screen Display String String screenString = "Program Started\n\n"; // Set up the display super.onCreate(savedInstanceState); setContentView(R.layout.main); displayObject = (TextView)findViewById(R.id.TextView01); screenString = screenString + "Inflate Disaplay\n\n"; try { // Set up Soap Service TempConvertSoap service = new TempConvert().getTempConvertSoap(); // Successful Soap Object Build screenString = screenString + "SOAP Object Correctly Build\n\n"; // Display Response displayObject.setText(screenString); } catch(Throwable e){ e.printStackTrace(); displayObject.setText(screenString +"Try Error...\n" + e.toString()); } } } The classes TempConvert and TempConvertSoap are in the package fau.edu.cse. I have included the Java SE javax libraries in the Java build path. When the Android application tries to create the "service" object I get a "java.lang.noclassdeffounderror" exception. The two classes TempConvertSoap and TempConvert are generated by wsimport. I am also using several libraries from javax.jws.. and javax.xml.ws.. Of course the application compiles without error and loads correctly. I know the application is running because my "try/catch" routine is successfully catching the error and printing it out. Here is what the logcat says (notice that it cannot find TempConvert): 06-12 22:58:39.340: WARN/dalvikvm(200): Unable to resolve superclass of Lfau/edu/cse/TempConvert; (53) 06-12 22:58:39.340: WARN/dalvikvm(200): Link of class 'Lfau/edu/cse/TempConvert;' failed 06-12 22:58:39.340: ERROR/dalvikvm(200): Could not find class 'fau.edu.cse.TempConvert', referenced from method fau.edu.cse.ClassMap.onCreate 06-12 22:58:39.340: WARN/dalvikvm(200): VFY: unable to resolve new-instance 21 (Lfau/edu/cse/TempConvert;) in Lfau/edu/cse/ClassMap; 06-12 22:58:39.340: DEBUG/dalvikvm(200): VFY: replacing opcode 0x22 at 0x0027 06-12 22:58:39.340: DEBUG/dalvikvm(200): Making a copy of Lfau/edu/cse/ClassMap;.onCreate code (252 bytes) 06-12 22:58:39.490: DEBUG/dalvikvm(30): GC freed 2 objects / 48 bytes in 273ms 06-12 22:58:39.530: DEBUG/ddm-heap(119): Got feature list request 06-12 22:58:39.620: WARN/Resources(200): Converting to string: TypedValue{t=0x12/d=0x0 a=2 r=0x7f050000} 06-12 22:58:39.620: WARN/System.err(200): java.lang.NoClassDefFoundError: fau.edu.cse.TempConvert 06-12 22:58:39.830: WARN/System.err(200): at fau.edu.cse.ClassMap.onCreate(ClassMap.java:26) 06-12 22:58:39.830: WARN/System.err(200): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.access$2200(ActivityThread.java:119) 06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863) 06-12 22:58:39.880: WARN/System.err(200): at android.os.Handler.dispatchMessage(Handler.java:99) 06-12 22:58:39.880: WARN/System.err(200): at android.os.Looper.loop(Looper.java:123) 06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread.main(ActivityThread.java:4363) 06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invokeNative(Native Method) 06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invoke(Method.java:521) 06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 06-12 22:58:39.880: WARN/System.err(200): at dalvik.system.NativeStart.main(Native Method) ...bla...bla...bla It would be great if someone just had an answer; however, I am looking for debug strategies. I have taken this same application and created a standard Java client application and it works fine -- of course with all of the Android stuff taken out. What would be a good debug strategy? What methods and techniques would you recommend I use to try and isolate the problem? I am thinking that there is some sort of Dalvik VM incompatibility that is causing the TempConvert class not to load. TempConvert is an interface class that references a lot of very tricky web service attributes. Any help with debug strategies would be gladly appreciated. Thanks for the help, Steve

    Read the article

  • Getting Rails Application Running Under IronRuby Rack

    - by NotMyself
    Anyone else playing with ironruby? I have successfully got the IronRuby.Rails.Example project running on my local machine under IIS 5.1. I am now attempting to get my own demo rails site running in the same way. My web.config is slightly different from the example project. I am attempting to only use what was distributed with IronRuby 1.0 and what I have installed using gems. I am getting the following error which doesn't give me a lot to go on: D:/demo/config/boot.rb:66:in `exit': exit (SystemExit) After trying many different things, I think it is having a problem finding gems. I have attached my web config and ironrack.log. Does anyone have pointers on what I am doing wrong? Thanks! <?xml version="1.0"?> <configuration> <configSections> <!-- custom configuration section for DLR hosting --> <section name="microsoft.scripting" type="Microsoft.Scripting.Hosting.Configuration.Section, Microsoft.Scripting" requirePermission="false"/> </configSections> <system.webServer> <handlers> <!-- clear all other handlers first. Don't do this if you have other handlers you want to run --> <clear/> <!-- This hooks up the HttpHandler which will dispatch all requests to Rack --> <add name="IronRuby" path="*" verb="*" type="IronRuby.Rack.HttpHandlerFactory, IronRuby.Rack" resourceType="Unspecified" requireAccess="Read" preCondition="integratedMode"/> </handlers> </system.webServer> <system.web> <!-- make this true if you want to debug any of the DLR code, IronRuby.Rack, or your own managed code --> <compilation debug="true"/> <httpHandlers> <!-- clear all other handlers first. Don't do this if you have other handlers you want to run --> <clear/> <!-- This hooks up the HttpHandler which will dispatch all requests to Rack --> <add path="*" verb="*" type="IronRuby.Rack.HttpHandlerFactory, IronRuby.Rack" /> </httpHandlers> </system.web> <!-- DLR configuration. 
Set debugMode to "true" if you want to debug your dynamic language code with VS --> <microsoft.scripting debugMode="false"> <options> <!-- Library paths: make sure these paths are correct --> <!--<set option="LibraryPaths" value="..\..\..\Languages\Ruby\libs\; ..\..\..\..\External.LCA_RESTRICTED\Languages\Ruby\ruby-1.8.6p368\lib\ruby\site_ruby\1.8\; ..\..\..\..\External.LCA_RESTRICTED\Languages\Ruby\ruby-1.8.6p368\lib\ruby\1.8\"/>--> <set option="LibraryPaths" value="C:\IronRuby\lib\IronRuby;C:\IronRuby\lib\ruby\1.8;C:\IronRuby\lib\ruby\site_ruby;C:\IronRuby\lib\ruby\site_ruby\1.8"/> </options> </microsoft.scripting> <appSettings> <add key="AppRoot" value="."/> <add key="Log" value="ironrack.log"/> <!-- <add key="GemPath" value="..\..\..\..\External.LCA_RESTRICTED\Languages\Ruby\ruby-1.8.6p368\lib\ruby\gems\1.8"/> --> <add key="GemPath" value="C:\IronRuby\Lib\ironruby\gems\1.8\gems"/> <add key="RackEnv" value="production"/> </appSettings> </configuration> === Booting ironruby-rack at 4/15/2010 1:27:12 PM [DEBUG] >>> TOPLEVEL_BINDING = binding => Setting GEM_PATH: 'C:\\IronRuby\\Lib\\ironruby\\gems\\1.8\\gems' => Setting RACK_ENV: 'production' => Loading RubyGems [DEBUG] >>> require 'rubygems' => Loading Rack >=1.0.0 [DEBUG] >>> gem 'rack', '>=1.0.0';require 'rack' => Loaded rack-1.1 => Application root: 'D:\\demo' => Loading Rack application [DEBUG] >>> Rack::Builder.new { ( require "config/environment" ENV['RAILS_ENV'] = 'development' use Rails::Rack::LogTailer use Rails::Rack::Static run ActionController::Dispatcher.new ) }.to_app exit D:/demo/config/boot.rb:66:in `exit': exit (SystemExit) from D:/demo/config/boot.rb:66:in `load_rails_gem' from D:/demo/config/boot.rb:54:in `load_initializer' from D:/demo/config/boot.rb:38:in `run' from D:/demo/config/boot.rb:11:in `boot!' from D:/demo/config/boot.rb:110 from C:/IronRuby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from C:/IronRuby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from D:/demo/config/environment.rb:7 from C:/IronRuby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from C:/IronRuby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from (eval):1 from C:/IronRuby/lib/ironruby/gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `instance_eval' from C:/IronRuby/lib/ironruby/gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `initialize' from (eval):0 from D:\Dev\ironruby\ironruby-ironruby-20bc41b\Merlin\Main\Hosts\IronRuby.Rack\RubyEngine.cs:52:in `Execute' from D:\Dev\ironruby\ironruby-ironruby-20bc41b\Merlin\Main\Hosts\IronRuby.Rack\RubyEngine.cs:45:in `Execute' from D:\Dev\ironruby\ironruby-ironruby-20bc41b\Merlin\Main\Hosts\IronRuby.Rack\Application.cs:68:in `Rackup' from D:\Dev\ironruby\ironruby-ironruby-20bc41b\Merlin\Main\Hosts\IronRuby.Rack\Application.cs:32:in `.ctor' from D:\Dev\ironruby\ironruby-ironruby-20bc41b\Merlin\Main\Hosts\IronRuby.Rack\HttpHandlerFactory.cs:37:in `GetHandler' from System.Web:0:in `MapHttpHandler' from System.Web:0:in `System.Web.HttpApplication.IExecutionStep.Execute' from System.Web:0:in `ExecuteStep' from System.Web:0:in `ResumeSteps' from System.Web:0:in `System.Web.IHttpAsyncHandler.BeginProcessRequest' from System.Web:0:in `ProcessRequestInternal' from System.Web:0:in `ProcessRequestNoDemand' from System.Web:0:in `ProcessRequest'

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line: int rawDepth = depth[offset]; The depth array is created in this line: int[] depth = kinect.getRawDepth(); I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps: // Daniel Shiffman // Kinect Point Cloud example // http://www.shiffman.net // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing import org.openkinect.*; import org.openkinect.processing.*; // Kinect Library object Kinect kinect; float a = 0; // Size of kinect image int w = 640; int h = 480; // We'll use a lookup table so that we don't have to repeat the math over and over float[] depthLookUp = new float[2048]; void setup() { size(800,600,P3D); kinect = new Kinect(this); kinect.start(); kinect.enableDepth(true); // We don't need the grayscale image in this example // so this makes it more efficient kinect.processDepthImage(false); // Lookup table for all possible depth values (0 - 2047) for (int i = 0; i < depthLookUp.length; i++) { depthLookUp[i] = rawDepthToMeters(i); } } void draw() { background(0); fill(255); textMode(SCREEN); text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16); // Get the raw depth as array of integers int[] depth = kinect.getRawDepth(); // We're just going to calculate and draw every 4th pixel (equivalent of 160x120) int skip = 4; // Translate and rotate translate(width/2,height/2,-50); rotateY(a); for(int x=0; x<w; x+=skip) { for(int y=0; y<h; y+=skip) { int offset = x+y*w; // Convert kinect data to world xyz coordinate int rawDepth = depth[offset]; PVector v = depthToWorld(x,y,rawDepth); stroke(255); pushMatrix(); // Scale up by 200 float factor = 200; translate(v.x*factor,v.y*factor,factor-v.z*factor); // Draw a point point(0,0); popMatrix(); } } // Rotate a += 0.015f; } // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html float rawDepthToMeters(int depthValue) { if (depthValue < 2047) { return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); } return 0.0f; } PVector depthToWorld(int x, int y, int depthValue) { final double fx_d = 1.0 / 5.9421434211923247e+02; final double fy_d = 1.0 / 5.9104053696870778e+02; final double cx_d = 3.3930780975300314e+02; final double cy_d = 2.4273913761751615e+02; PVector result = new PVector(); double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue); result.x = (float)((x - cx_d) * depth * fx_d); result.y = (float)((y - cy_d) * depth * fy_d); result.z = (float)(depth); return result; } void stop() { kinect.quit(); super.stop(); } And here are the errors: processing.app.debug.RunnerException: NullPointerException at processing.app.Sketch.placeException(Sketch.java:1543) at processing.app.debug.Runner.findException(Runner.java:583) at processing.app.debug.Runner.reportException(Runner.java:558) at processing.app.debug.Runner.exception(Runner.java:498) at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367) at processing.app.debug.EventThread.handleEvent(EventThread.java:255) at processing.app.debug.EventThread.run(EventThread.java:89) Exception in thread "Animation Thread" java.lang.NullPointerException at 
org.openkinect.processing.Kinect.enableDepth(Kinect.java:70) at PointCloud.setup(PointCloud.java:48) at processing.core.PApplet.handleDraw(PApplet.java:1583) at processing.core.PApplet.run(PApplet.java:1503) at java.lang.Thread.run(Thread.java:637)

    Read the article

  • macport selfupdate not working

    - by eistrati
    macbookpro:~ eistrati$ port -v MacPorts 2.1.2 macbookpro:~ eistrati$ xcodebuild -version Xcode 4.5.2 Build version 4G2008a macbookpro:~ eistrati$ sudo port -d selfupdate DEBUG: Copying /Users/eistrati/Library/Preferences/com.apple.dt.Xcode.plist to /opt/local/var/macports/home/Library/Preferences DEBUG: MacPorts sources location: /opt/local/var/macports/sources/rsync.macports.org/release/tarballs ---> Updating MacPorts base sources using rsync rsync: failed to connect to rsync.macports.org: Connection refused (61) rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-42/rsync/clientserver.c(105) [receiver=2.6.9] Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs Exit code: 10 DEBUG: Error synchronizing MacPorts sources: command execution failed while executing "macports::selfupdate [array get global_options] base_updated" Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed Ideas? Please help!

    Read the article

  • PHP Warning: PHP Startup: Unable to load dynamic library php_mysql.dll, Mac 10.6, Apache 2.2, php 5

    - by munchybunch
I'm trying to use the PHP CLI, and when I enter something like php test.php in the command line it returns: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/extensions/no-debug-non-zts-20090626/php_mysql.dll' - dlopen(/usr/lib/php/extensions/no-debug-non-zts-20090626/php_mysql.dll, 9): image not found in Unknown on line 0 something test.php contains: <?php echo 'something'; ?> I checked /usr/lib/php/extensions/no-debug-non-zts-20090626/, and as expected the .dll file isn't there. I'm a complete beginner when it comes to this - what is happening, and how can I fix it? A search of my system for "php_mysql.dll" reveals nothing. Does it have to do with how I compiled it? I don't have the original version of PHP that came with the Mac, I think - I may have reinstalled it somewhere along the way. Any help would be appreciated!

    Read the article

  • Help with Windows 7 BSOD with windbg minidump !analyze -v results

    - by Kurt Harless
    Hi gang, Windows 7 X64 Ultimate is BSODing occasionally. I suspect an overheating issue or something related to the use of my GTX-295 card that runs very hot. Here is an !analyze -v listing of the most recent minidump. Any and all help greatly appreciated. Kurt Microsoft (R) Windows Debugger Version 6.12.0002.633 AMD64 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [C:\Windows\Minidump\122810-31387-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: SRV*c:\websymbols*http://msdl.microsoft.com/download/symbols Executable search path is: Windows 7 Kernel Version 7600 MP (8 procs) Free x64 Product: WinNt, suite: TerminalServer SingleUserTS Built by: 7600.16617.amd64fre.win7_gdr.100618-1621 Machine Name: Kernel base = 0xfffff800`03065000 PsLoadedModuleList = 0xfffff800`032a2e50 Debug session time: Tue Dec 28 11:04:03.597 2010 (UTC - 7:00) System Uptime: 2 days 2:28:40.407 Loading Kernel Symbols ............................................................... ................................................................ .............................................. Loading User Symbols Loading unloaded module list ................ ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 3B, {c0000005, fffff800033b8873, fffff8800e322dc0, 0} Probably caused by : ntkrnlmp.exe ( nt!RtlCompareUnicodeStrings+c3 ) Followup: MachineOwner --------- 1: kd> !analyze -v ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* SYSTEM_SERVICE_EXCEPTION (3b) An exception happened while executing a system service routine. Arguments: Arg1: 00000000c0000005, Exception code that caused the bugcheck Arg2: fffff800033b8873, Address of the instruction which caused the bugcheck Arg3: fffff8800e322dc0, Address of the context record for the exception that caused the bugcheck Arg4: 0000000000000000, zero. Debugging Details: ------------------ EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s. FAULTING_IP: nt!RtlCompareUnicodeStrings+c3 fffff800`033b8873 488b7c2418 mov rdi,qword ptr [rsp+18h] CONTEXT: fffff8800e322dc0 -- (.cxr 0xfffff8800e322dc0) rax=0000000000000041 rbx=fffff8a015a3c1c0 rcx=0000000000000024 rdx=0000000000000003 rsi=fffff8800e3238b0 rdi=0000000000000009 rip=fffff800033b8873 rsp=fffff8800e323798 rbp=000000000000000d r8=fffff8a018cb374c r9=000000200a98fdc4 r10=fffff8800e323988 r11=fffff8800e32398e r12=fffff8a018127c18 r13=fffff8800126e550 r14=0000000000000001 r15=fffffa800abe1570 iopl=0 nv up ei pl nz ac po nc cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00010216 nt!RtlCompareUnicodeStrings+0xc3: fffff800`033b8873 488b7c2418 mov rdi,qword ptr [rsp+18h] ss:0018:fffff880`0e3237b0=???????????????? 
Resetting default scope CUSTOMER_CRASH_COUNT: 1 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT BUGCHECK_STR: 0x3B PROCESS_NAME: ccSvcHst.exe CURRENT_IRQL: 0 LAST_CONTROL_TRANSFER: from 0000000000000000 to fffff800033b8873 STACK_TEXT: fffff880`0e323798 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!RtlCompareUnicodeStrings+0xc3 FOLLOWUP_IP: nt!RtlCompareUnicodeStrings+c3 fffff800`033b8873 488b7c2418 mov rdi,qword ptr [rsp+18h] SYMBOL_STACK_INDEX: 0 SYMBOL_NAME: nt!RtlCompareUnicodeStrings+c3 FOLLOWUP_NAME: MachineOwner MODULE_NAME: nt IMAGE_NAME: ntkrnlmp.exe DEBUG_FLR_IMAGE_TIMESTAMP: 4c1c44a9 STACK_COMMAND: .cxr 0xfffff8800e322dc0 ; kb FAILURE_BUCKET_ID: X64_0x3B_nt!RtlCompareUnicodeStrings+c3 BUCKET_ID: X64_0x3B_nt!RtlCompareUnicodeStrings+c3 Followup: MachineOwner ---------

    Read the article

  • Can I set up a 'Deny from x' that overrides other confs for debugging?

    - by Nick T
    I'm currently working on developing/deploying a Django application on Apache and am often fiddling with the debug settings which alter how Django accepts connections, ignoring or using ALLOWED_HOSTS. If DEBUG is False, it uses them, which is handy to keep up some walls around my construction site. However, the useful info it spits out when True is quite nice. I'm currently just using an SSH tunnel and just allowing localhost when DEBUG is False, but how can I keep everyone out without relying on the aforementioned ALLOWED_HOSTS? Editing the httpd.conf file which is in source control is a bit irritating; I've accidentally committed a few botched configs.

    Read the article

  • Compiling zip component for PHP 5.2.11 in MAMP PRO

    - by Zlatoroh
Hello. I installed MAMP PRO on my Macbook Pro (10.6) some time ago. Now I would like to use zip functions in PHP. I found that I must add zip.so to my extension folder and edited php.ini. On my computer I have two different versions of PHP: one in the MAMP folder and the other in user/lib, which was pre-installed on my system. Now I wish to compile my zip library for the MAMP version. I got the zip sources for my version of PHP, then in the terminal called /Applications/MAMP/bin/php5/bin/phpize so it uses the MAMP PHP version, ran ./configure and make, then moved the compiled zip.so to extensions/no-debug-non-zts-20060613. When MAMP is launched it returns this error: [11-Apr-2010 16:33:27] PHP Warning: PHP Startup: zip: Unable to initialize module Module compiled with module API=20090626, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 Can somebody explain to me how to do this the right way?

    Read the article

  • Visual Studio 2008 Debugger really slow with SSD

    - by Doug
Hey guys, my setup is a laptop with Win 7 64bit, VS2008 SP1 on an Intel X25-M, 4GB RAM with the page file turned off (no need) and a 2.2GHz Core Duo. What happens is weird: when I build my project and attach the debugger, the symbols load REALLY slowly... like 1 every 5 secs. Sometimes the symbols will fail to load at all. This is driving me crazy as this was a freshly installed Win 7 box with a default VS installation, working on ASP.NET web applications... I've never had to use symbol servers or any of that jazz so I'm quite frustrated. With this SSD it should breathe fire as it does with loading and doing everything else. Am I missing something?

    Read the article

  • Linux says a kernel module has an unknown symbol, but another loaded module provides it.

    - by raldi
I'm trying to install a driver for a USB DAQ box, which, annoyingly, I have to compile myself. I believe I've succeeded -- I have two .ko files: -rw-r--r-- 1 root root 45271 2010-03-18 21:24 advdrv_core.ko -rw-r--r-- 1 root root 24312 2010-03-18 21:24 usb4761.ko I was able to run insmod on the first without incident, but when I try on the second, I get a flood of messages: kernel: [686782.106547] usb4761: no symbol version for adv_process_info_check_event kernel: [686782.106555] usb4761: Unknown symbol adv_process_info_check_event kernel: [686782.106691] usb4761: no symbol version for advdrv_unregister_driver kernel: [686782.106695] usb4761: Unknown symbol advdrv_unregister_driver However, advdrv_core.ko provides these symbols. My kernel sure seems to have them in memory: # cat /proc/kallsyms | grep advdrv_unregister_driver f8d88504 r __ksymtab_advdrv_unregister_driver [advdrv_core] f8d888d2 r __kstrtab_advdrv_unregister_driver [advdrv_core] f8d885a4 r __kcrctab_advdrv_unregister_driver [advdrv_core] 086eb8fb a __crc_advdrv_unregister_driver [advdrv_core] f8d86e90 t advdrv_unregister_driver [advdrv_core] Why does my insmod claim they're unknown symbols?

    Read the article
