Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

  • iPhone Debugger Message -- Weird

    - by Bill Shiff
    Hello, I have an iPhone app that I've been working on and have recently upgraded my version of Xcode. Since the upgrade, I can build and debug in the iPhone Simulator just fine, but when I try to debug on an attached device I get the following messages from Xcode 4:

        GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Fri Oct 22 04:12:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "dyld" (prefix __dyld_) (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MessageUI.framework/MessageUI (file not found).
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MapKit.framework/MapKit (file not found).
        warning: Unable to read symbols from "MapKit" (not yet mapped into memory).
        warning: Unable to read symbols from "Foundation" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/UIKit.framework/UIKit (file not found).
        warning: Unable to read symbols from "UIKit" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/CoreGraphics.framework/CoreGraphics (file not found).
        warning: Unable to read symbols from "CoreGraphics" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        warning: Unable to read symbols from "QuartzCore" (not yet mapped into memory).
        warning: Unable to read symbols from "libgcc_s.1.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libSystem.B.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libobjc.A.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreFoundation" (not yet mapped into memory).
        target remote-mobile /tmp/.XcodeGDBRemote-3836-28
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        [Switching to thread 11523]
        [Switching to thread 11523]
        gdb stack crawl at point of internal error:
        0   gdb-arm-apple-darwin   0x0013216e   internal_vproblem + 316

  • Active record taking Date.today as yesterday

    - by Mongus Pong
    I have a strange one. I am doing something like:

        tip = find(:first, :conditions => ["last_shown = ? or last_shown is null", Date.today])

    And then a little later on I do:

        tip.last_shown = Date.today
        tip.save

    When I look at the output of these queries, ActiveRecord is doing the first query with today's date, as I would expect. However, on the second query, ActiveRecord is setting the last_shown date to yesterday's date. Why on earth would it do this? I have config.time_zone = 'UTC' in my environment.rb. I can use Time.now.utc.to_date instead of Date.today, but it makes no difference.

  • call multiple c++ functions in python using threads

    - by wiso
    Suppose I have a C(++) function taking an integer, bound to (C)Python with the Python API so that I can call it from Python:

        import c_module
        c_module.f(10)

    Now I want to parallelize it. The problem is: how does the GIL work in this case? Suppose I have a queue of numbers to be processed and some workers (threading.Thread) running in parallel, each of them calling c_module.f(number), where number is taken from the queue. The difference from the usual case, where the GIL locks the interpreter, is that here you don't need the interpreter to evaluate c_module.f, because it is compiled. So the question is: in this case, is the processing really parallel?
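
    Worth making the core issue concrete: whether the workers really run in parallel depends on whether the C function releases the GIL while it executes (a C-API extension must do so explicitly with Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS; ctypes releases it automatically around ordinary CDLL calls). Below is a minimal, hypothetical Python sketch; c_module does not exist here, so time.sleep stands in for a C-level call that does release the GIL:

        import threading
        import time

        def worker(number):
            # Stand-in for c_module.f(number): a C call that releases the
            # GIL for its duration, so the other threads keep running.
            time.sleep(1.0)

        start = time.perf_counter()
        threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # ~1 s elapsed: the four waits overlapped. A C function that never
        # releases the GIL would serialize the workers (~4 s instead).
        print(f"elapsed: {time.perf_counter() - start:.2f}s")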

  • Apache modules: C module vs mod_wsgi python module - Performance

    - by Gopal
    Hi, a client of ours is asking us to implement a module in C in the Apache web server, for performance reasons. This module should handle RESTful URIs, access a database, and return results in JSON format. Many people here have recommended Python with mod_wsgi instead, but for simplicity-of-programming reasons. Can anyone tell me whether there is a significant difference in performance between the mod_wsgi Python solution and the Apache + C module? Any anecdotes? Pointers to a study posted online?
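
    For a sense of what is being compared, here is a minimal sketch of the mod_wsgi side, assuming a single RESTful endpoint; the routing and the hard-coded payload are placeholders for the real URI parsing and database query:

        import json

        def application(environ, start_response):
            # mod_wsgi invokes this WSGI entry point for each request.
            path = environ.get('PATH_INFO', '/')
            body = json.dumps({'path': path, 'ok': True}).encode('utf-8')
            start_response('200 OK', [
                ('Content-Type', 'application/json'),
                ('Content-Length', str(len(body))),
            ])
            return [body]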

  • Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?

    - by Royi Namir
    (The following items have different goals, but I'm interested in knowing how they are "paused".) Questions:

    • Thread.sleep: does it impact performance on a system? Does it tie up a thread while it waits?
    • What about Monitor.Wait? What is the difference in the way they "wait"? Do they tie up a thread while they wait?
    • What about RegisteredWaitHandle? This method accepts a delegate that is executed when a wait handle is signaled. While it's waiting, it doesn't tie up a thread.

    So some threads are paused and can be woken by a delegate, while others just wait? Or spin? Can someone please make things clearer?

    Edit: http://www.albahari.com/threading/part2.aspx

  • MySQL JDBC date issues with database server in different timezone

    - by Somatik
    I have a database server in the "Europe/London" time zone and my web server in "Europe/Brussels". Since it is summer time now, my application server has a 2-hour difference. I created a test to reproduce my issue:

        Query q = JPA.em().createNativeQuery("SELECT UNIX_TIMESTAMP(startDateTime) FROM `Event` WHERE `id` = 574");
        BigInteger unix = (BigInteger) q.getSingleResult();
        System.out.println(unix + "000 UNIX_TIMESTAMP to BigInteger");

        Query q2 = JPA.em().createNativeQuery("SELECT startDateTime FROM `Event` WHERE `id` = 574");
        Timestamp o = (Timestamp) q2.getSingleResult();
        System.out.println(o.getTime() + " Timestamp");

    The startDateTime column is defined as 'datetime' (the same issue occurs with 'timestamp'). The output I am getting is this:

        1340291591000 UNIX_TIMESTAMP to BigInteger
        1340284391000 Timestamp

    Reading Java date objects results in a shift in time zone; how do I fix this? I would expect the JDBC driver to just set the "unix time" value it gets from the server in the Date object. (A proper solution should work with any time zone combination, not only for a DB in GMT.)
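
    A quick arithmetic check on the two printed values (sketched in Python) shows the gap is exactly two hours, matching the offset described above and consistent with the driver rendering the DATETIME through the JVM's default time zone rather than passing the raw value through:

        unix_ms = 1340291591000   # SELECT UNIX_TIMESTAMP(startDateTime)
        jdbc_ms = 1340284391000   # Timestamp.getTime() via the driver

        # 7,200,000 ms = 7200 s = exactly 2 hours
        print((unix_ms - jdbc_ms) / 3_600_000, "hours")  # prints: 2.0 hours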

  • How to implement a syndication receiver? (multi-client / single server)

    - by LeonixSolutions
    I have to come up with a system architecture. A few hundred remote devices will communicate over the internet with a central server, which will receive the data and store it in a database. I could:

    • write my own TCP/IP-based protocol
    • use SOAP
    • use AJAX
    • use RSS
    • anything else?

    This is currently seen as one-way (telemetry, as opposed to SCADA). Would it make a difference if we made it bi-directional? There are no plans to do so, but Murphy's law makes me wary of a uni-directional solution (on the data plane; I imagine that the control plane is bi-directional in all solutions?). I hope that this is not too subjective. I would like a solution which is quick and easy to implement and for others to support, and where the general "communications pipeline" from remote devices to database server can be re-used as the core of future projects. I have a strong background in telecomms protocols, in C/C++ and PHP.
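
    As one concrete shape for the devices-to-server pipeline, a minimal sketch in Python of an HTTP POST receiver; the port, endpoint, and payload shape are invented for illustration, and the database write is stubbed out:

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class TelemetryHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Each remote device POSTs one JSON record at a time.
                length = int(self.headers.get('Content-Length', 0))
                record = json.loads(self.rfile.read(length))
                print('received:', record)  # real system: INSERT into the database
                self.send_response(204)     # acknowledge with an empty response
                self.end_headers()

        if __name__ == '__main__':
            HTTPServer(('0.0.0.0', 8080), TelemetryHandler).serve_forever()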

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1 but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts), and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following:

    • +, -, +=, <, > and >= are the only heavily used operators.
    • Use of setEpsilon() and getInt() is extremely rare.
    • * is also rare, and does not even need to consider the epsilon values at all.

    Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-() const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();  // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) || (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) || (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            return (_value >= rhs._value) || (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return _value; }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    • 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24-28 bits of _value are used, and up to 20 bits of _eps.
    • I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    • Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    • Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    • Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!

  • Detailed change history of .NET framework versions?

    - by gehho
    I am looking for a detailed change history (including bugfixes) of all .NET Framework versions, especially the changes between 2.0 and 3.5 SP1. I know that something like that exists for v2.0 and v1.1, and for v4.0. However, I could not find a history for v3.0 and v3.5/SP1. Background (slightly edited): we are having issues somewhere between the deserialization of some XML data (using XmlReader) and the display of the data in the UI. These problems appear when we use .NET 3.5 SP1, but we did not have them with v2.0. Now I would like to know whether this is related to some change/bugfix in the framework, or to some other difference. Unfortunately, we do not have the source code of that piece of software, and most of the software is written in native C++/MFC, except for the deserialization part, which is .NET.

  • Can I always debug multiple instances of the same object of type thread with GDB?

    - by yan bellavance
    The program runs fine, but when I put a breakpoint, a segmentation fault is generated. Is it me or GDB? At run time this never happens, and if I instantiate only one object then there are no problems. I'm using Qt Creator on Ubuntu x86_64 (Karmic Koala).

    Update 1: I have made a small program containing a simplified version of that class. You can download it at: example program. Simply put a breakpoint on the first line of the function called drawChart() and step into it to see the segfault happen.

    Update 2: This is another small program, but it is practically the same as the Mandelbrot example and the segfault is still happening. You can diff it with the Mandelbrot example to see the small difference: almost the same as mandlebrot example program

  • Troubles with DateTime and PHP 5.2

    - by Nate Wagar
    I'm attempting to use the PHP DateTime class on a server with PHP 5.2.6, with a test server running PHP 5.3. Here is the code:

        <?php
        try {
            $dt = new DateTime('2009-10-08');
            $dt->setDate(2009, 10, 8);
            print_r($dt);
        } catch (Exception $e) {
            echo $e->getMessage();
        }

    On the test server, things work flawlessly; this is what prints:

        DateTime Object
        (
            [date] => 2009-10-08 00:00:00
            [timezone_type] => 3
            [timezone] => America/New_York
        )

    On the server I need to use it on, however, this is what prints:

        DateTime Object
        (
        )

    Removing the setDate makes no difference whatsoever. Any ideas why this might be happening? Thanks!

  • Visual Studio 2005 - VC++ compiler C1001 on Windows 7

    - by Fritz H
    When I try to build a simple "Hello World" C++ app on Windows 7 Beta using Visual Studio 2005 (VC++ 2005), I get a rather generic error C1001 (internal compiler error). The compiler seems to just crash, and Windows pops up its (un)helpful "This program has stopped working" dialog. The file it complains about is mcp1.cpp. Has anyone come across this before? Cheers, Fritz

    Edit: The code is:

        #include <iostream>

        int main(int argc, char** argv)
        {
            std::cout << "Hello!";
            return 0;
        }

    Edit 2: I have installed SP1, as well as SP1 for Vista. VS popped up a warning saying it needs SP1 for Vista, but installing it makes no difference. Any ideas about what I can possibly do to fix this?

  • Passing null as a param for replace() in JavaScript behaving weird?

    - by Babiker
    I have the following jQuery:

        $("#textArea").keyup(function(){
            var textAreaValue = $("textArea");
            if(!textArea.value.indexOf("some string")){
                textArea.value = textArea.value.replace("some string", null);
                alert("It was there!");
            }
        });

    Is it normal for element.value.replace("some string", null) to replace "some string" with "null" as a string? And if normal, can you please explain why? I have tested it with element.value.replace("some string", ""), and that works fine, so what would be the difference between null and ""? Thanks in advance.

  • Massive speed diff in upgrade to Java 7

    - by Brett Rigby
    We use Java within our build process, as it is used to resolve/publish our dependencies via Ivy. We've had no problem with it for 2 years, until we tried to upgrade from Java 6 Update 26 to Java 7 Update 7: a build on a local developer PC (Windows XP) now takes 2 hours to complete instead of 10 minutes!! Nothing else has changed on the PC, making it the absolute target for our concerns. Does anyone know of any reason why version 7 of Java would make such a speed difference?

  • Differences between query using LINQ and IQueryable interface directly?

    - by JohnMetta
    Using Entity Framework 4, and given:

        ObjectSet<Thing> AllThings = Context.CreateObjectSet<Thing>();

        public IQueryable<Thing> ByNameA(String name)
        {
            IQueryable<Thing> query = from o in AllThings
                                      where o.Name == name
                                      select o;
            return query;
        }

        public IQueryable<Thing> ByNameB(String name)
        {
            return AllThings.Where((o) => o.Name == name);
        }

    Both return IQueryable<Thing> instances, and thus the query doesn't hit the server until something like ToList() is called, right? Is it purely readability that is the difference, or are they using fundamentally different technologies on the back-end?

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations, such as Add, Remove, RemoveLast, etc., are supported, and giving the relative performance. It would be particularly interesting for the various generic classes, and even better if it showed, e.g., whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat-sheet for the abstract data structures, comparing linked lists, hash tables, etc. Thanks!

  • iPhone development profile expired - I tried everything and yes, I read the docs

    - by theiphoneguy
    I really combed this site and others. I read and re-read the related links here and the Apple docs. I'm sorry, but either I am obviously missing something right under my nose, or this Apple profile/certificate stuff is a bit convoluted. Here it is: I have a product in the App Store. I have updated it several times and users like it. My development profile recently expired, just when I was improving the app for its next release. I can run the app in the Simulator. I can compile and put the distribution build on my iPhone just fine. I went to the Apple portal and renewed the development profile. I downloaded it and installed it in Xcode. I see it in the Organizer window. I see it on my iPhone. I CANNOT put the debug build on my iPhone to debug or run with Instruments. The message is that either there is not a valid signed profile or it is untrusted.

    I subsequently tried to download and install the certificate to my Mac's keychain. Still no success. I checked the code-signing section of the project settings, and also for the target and the root. All appears to indicate that it is using the expected development profile for debug. Yes, I had deleted the old profile from my iPhone and from the Organizer. I cleaned the Xcode cache and all targets. I have done all of this several times, and in varying sequences, to try to cover every possibility.

    I am ready to do anything to be able to debug with Instruments in order to check for leaks or high memory usage. Even though the distribution compile runs fine on my iPhone and plays well with other running processes, I will not release anything without a leaks/memory test. Any ideas will be appreciated. If I missed something obvious, please forgive me; it was not due to just posting a question without searching for similar postings. Thanks!

  • Configuring DSVL to show camera images from Logitech9000 webcam

    - by curryage
    I am trying to view the camera feed from a Logitech 9000 camera using DSVL (DirectShow Video Library, http://sourceforge.net/projects/dsvideolib/). The XML file currently looks as below:

        <?xml version="1.0" encoding="UTF-8"?>
        <dsvl_input xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:noNamespaceSchemaLocation="C:\Documents and Settings\Thomas\My Documents\projects\ARToolKit &amp; DSVideoLib\ARVideoLib\DsVideoLib\DsVideoLib.xsd">
          <camera show_format_dialog="false" friendly_name="Logitech Webcam Pro 9000">
            <pixel_format>
              <RGB24 flip_v="false"/>
            </pixel_format>
          </camera>
        </dsvl_input>

    However, the image that comes up looks vertically inverted. I tried changing the flip_v value to true in the config above, but it did not make any difference. Any suggestions?

  • Crystal Report Function for converting Seconds to Timespan format.

    - by arakkots
    I have a Crystal Report which shows an agent's activities throughout the day with a pie chart. In the details section it displays:

    • Activity [string]
    • StartedAt [DateTime]
    • EndedAt [DateTime]
    • Duration [the difference between EndedAt and StartedAt in seconds; integer]

    Report data is grouped by Activity and summarized by Duration. Currently Duration is shown in seconds, but I need to format it as 02h:30m:22s:15ms. For that I wrote a custom function in the Crystal Reports Formula Workshop editor as follows, but it looks like the syntax is not right (error message on the keyword Long: "A variable type (for example, 'String') is missing."). Can someone help?

        Function GetTimeSpanString(seconds as Long)
            Dim ts As TimeSpan = TimeSpan.FromSeconds(seconds);
            GetTimeSpan = string.Format("{0:D2}h:{1:D2}m:{2:D2}s:{3:D3}ms", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds)
        End Function
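
    Independent of the Crystal syntax question, the conversion itself is simple integer arithmetic; here is the same seconds-to-hours/minutes/seconds/milliseconds split sketched in Python (note that a {3:D3} format pads milliseconds to three digits, so 15 ms renders as 015ms):

        def format_duration(seconds):
            ms = int(round(seconds * 1000))
            hours, ms = divmod(ms, 3_600_000)   # 3,600,000 ms per hour
            minutes, ms = divmod(ms, 60_000)    # 60,000 ms per minute
            secs, ms = divmod(ms, 1_000)        # 1,000 ms per second
            return f"{hours:02d}h:{minutes:02d}m:{secs:02d}s:{ms:03d}ms"

        print(format_duration(9022.015))  # 02h:30m:22s:015ms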

  • How do I find useful code previously deleted but still stored in source control?

    - by sharptooth
    Whenever someone asks what to do with code that is no longer needed, the answer is usually "delete it; restore it from source control if you need it back". Now, how do I find that piece of source code in the repository? Let's limit the scope to SVN for simplicity; I suspect that using any other source control system will not make much difference in this respect (correct me if I'm wrong). If I delete that code and commit the changes, it will no longer be in the latest revision. How do I find it without exporting each revision and searching it thoroughly (which is nearly impossible)?

  • C2360 compiler error on TFS build, but not on desktop

    - by pdmaguire
    A C++ code snippet similar to the one below caused our TFS build to fail with a C2360 compiler error:

        switch (i)
        {
        case 0 :
            for each (int n in a)
                System::Console::WriteLine(n.ToString());
            break;
        case 1 :
            System::Console::WriteLine("n is not in scope here");
            break;
        }

    This is fixed by using {} brackets within the body of case 0, as below:

        switch (i)
        {
        case 0 :
            {
                for each (int n in a)
                    System::Console::WriteLine(n.ToString());
            }
            break;
        case 1 :
            System::Console::WriteLine("n is not in scope here");
            break;
        }

    The developer had successfully compiled the code on their desktop before committing the changes. A cursory look at the versions of things like compilers and Visual Studio on the server and the desktop suggests they are the same. The source code is the same, obviously. What is the difference between a desktop build and a TFS build that would smother a compiler error like this?

  • Determine when using the VC90 compiler in VS2010 instead of VS2008?

    - by Dan
    Is there a (Microsoft-specific) CPP macro to determine when I'm using the VC9 compiler in Visual Studio 2010, as opposed to Visual Studio 2008? _MSC_VER returns the compiler version, so with the VS2010 multi-targeting feature I'll get the same result as with VS2008. The reason for wanting to know the difference is that I created a new VS2010 project which contains code removed from a larger project. I just left the VS2008 stuff "as is", since we're moving away from VS2008 "soon" anyway and I didn't want to go through the hassle of creating a vcproj file along with the new vcxproj. For now, I've just defined my own macro to indicate whether the code is compiled into its own DLL or not; it works just fine, but it would be nice if there were something slightly more elegant.

  • Is block style really this important?

    - by Jack Roscoe
    I just watched a video of Douglas Crockford's presentation about his 2009 book JavaScript: The Good Parts. In the video, he explains that the following block is dangerous because it produces silent errors:

        return
        {
            ok: false
        };

    And that it should actually be written like this (emphasising that although seemingly identical, the behavioural difference is crucial):

        return {
            ok: false
        };

    You can see his comments around 32 minutes into the video here: http://www.youtube.com/watch?v=hQVTIJBZook&feature=player_embedded#!&start=1920

    I have not heard this before, and was wondering if this rule still applies or if this requirement in syntax has been overcome by JavaScript developments since this statement was made. I found this very interesting, as I have NOT been writing my code this way, and wanted to check that this information was not out of date.

  • "IronPython + .NET" vs "Python + PyQt". Which one is better for Windows App development?

    - by Patrick.L
    Hi, I'm new to Python. I would like to develop Windows GUI applications using Python. After some research, I found that I have two options:

    1. IronPython + .NET Framework
    2. Python + PyQt

    May I know which one is better for Windows application development? Which option has more features (e.g. database support)? Other than the .NET support, is there any big difference between IronPython and Python? Which one is a better choice for me? Thank you. Patrick.L
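
    For a feel of what the second option looks like, here is a minimal PyQt program; this sketch assumes PyQt5 is installed (the question predates it, and PyQt4 differs mainly in module layout):

        import sys
        from PyQt5.QtWidgets import QApplication, QLabel

        # The smallest possible PyQt GUI: one top-level widget.
        app = QApplication(sys.argv)
        label = QLabel('Hello from Python + PyQt')
        label.show()
        sys.exit(app.exec_())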

  • Ajax Post Request Returns JSON but Deferred Fails

    - by imrane
    I have a cross-domain POST request to http://api.local/user/auth (my API endpoint). I allow cross-domain requests in my API with CORS. I'm using Chrome, if that makes a difference. I get a valid server JSON response with a 200 status code, but I am using deferreds from a Backbone model like so:

        @model.save()
            .fail(-> console.log 'sync fail')
            .success -> console.log 'sync OK'

    And I consistently get 'sync fail' instead of the expected 'sync OK'. Thoughts?
