Search Results

Search found 10844 results on 434 pages for 'device mapper'.

Page 118 of 434

  • How do I use the gravity vector to correctly transform scene for augmented reality?

    - by gpdawson
    I'm trying to figure out how to get an OpenGL-specified object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer, and heading from the compass). The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180 degrees as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real-life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project. Can anyone provide sample code that shows how to adjust correctly for the device orientation (i.e. gravity vector), or fix the GLGravity example so that it doesn't include spurious heading changes?

        //Clear matrix to be used to rotate from the current referential to one based on the gravity vector
        bzero(matrix, sizeof(matrix));
        matrix[3][3] = 1.0;

        //Setup first matrix column as gravity vector
        matrix[0][0] = accel[0] / length;
        matrix[0][1] = accel[1] / length;
        matrix[0][2] = accel[2] / length;

        //Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector
        //{Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0" in which we arbitrarily set x=0 and y=1
        matrix[1][0] = 0.0;
        matrix[1][1] = 1.0;
        matrix[1][2] = -accel[1] / accel[2];
        length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
        matrix[1][0] /= length;
        matrix[1][1] /= length;
        matrix[1][2] /= length;

        //Setup third matrix column as the cross product of the first two
        matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
        matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
        matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];

        //Finally load matrix
        glMultMatrixf((GLfloat*)matrix);
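
    One likely source of the 180-degree jump is the second-column construction above: dividing by accel[2] blows up and flips sign as the device crosses the horizon. A hedged alternative sketch, not from the original post, that builds the perpendicular columns with cross products against a reference axis instead:

        #include <math.h>
        #include <strings.h>

        /* Sketch only: build an orthonormal basis whose first column is the normalized
           gravity vector, without dividing by any single accelerometer axis. The
           reference axis is swapped when it gets too close to gravity. */
        static void buildGravityBasis(const float accel[3], float matrix[4][4])
        {
            float g[3], side[3], up[3], len;

            len = sqrtf(accel[0]*accel[0] + accel[1]*accel[1] + accel[2]*accel[2]);
            for (int i = 0; i < 3; i++) g[i] = accel[i] / len;

            /* Reference axis that is never parallel to gravity */
            float ref[3] = { 0.0f, 1.0f, 0.0f };
            if (fabsf(g[1]) > 0.9f) { ref[0] = 1.0f; ref[1] = 0.0f; }

            /* side = ref x g (normalized), up = g x side */
            side[0] = ref[1]*g[2] - ref[2]*g[1];
            side[1] = ref[2]*g[0] - ref[0]*g[2];
            side[2] = ref[0]*g[1] - ref[1]*g[0];
            len = sqrtf(side[0]*side[0] + side[1]*side[1] + side[2]*side[2]);
            for (int i = 0; i < 3; i++) side[i] /= len;

            up[0] = g[1]*side[2] - g[2]*side[1];
            up[1] = g[2]*side[0] - g[0]*side[2];
            up[2] = g[0]*side[1] - g[1]*side[0];

            bzero(matrix, 16 * sizeof(float));
            matrix[3][3] = 1.0f;
            for (int i = 0; i < 3; i++) {
                matrix[0][i] = g[i];     /* gravity              */
                matrix[1][i] = side[i];  /* perpendicular        */
                matrix[2][i] = up[i];    /* completes the basis  */
            }
        }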

    Read the article

  • Interfacing Android Nexus One with Arduino + BlueSmirf

    - by efgomez
    I'm a bit new to all of this, so bear with me - I'd really appreciate your help. I am trying to link the Android Nexus One with an Arduino (Duemilanove) that is connected to a BlueSmirf. I have a program that simply outputs the string "Hello Bluetooth" to whatever device the BlueSmirf is connected to. Here is the Arduino program:

        void setup(){
          Serial.begin(115200);
          int i;
        }
        void loop(){
          Serial.print("Hello Bluetooth!");
          delay(1000);
        }

    On my computer's BT terminal I can see the message and connect with no problem. The trouble is with my Android code. I can connect to the device with Android, but when I look at the log it is not displaying "Hello Bluetooth". Here is the debug log:

        04-09 16:27:49.022: ERROR/BTArduino(17288): FireFly-2583 connected
        04-09 16:27:49.022: ERROR/BTArduino(17288): STARTING TO CONNECT THE SOCKET
        04-09 16:27:55.705: ERROR/BTArduino(17288): Received: 16
        04-09 16:27:56.702: ERROR/BTArduino(17288): Received: 1
        04-09 16:27:56.712: ERROR/BTArduino(17288): Received: 15
        04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 1
        04-09 16:27:57.702: ERROR/BTArduino(17288): Received: 15
        04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 1
        04-09 16:27:58.704: ERROR/BTArduino(17288): Received: 15
        etc...

    Here is the code; I'm trying to include only the relevant code, but if you need more please let me know:

        private class ConnectThread extends Thread {
            private final BluetoothSocket mySocket;
            private final BluetoothDevice myDevice;

            public ConnectThread(BluetoothDevice device) {
                myDevice = device;
                BluetoothSocket tmp = null;
                try {
                    tmp = device.createRfcommSocketToServiceRecord(MY_UUID);
                } catch (IOException e) {
                    Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK");
                }
                mySocket = tmp;
            }

            public void run() {
                Log.e(TAG, "STARTING TO CONNECT THE SOCKET");
                InputStream inStream = null;
                boolean run = false;
                //...More connection code here...

    The more relevant code is here:

        byte[] buffer = new byte[1024];
        int bytes;

        // handle connection
        try {
            inStream = mySocket.getInputStream();
            while (run) {
                try {
                    bytes = inStream.read(buffer);
                    Log.e(TAG, "Received: " + bytes);
                } catch (IOException e3) {
                    Log.e(TAG, "disconnected");
                }
            }

    I am reading bytes = inStream.read(buffer). I know bytes is an integer, so I tried sending integers over Bluetooth because "bytes" was an integer, but it still didn't make sense. It almost appears that it is sending at an incorrect baud rate. Could this be true? Any help would be appreciated. Thank you very much.
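
    For what it's worth, the "Received: 16 / 1 / 15" lines in the log are just the return value of read() - the number of bytes read per call - not the payload. A minimal, hedged sketch (not from the original post) of logging the decoded bytes instead, reusing the same fields as the snippet above:

        // Sketch only: decode the bytes actually read before logging them.
        bytes = inStream.read(buffer);                   // how many bytes arrived
        String chunk = new String(buffer, 0, bytes);     // turn those bytes into text
        Log.e(TAG, "Received " + bytes + " bytes: " + chunk);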

    Read the article

  • When constructing a Bitmap with Bitmap.FromHbitmap(), how soon can the original bitmap handle be deleted?

    - by GBegen
    From the documentation of Image.FromHbitmap() at http://msdn.microsoft.com/en-us/library/k061we7x%28VS.80%29.aspx : "The FromHbitmap method makes a copy of the GDI bitmap; so you can release the incoming GDI bitmap using the GDI DeleteObject method immediately after creating the new Image." This pretty explicitly states that the bitmap handle can be immediately deleted with DeleteObject as soon as the Bitmap instance is created.

    Looking at the implementation of Image.FromHbitmap() with Reflector, however, shows that it is a pretty thin wrapper around the GDI+ function, GdipCreateBitmapFromHBITMAP(). There is pretty scant documentation on the GDI+ flat functions, but http://msdn.microsoft.com/en-us/library/ms533971%28VS.85%29.aspx says that GdipCreateBitmapFromHBITMAP() corresponds to the Bitmap::Bitmap() constructor that takes an HBITMAP and an HPALETTE as parameters.

    The documentation for this version of the Bitmap::Bitmap() constructor at http://msdn.microsoft.com/en-us/library/ms536314%28VS.85%29.aspx has this to say: "You are responsible for deleting the GDI bitmap and the GDI palette. However, you should not delete the GDI bitmap or the GDI palette until after the GDI+ Bitmap::Bitmap object is deleted or goes out of scope. Do not pass to the GDI+ Bitmap::Bitmap constructor a GDI bitmap or a GDI palette that is currently (or was previously) selected into a device context."

    Furthermore, one can see in the source code for the C++ portion of GDI+, in GdiPlusBitmap.h, that the Bitmap::Bitmap() constructor in question is itself a wrapper for the GdipCreateBitmapFromHBITMAP() function from the flat API:

        inline Bitmap::Bitmap(IN HBITMAP hbm, IN HPALETTE hpal)
        {
            GpBitmap *bitmap = NULL;
            lastResult = DllExports::GdipCreateBitmapFromHBITMAP(hbm, hpal, &bitmap);
            SetNativeImage(bitmap);
        }

    What I can't easily see is the implementation of GdipCreateBitmapFromHBITMAP() that is the core of this functionality, but the two remarks in the documentation seem to be contradictory. The .NET documentation says I can delete the bitmap handle immediately, and the GDI+ documentation says the bitmap handle must be kept until the wrapping object is deleted, but both are based on the same GDI+ function. Furthermore, the GDI+ documentation warns against using a source HBITMAP that is currently or previously selected into a device context. While I can understand why the bitmap should not currently be selected into a device context, I do not understand why there is a warning against using a bitmap that was previously selected into a device context. That would seem to prevent use of GDI+ bitmaps that had been created in memory using standard GDI.

    So, in summary:
    - Does the original bitmap handle need to be kept around until the .NET Bitmap object is disposed?
    - Does the GDI+ function, GdipCreateBitmapFromHBITMAP(), make a copy of the source bitmap or merely hold onto the handle to the original?
    - Why should I not use an HBITMAP that was previously selected into a device context?

    Read the article

  • Automapper Type Converter from String to IEnumerable<String> is not being called

    - by Anton
    Here is my custom type converter:

        public class StringListTypeConverter : TypeConverter<String, IEnumerable<String>>
        {
            protected override IEnumerable<string> ConvertCore(String source)
            {
                if (source == null)
                    yield break;
                foreach (var item in source.Split(','))
                    yield return item.Trim();
            }
        }

        public class Source
        {
            public String Some { get; set; }
        }

        public class Dest
        {
            public IEnumerable<String> Some { get; set; }
        }

        // ... configuration
        Mapper.CreateMap<String, IEnumerable<String>>().ConvertUsing<StringListTypeConverter>();
        Mapper.CreateMap<Source, Dest>();

    The problem: StringListTypeConverter is not being called at all. Dest.Some == null.
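
    One hedged workaround sketch (not from the original post): bypass the element-level converter for this member and do the split in a ForMember projection. Whether the String -> IEnumerable<String> converter itself should fire for an enumerable destination depends on the AutoMapper version in use.

        // Sketch only: map the Some member explicitly instead of relying on the
        // String -> IEnumerable<String> type converter being picked up.
        // Assumes "using System.Linq;".
        Mapper.CreateMap<Source, Dest>()
              .ForMember(d => d.Some,
                         opt => opt.MapFrom(s => s.Some == null
                             ? Enumerable.Empty<string>()
                             : s.Some.Split(',').Select(x => x.Trim())));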

    Read the article

  • PyQt error: No such signal QObject::dataChanged

    - by DSblizzard
    The PyQt application works fine, but when I close it Python shows this message: "Object::connect: No such signal QObject::dataChanged(QModelIndex,QModelIndex)". What is the cause of this? There isn't a dataChanged signal in the program. EDIT: An almost minimal program which causes the error:

        import sys
        from PyQt4.QtCore import *
        from PyQt4.QtGui import *
        from PyQt4.QtSql import *
        import ui_DBMainWindow

        global Mw, Table, Tv
        Id, Name, Size = range(3)

        class TTable():
            pass

        Table = TTable()

        class TMainWindow(QMainWindow, ui_DBMainWindow.Ui_MainWindow):
            def __init__(self, parent = None):
                global Table
                QMainWindow.__init__(self, parent)
                self.setupUi(self)
                self.showMaximized()
                self.mapper = QDataWidgetMapper(self)
                self.mapper.setModel(Table.Model)

        def main():
            global Mw, Table, Tv
            QApp = QApplication(sys.argv)
            DB = QSqlDatabase.addDatabase("QSQLITE")
            DB.setDatabaseName("1.db")
            Table.Model = QSqlTableModel()
            Table.Model.setTable("MainTable")
            Table.Model.select()
            Mw = TMainWindow()
            QApp.exec_()

        if __name__ == "__main__":
            main()

    Read the article

  • Json Jackson deserialization without inner classes

    - by Eto Demerzel
    Hi everyone, I have a question concerning JSON deserialization using Jackson. I would like to deserialize a JSON file using a class like this one (taken from http://wiki.fasterxml.com/JacksonInFiveMinutes):

        public class User {
            public enum Gender { MALE, FEMALE };

            public static class Name {
                private String _first, _last;
                public String getFirst() { return _first; }
                public String getLast() { return _last; }
                public void setFirst(String s) { _first = s; }
                public void setLast(String s) { _last = s; }
            }

            private Gender _gender;
            private Name _name;
            private boolean _isVerified;
            private byte[] _userImage;

            public Name getName() { return _name; }
            public boolean isVerified() { return _isVerified; }
            public Gender getGender() { return _gender; }
            public byte[] getUserImage() { return _userImage; }
            public void setName(Name n) { _name = n; }
            public void setVerified(boolean b) { _isVerified = b; }
            public void setGender(Gender g) { _gender = g; }
            public void setUserImage(byte[] b) { _userImage = b; }
        }

    A JSON file can be deserialized using the so-called "Full Data Binding" in this way:

        ObjectMapper mapper = new ObjectMapper();
        User user = mapper.readValue(new File("user.json"), User.class);

    My problem is the usage of the inner class "Name". I would like to do the same thing without using inner classes. The "User" class would become like this:

        import Name;
        import Gender;

        public class User {
            private Gender _gender;
            private Name _name;
            private boolean _isVerified;
            private byte[] _userImage;

            public Name getName() { return _name; }
            public boolean isVerified() { return _isVerified; }
            public Gender getGender() { return _gender; }
            public byte[] getUserImage() { return _userImage; }
            public void setName(Name n) { _name = n; }
            public void setVerified(boolean b) { _isVerified = b; }
            public void setGender(Gender g) { _gender = g; }
            public void setUserImage(byte[] b) { _userImage = b; }
        }

    This means finding a way to tell the mapper about all the required classes so it can perform the deserialization. Is this possible? I looked at the documentation but I cannot find any solution. My need comes from the fact that I use the Javassist library to create such classes, and it does not support inner or anonymous classes. Thank you in advance.
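
    For what it's worth, Jackson binds to ordinary top-level bean classes as well; nothing requires Name to be nested. A hedged sketch (the package name is made up for illustration) in which Name is a stand-alone type and the binding call stays the same:

        // Sketch only: Name as a top-level class in a hypothetical package.
        package example.model;

        public class Name {
            private String _first, _last;
            public String getFirst() { return _first; }
            public String getLast() { return _last; }
            public void setFirst(String s) { _first = s; }
            public void setLast(String s) { _last = s; }
        }

        // Elsewhere: User keeps "private Name _name;" exactly as above, and the
        // mapper still binds by matching JSON property names to the setters:
        //   ObjectMapper mapper = new ObjectMapper();
        //   User user = mapper.readValue(new File("user.json"), User.class);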

    Read the article

  • Accessing a Service from within an XNA Content Pipeline Extension

    - by David Wallace
    I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type:

        public delegate T Mapper<T>(MapFactory<T> mf, XElement d);

        public class MapFactory<T>
        {
            Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>();

            public void Add(string s, Mapper<T> m)
            {
                map.Add(s, m);
            }

            public T Get(XElement xe)
            {
                if (xe == null)
                    throw new ArgumentNullException("Invalid document");
                var key = xe.Name.ToString();
                if (!map.ContainsKey(key))
                    throw new ArgumentException(key + " is not a valid key.");
                return map[key](this, xe);
            }

            public IEnumerable<T> GetAll(XElement xe)
            {
                if (xe == null)
                    throw new ArgumentNullException("Invalid document");
                foreach (var e in xe.Elements())
                {
                    var val = e.Name.ToString();
                    if (map.ContainsKey(val))
                        yield return map[val](this, e);
                }
            }
        }

    Here is one type of object I want to store:

        public partial class TestContent
        {
            // Test type
            public string title;
            // Once test if true
            public bool once;
            // Parameters
            public Dictionary<string, object> args;

            public TestContent()
            {
                title = string.Empty;
                args = new Dictionary<string, object>();
            }

            public TestContent(XElement xe)
            {
                title = xe.Name.ToString();
                args = new Dictionary<string, object>();
                xe.ParseAttribute("once", once);
            }
        }

    XElement.ParseAttribute is an extension method that works as one might expect. It returns a boolean that is true if successful. The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem.

        public class LogicTest : TestBase
        {
            string opkey;
            List<TestBase> items;

            public override bool Test(BehaviorArgs args)
            {
                if (items == null) return false;
                if (items.Count == 0) return false;
                bool result = items[0].Test(args);
                for (int i = 1; i < items.Count; i++)
                {
                    bool other = items[i].Test(args);
                    switch (opkey)
                    {
                        case "And": result &= other; if (!result) return false; break;
                        case "Or": result |= other; if (result) return true; break;
                        case "Xor": result ^= other; break;
                        case "Nand": result = !(result & other); break;
                        case "Nor": result = !(result | other); break;
                        default: result = false; break;
                    }
                }
                return result;
            }

            public static TestContent Build(MapFactory<TestContent> mf, XElement xe)
            {
                var result = new TestContent(xe);
                string key = "Or";
                xe.GetAttribute("op", key);
                result.args.Add("key", key);
                var names = mf.GetAll(xe).ToList();
                if (names.Count() < 2)
                    throw new ArgumentException("LogicTest requires at least two entries.");
                result.args.Add("items", names);
                return result;
            }
        }

    My actual code is more involved, as the factory has two dictionaries: one that turns an XElement into a content type to write, and another used by the reader to create the actual game objects. I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter have any known hooks to attach an IServiceProvider or equivalent. Any ideas?

    Read the article

  • AutoMapper and SecurityException in IIS

    - by Felipe
    Hi everybody... I'm developing an ASP.NET MVC application with NHibernate, and I would not like to expose my NHibernate object mappings, so I created a DTO for each entity and I'm trying to convert my domain objects to DTOs and send those to the view. So I have in my solution:

    - a class library with my Domain (for NHibernate) and DTO objects
    - a class library to make a SessionFactory and factories for my project
    - the ASP.NET MVC 2 application

    So, I downloaded AutoMapper to transform domain objects into DTOs and added the code to do this in Application_Start of global.asax. When I run it in Visual Studio (by pressing F5) it works fine and my DTOs reach the view. But when I publish this to IIS, I get a security exception =( on the first line of the conversion:

        Mapper.CreateMap();   // <--- this line throws the exception
        Mapper.CreateMap();

        System.Security.SecurityException: Failed request for the permission of type
        'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0,
        Culture=neutral, PublicKeyToken=b77a5c561934e089'.

    What can I do to resolve this so it works in IIS? When I publish it on the web server, I will get the error there too :( Thanks. Cheers

    Read the article

  • Linux: How to find all serial devices (ttyS, ttyUSB, ..) without opening them?

    - by Thomas Tempelmann
    What is the proper way to get a list of all available serial ports/devices on a Linux system? In other words, when I iterate over all devices in /dev/, how do I tell which ones are serial ports in the classic way, i.e. those usually supporting baud rates and RTS/CTS flow control? The solution would be coded in C. I ask because I am using a 3rd party library that does this clearly wrong: It appears to only iterate over /dev/ttyS*. The problem is that there are, for instance, serial ports over USB (provided by USB-RS232 adapters), and those are listed under /dev/ttyUSB*. And reading the Serial-HOWTO at Linux.org, I get the idea that there'll be other name spaces as well, as time comes. So I need to find the official way to detect serial devices. Problem is that there appears none documented, or I can't find it. I imagine one way would be to open all files from /dev/tty* and call a specific ioctl() on them that is only available on serial devices. Would that be a good solution, though? Update hrickards suggested to look at the source for "setserial". Its code does exactly what I had in mind: First, it opens a device with: fd = open (path, O_RDWR | O_NONBLOCK) Then it invokes: ioctl (fd, TIOCGSERIAL, &serinfo) If that call returns no error, then it's a serial dev, apparently. I found similar code here, which suggested to also add the O_NOCTTY option. There is one problem with this approach, though: When I tested this code on BSD Unix (i.e. OSX), it worked as well, however serial devices that are provided thru Bluetooth cause the system (driver) to try to connect to the bluetooth device, which takes a while before it'll return with a timeout error. This is caused by just opening the device. And I can imagine that similar things can happen on Linux as well - ideally, I should not need to open the device to figure out its type. I wonder if there's also a way to invoke ioctl functions without an open, or open a device in a way that it does not cause connections to be made? Any ideas?
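
    A minimal sketch of the probe described above (open with O_RDWR | O_NONBLOCK | O_NOCTTY, then TIOCGSERIAL), assuming Linux headers; it does not avoid the open() call, so the Bluetooth side effect mentioned at the end still applies:

        /* Sketch only: treat an error-free TIOCGSERIAL as "this node is a serial port". */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/serial.h>

        static int is_serial_port(const char *path)
        {
            struct serial_struct serinfo;
            int fd = open(path, O_RDWR | O_NONBLOCK | O_NOCTTY);
            if (fd < 0)
                return 0;                                   /* cannot open: not usable */
            int ok = (ioctl(fd, TIOCGSERIAL, &serinfo) == 0); /* serial drivers answer this */
            close(fd);
            return ok;
        }

        int main(void)
        {
            printf("/dev/ttyS0:   %d\n", is_serial_port("/dev/ttyS0"));
            printf("/dev/ttyUSB0: %d\n", is_serial_port("/dev/ttyUSB0"));
            return 0;
        }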

    Read the article

  • How do I set a default page in Pylons?

    - by Evgeny
    I've created a new Pylons application and added a controller ("main.py") with a template ("index.mako"). Now the URL http://myserver/main/index works. How do I make this the default page, i.e. the one returned when I browse to http://myserver/ ? I've already added a default route in routing.py:

        def make_map():
            """Create, configure and return the routes Mapper"""
            map = Mapper(directory=config['pylons.paths']['controllers'],
                         always_scan=config['debug'])
            map.minimization = False

            # The ErrorController route (handles 404/500 error pages); it should
            # likely stay at the top, ensuring it can always be resolved
            map.connect('/error/{action}', controller='error')
            map.connect('/error/{action}/{id}', controller='error')

            # CUSTOM ROUTES HERE
            map.connect('', controller='main', action='index')

            map.connect('/{controller}/{action}')
            map.connect('/{controller}/{action}/{id}')

            return map

    I've also deleted the contents of the public directory (except for favicon.ico), following the answer to http://stackoverflow.com/questions/1279403/default-route-doesnt-work Now I just get error 404. What else do I need to do to get such a basic thing to work?
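
    A hedged guess at the missing piece, given that minimization is disabled in the snippet above: with map.minimization = False, the root URL is matched by '/' rather than by the empty string. A minimal sketch of just the custom-route line:

        # Sketch only: with map.minimization = False, map the root path explicitly.
        map.connect('/', controller='main', action='index')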

    Read the article

  • ./configure : /bin/sh^M : bad interpreter

    - by Vineeth
    Hello there, I've been trying to install lpng142 on my Fedora 12 system. Seems like a problem to me. I get this error:

        [root@localhost lpng142]# ./configure
        bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
        [root@localhost lpng142]#

    How do I fix this? For more details, I shall include the /etc/fstab file contents here:

        #
        # /etc/fstab
        # Created by anaconda on Wed May 26 18:12:05 2010
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/VolGroup-lv_root                /          ext4    defaults        1 1
        UUID=ce67cf79-22c3-45d4-8374-bd0075617cc8   /boot      ext4    defaults        1 2
        /dev/mapper/VolGroup-lv_swap                swap       swap    defaults        0 0
        tmpfs                                       /dev/shm   tmpfs   defaults        0 0
        devpts                                      /dev/pts   devpts  gid=5,mode=620  0 0
        sysfs                                       /sys       sysfs   defaults        0 0
        proc                                        /proc      proc    defaults        0 0
        [root@localhost etc]#

    Help, please.
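
    The ^M in the error is a carriage return: the configure script has DOS (CRLF) line endings, so the kernel looks for an interpreter literally named "/bin/sh\r". A hedged fix sketch (assumes dos2unix or sed is installed):

        # Strip the carriage returns from the script, then re-run it.
        dos2unix configure
        # or, if dos2unix is not available:
        sed -i 's/\r$//' configure
        ./configure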

    Read the article

  • Parse simple JSON with Jackson

    - by siik
    Here is my JSON:

        {
            "i": 53691,
            "s": "Something"
        }

    Here is my model:

        public class Test {
            private int i;
            private String s;

            public void setInt(int i) { this.i = i; }
            public void setString(String s) { this.s = s; }
            // getters here
        }

    Here is my class for the server's response:

        public class ServerResponse {
            private Test test;

            public void setTest(Test test) { this.test = test; }
            public Test getTest() { return test; }
        }

    When I do:

        ObjectMapper mapper = new ObjectMapper();
        mapper.readValue(json, ServerResponse.class);

    I'm getting an exception like:

        JsonProcessingException: Unrecognized field "i" (Class MyClass), not marked as ignorable

    Please advise.
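
    For what it's worth, two mismatches are visible above: the JSON is flat (there is no wrapping "test" object), and Jackson derives property names from the accessors (setInt/setString imply "int"/"string", not "i"/"s"). A hedged sketch, not from the original post, of a model whose property names line up with the JSON keys and is bound directly:

        // Sketch only: accessor-derived names ("i", "s") now match the JSON keys.
        public class Test {
            private int i;
            private String s;
            public int getI() { return i; }
            public String getS() { return s; }
            public void setI(int i) { this.i = i; }
            public void setS(String s) { this.s = s; }
        }

        // ObjectMapper mapper = new ObjectMapper();
        // Test t = mapper.readValue(json, Test.class);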

    Read the article

  • security roles in grails portlets

    - by srinath
    Hi, how do I include security roles in Grails portlets for Liferay? After deploying the war in Tomcat, I added these lines for the roles manually:

    liferay-portlet.xml:

        <role-mapper>
            <role-name>administrator</role-name>
            <role-link>Administrator</role-link>
        </role-mapper>

    portlet.xml:

        <security-role-ref>
            <role-name>administrator</role-name>
        </security-role-ref>

    But how do I add these role settings in the Grails app before creating the war? Please suggest. Thanks in advance, sri.

    Read the article

  • iPhone current user location coordinates showing as (0,0)

    - by ennuikiller
    I'm trying to get the user's current latitude and longitude with this viewDidLoad method. The resulting map correctly indicates the current location; however, the NSLog consistently shows:

        2009-09-19 16:45:29.765 Mapper[671:207] user latitude = 0.000000
        2009-09-19 16:45:29.772 Mapper[671:207] user longitude = 0.000000

    Anyone know what I am missing here? Thanks in advance for your help!

        - (void)viewDidLoad {
            [super viewDidLoad];
            [mapView setMapType:MKMapTypeStandard];
            [mapView setZoomEnabled:YES];
            [mapView setScrollEnabled:YES];
            [mapView setShowsUserLocation:YES];

            CLLocation *userLoc = mapView.userLocation.location;
            CLLocationCoordinate2D userCoordinate = userLoc.coordinate;
            NSLog(@"user latitude = %f", userCoordinate.latitude);
            NSLog(@"user longitude = %f", userCoordinate.longitude);
        }

    Read the article

  • Dependency Injection and Unit of Work pattern

    - by sunwukung
    I have a dilemma. I've used DI (read: factory) to provide core components for a homebrew ORM. The container provides database connections, DAOs, mappers and their resultant domain objects on request. Here's a basic outline of the Mapper and Domain Object classes:

        class Mapper {
            public function __construct($DAO) {
                $this->DAO = $DAO;
            }

            public function load($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->DAO->selectStmt($id);
                //field mapping process omitted for brevity
                $Object = new Object($values);
                return $Object;
            }
        }

        class User {
            public function setName($string) {
                $this->name = $string;
                //mark modified by means fair or foul
            }
        }

    The ORM also contains a class (Monitor) based on the Unit of Work pattern, i.e.

        class Monitor {
            private static $modified;
            private static $dirty;

            public function markClean($class) { /* ... */ }
            public function markModified($class) { /* ... */ }
        }

    The ORM class itself simply co-ordinates resources extracted from the DI container. So, to instantiate a new User object:

        $Container = new DI_Container;
        $ORM = new ORM($Container);
        $User = $ORM->load('user', 1);
        //at this point the container instantiates a mapper class
        //and passes a database connection to it via the constructor
        //the mapper then takes the second argument and loads the user with that id
        $User->setName('Rumpelstiltskin'); //at this point, User must mark itself as "modified"

    My question is this. At the point when a user sets values on a Domain Object class, I need to mark the class as "dirty" in the Monitor class. As I see it, I have one of three options:

    1: Pass an instance of the Monitor class to the Domain Object. I noticed this gets marked as recursive in FirePHP - i.e. $this->Monitor->markModified($this)
    2: Instantiate the Monitor directly in the Domain Object - does this break DI?
    3: Make the Monitor methods static, and call them from inside the Domain Object - this breaks DI too, doesn't it?

    What would be your recommended course of action (other than use an existing ORM - I'm doing this for fun...)?
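
    A small hedged sketch of option 1 (constructor-injecting the monitor into the domain object), with made-up wiring, just to show the shape being discussed:

        // Sketch only: the factory hands the same Monitor instance to every domain
        // object it builds; setters then report changes to that instance.
        class User {
            private $monitor;
            private $name;

            public function __construct(Monitor $monitor) {
                $this->monitor = $monitor;
            }

            public function setName($string) {
                $this->name = $string;
                $this->monitor->markModified($this); // no static call, no "new Monitor" here
            }
        }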

    Read the article

  • SQLAlchemy automatically converts str to unicode on commit

    - by Victor Stanciu
    Hello, when inserting an object into a database with SQLAlchemy, all its properties that correspond to String() columns are automatically transformed from <type 'str'> to <type 'unicode'>. Is there a way to prevent this behavior? Here is the code:

        from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData
        from sqlalchemy.orm import mapper, sessionmaker

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        table = Table('projects', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(50))
        )

        class Project(object):
            def __init__(self, name):
                self.name = name

        mapper(Project, table)
        metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        project = Project("Lorem ipsum")
        print(type(project.name))
        session.add(project)
        session.commit()
        print(type(project.name))

    And here is the output:

        <type 'str'>
        <type 'unicode'>

    I know I should probably just work with unicode, but this would involve digging through some third-party code and I don't have the Python skills for that yet :)
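
    One hedged workaround sketch (not from the original post): the unicode values appear when the row is reloaded after commit, because the sqlite3 driver decodes TEXT columns to unicode by default. Its text_factory can be switched to str via a custom connection creator; whether bytestrings are really what you want back is another matter.

        # Sketch only: make the sqlite3 driver return TEXT columns as bytestrings.
        import sqlite3
        from sqlalchemy import create_engine

        def _connect():
            conn = sqlite3.connect(':memory:')
            conn.text_factory = str   # hand TEXT back as str instead of unicode
            return conn

        engine = create_engine('sqlite://', creator=_connect, echo=False)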

    Read the article

  • Use Automapper to flatten sub-class of property

    - by Neil
    Given the classes:

        public class Person
        {
            public string Name { get; set; }
        }

        public class Student : Person
        {
            public int StudentId { get; set; }
        }

        public class Source
        {
            public Person Person { get; set; }
        }

        public class Dest
        {
            public string PersonName { get; set; }
            public int? PersonStudentId { get; set; }
        }

    I want to use AutoMapper to map Source -> Dest. This test obviously fails:

        Mapper.CreateMap<Source, Dest>();

        var source = new Source() { Person = new Student() { Name = "J", StudentId = 5 } };
        var dest = Mapper.Map<Source, Dest>(source);

        Assert.AreEqual(5, dest.PersonStudentId);

    What would be the best approach to mapping this, given that "Person" is actually a heavily used data type throughout our domain model?
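
    A hedged sketch (not from the original post) of one way to make the failing assertion above pass with the static API: map the subtype member explicitly and fall back to null when the Person is not a Student. PersonName should still flatten by convention from Source.Person.Name.

        // Sketch only: explicit member mapping with a runtime type check.
        Mapper.CreateMap<Source, Dest>()
              .ForMember(d => d.PersonStudentId,
                         opt => opt.MapFrom(s => s.Person is Student
                             ? ((Student)s.Person).StudentId
                             : (int?)null));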

    Read the article

  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
    - Don't need all the formatting facilities of stream or possibly even the buffering of streambuf
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation. Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream
        {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream
        {
        public:
            boost_istream(const T& device) : m_device(device)
            {
            }

            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way)
            {
                return boost::iostreams::seek(m_device, off, way);
            }

            virtual std::streamsize read(char* s, std::streamsize n)
            {
                return boost::iostreams::read(m_device, s, n);
            }

            virtual void close()
            {
                boost::iostreams::close(m_device);
            }

        private:
            T m_device;
        };

    Read the article

  • Resize the /var directory in redhat enterprise edition 4

    - by Sri
    I am running NDB MySQL. The log files fill up the /var directory, so now I can't start the ndbd service. As a temporary fix, I deleted the log files and it is working again, but the log files keep filling up the /var directory. I have plenty of space in another partition, so I would like to move space from one volume to /var. Here is my output from df -h:

        Filesystem                       Type   Size  Used  Avail  Use%  Mounted on
        /dev/mapper/VolGroup00-LogVol00  ext3    54G  2.9G    49G    6%  /
        /dev/cciss/c0d0p1                ext3    99M   14M    81M   14%  /boot
        none                             tmpfs 1013M     0  1013M    0%  /dev/shm
        /dev/cciss/c0d0p2                ext3   9.7G  9.7G      0  100%  /var

    There is plenty of space in /dev/mapper/VolGroup00-LogVol00, so I would like to move 10 GB of space from that volume to /var. Could you please help me solve this problem?

    Read the article

  • Displaying a notification when bluetooth is disconnected - Android

    - by Ryan T
    I am trying to create a program that will display a notification to the user if a Bluetooth device suddenly goes out of range of my Android device. I currently have the following code, but no notification is displayed. I was wondering whether perhaps I shouldn't use ACTION_ACL_DISCONNECTED, because I believe the Bluetooth stack would be expecting packets that state a disconnect is requested; my requirements state that the Bluetooth device will disconnect without warning. Thank you for any assistance!

    BluetoothNotification.java (this is where the notification is created):

        import android.app.Activity;
        import android.app.Notification;
        import android.app.NotificationManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;
        import android.os.Bundle;

        public class BluetoothNotification extends Activity {
            public static final int NOTIFICATION_ID = 1;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                /** Define configuration for our notification */
                int icon = R.drawable.logo;
                CharSequence tickerText = "This is a sample notification";
                long when = System.currentTimeMillis();
                Context context = getApplicationContext();
                CharSequence contentTitle = "Sample notification";
                CharSequence contentText = "This notification has been generated as a result of BT Disconnecting";
                Intent notificationIntent = new Intent(this, BluetoothNotification.class);
                PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0);

                /** Initialize the Notification using the above configuration */
                final Notification notification = new Notification(icon, tickerText, when);
                notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent);

                /** Retrieve reference from NotificationManager */
                String ns = Context.NOTIFICATION_SERVICE;
                final NotificationManager mNotificationManager = (NotificationManager) getSystemService(ns);
                mNotificationManager.notify(NOTIFICATION_ID, notification);
                finish();
            }
        }

    Snippet from onCreate (located in Controls.java):

        IntentFilter filter1 = new IntentFilter(BluetoothDevice.ACTION_ACL_DISCONNECTED);
        this.registerReceiver(mReceiver, filter1);

    Snippet from Controls.java:

        private final BroadcastReceiver mReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);

                if (BluetoothDevice.ACTION_ACL_DISCONNECTED.equals(action)) {
                    // Device has disconnected
                    NotificationManager nm = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
                }
            }
        };
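
    For what it's worth, the receiver above only obtains a NotificationManager and never calls notify(), so nothing is posted even if ACTION_ACL_DISCONNECTED does fire. A hedged sketch of posting the notification from inside onReceive, reusing the same (pre-API-11) Notification calls the question already uses:

        // Sketch only: build and post the notification when the disconnect broadcast arrives.
        if (BluetoothDevice.ACTION_ACL_DISCONNECTED.equals(action)) {
            NotificationManager nm =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

            Notification notification = new Notification(R.drawable.logo,
                    "Bluetooth device disconnected", System.currentTimeMillis());
            PendingIntent contentIntent = PendingIntent.getActivity(context, 0,
                    new Intent(context, BluetoothNotification.class), 0);
            notification.setLatestEventInfo(context, "Bluetooth",
                    device.getName() + " went out of range", contentIntent);

            nm.notify(1, notification);
        }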

    Read the article

  • ASP.NET MVC 4/Web API Single Page App for Mobile Devices ... Needs Authentication

    - by lmttag
    We have developed an ASP.NET MVC 4/Web API single page, mobile website (also using jQuery Mobile) that is intended to be accessed only from mobile devices (e.g., iPads, iPhones, Android tables and phones, etc.), not desktop browsers. This mobile website will be hosted internally, like an intranet site. However, since we’re accessing it from mobile devices, we can’t use Windows authentication. We still need to know which user (and their role) is logging in to the mobile website app. We tried simply using ASP.NET’s forms authentication and membership provider, but couldn’t get it working exactly the way we wanted. What we need is for the user to be prompted for a user name and password only on the first time they access the site on their mobile device. After they enter a correct user name and password and have been authenticated once, each subsequent time they access the site they should just go right in. They shouldn’t have to re-enter their credentials (i.e., something needs to be saved locally to each device to identify the user after the first time). This is where we had troubles. Everything worked as expected the first time. That is, the user was prompted to enter a user name and password, and, after doing that, was authenticated and allowed into the site. The problem is every time after the browser was closed on the mobile device, the device and user were not know and the user had to re-enter user name and password. We tried lots of things too. We tried setting persistent cookies in JavaScript. No good. The cookies weren’t there to be read the second time. We tried manually setting persistent cookies from ASP.NET. No good. We, of course, used FormsAuthentication.SetAuthCookie(model.UserName, true); as part of the form authentication framework. No good. We tried using HTML5 local storage. No good. No matter what we tried, if the user was on a mobile device, they would have to log in every single time. (Note: we’ve tried on an iPad and iPhone running both iOS 5.1 and 6.0, with Safari configure to allow cookies, and we’ve tried on Android 2.3.4.) Is there some trick to getting a scenario like this working? Or, do we have to write some sort of custom authentication mechanism? If so, how? And, what? Or, should we use something like claims-based authentication and WIF? Or??? Any help is appreciated. Thanks!

    Read the article

  • Empty dict-like collection problem in SQLAlchemy

    - by maksymko
    I have a mapping in SQLAlchemy that looks like this:

        t_property_value = sa.Table('property_value', MetaData,
            autoload = True, autoload_with = engine)
        orm.mapper(PropertyValue, t_property_value)

        t_estate = sa.Table('estate', MetaData,
            autoload = True, autoload_with = engine)
        orm.mapper(Estate, t_estate, properties = dict(
            property_hash = orm.relation(PropertyValue,
                collection_class = column_mapped_collection(t_property_value.c.property_id))
        ))

    Now, everything seems to be fine when I load the Estate object and it has some relations to PropertyValue objects. However, when it does not, the property_hash attribute is None instead of being something dict-like, so I cannot add new relations like this:

        estate.property_hash[prop_id] = PropertyValue(...)

    because I get the "'NoneType' object does not support item assignment" error. So, is there any way to force SQLAlchemy to create a proper empty collection?

    Read the article

  • file to map/reduce program

    - by vana
    Hi, I am working on extracting parts of speech (POS) from XML documents, and I have an englishPCFG.ser.gz file which is used when extracting POS from the XML files. I cannot send this .gz file as input in the HDFS directory, but my program uses it for parsing the XML files. The file is in my local directory, and I am getting a "File Not Found" error when I run my MapReduce program. How can I make it available to the mapper? I tried placing the file in HDFS where my XML files are present. I also tried adding it to the .jar along with the class files, but no luck. I tried changing hdfs-default.xml with an entry pointing to the local directory; that still doesn't work. Please let me know how to make the mapper read this file. Thank you.
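
    A hedged sketch of one common approach (the classic DistributedCache API; the HDFS path and Hadoop version are assumptions): ship the file with the job so each mapper can read a node-local copy.

        // Sketch only, against org.apache.hadoop.filecache.DistributedCache.
        // Driver side: register the file when the job is configured.
        DistributedCache.addCacheFile(new URI("/user/me/englishPCFG.ser.gz"), conf);

        // Mapper side: look up the local copy in configure()/setup().
        Path[] cached = DistributedCache.getLocalCacheFiles(conf);
        // cached[0] now points at a node-local copy of englishPCFG.ser.gz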

    Read the article

  • Hadoop Map Reduce job never finishes

    - by rohanbk
    I am running a Hadoop Map Reduce job using a Python Mapper and Reducer script, and Hadoop Streaming. Both my Map and Reduce jobs run till they are both 100%, but the job doesn't end. I know that when things go sour, Hadoop will terminate the job, but in this case, both stages reach a 100% and just never end. Has anyone else encountered anything similar? Also, how do I debug my program to figure out where things are going wrong? If I use a smaller input file, and I just run something like: $> cat input_file | mapper.py | sort | reduce.py >> output_file everything works perfectly fine. However, when I use Hadoop, things don't work out.
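
    One hedged debugging aid (a Hadoop Streaming feature; exact behavior may vary by version): have the reducer report progress on stderr, which updates the task status in the web UI and makes it easier to see where the 100%-but-never-finishing stage is actually stuck.

        #!/usr/bin/env python
        # Sketch only: emit streaming status/counter lines from reduce.py so the
        # task's progress stays visible while it consumes stdin.
        import sys

        for n, line in enumerate(sys.stdin):
            if n % 10000 == 0:
                sys.stderr.write("reporter:status:processed %d lines\n" % n)
                sys.stderr.write("reporter:counter:reducer,lines,10000\n")
            # ... real reduce logic on `line` goes here ...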

    Read the article

  • trying to make a scheme procedure

    - by asfejaeofijfe
    I'm trying to make a Scheme procedure called make-odd-mapper!. It's supposed to be a procedure that takes one input, a procedure, and produces a procedure as an output. For example:

        (define i4 (mlist 10 2 30 4))
        i4
        {10 2 30 4}
        ((make-odd-mapper! add-one) i4)
        i4
        {11 2 31 4}

    I know the problem needs to mutate the input list, and that set-mcar! and void are a part of it. Could anyone give me some reasonable lines of code to solve this? It would be useful for anyone wondering about mutation, and about using it to create a procedure that produces a procedure as its output.
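
    A hedged sketch in Racket-style mutable pairs (assuming "odd" means the 1st, 3rd, 5th, ... elements, as the example above suggests, and that add-one is defined elsewhere):

        ; Sketch only: returns a procedure that walks the mutable list and
        ; applies f to the elements in odd positions, mutating them in place.
        (define (make-odd-mapper! f)
          (lambda (mlst)
            (let loop ((p mlst) (odd-position? #t))
              (unless (null? p)
                (when odd-position?
                  (set-mcar! p (f (mcar p))))
                (loop (mcdr p) (not odd-position?))))
            (void)))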

    Read the article
