Search Results

Search found 17924 results on 717 pages for 'z order'.


  • Scared of Calculus - Required to pass Differential Calculus as part of my Computer science major

    - by ke3pup
    Hi guys, I'm finishing my Computer Science degree at university, but my fear of maths (lack of background knowledge) made me leave all my maths units until the very end, which is now. I either take them on and pass or have to give up. I've passed all my programming units easily, but knowing my poor maths skills won't do, I've been steering clear of the maths units. I have to pass Differential Calculus and Linear Algebra first. With the help of a book named "Linear Algebra: A Modern Introduction" I'm finding myself on track and I think I can pass the Linear Algebra unit. But for differential calculus I can't find a book to help me; they're either too advanced or too simple for what I have to learn. The things I'm required to know for this unit are: set notation, the real number line, complex numbers in Cartesian form; the complex plane, modulus; complex numbers in polar form; De Moivre's Theorem; complex powers and nth roots; the definition of e^(iθ) and e^z for complex z; applications to trigonometry; revision of the domain and range of a function; working in R^3; curves and surfaces; functions of two variables; level curves; partial derivatives and tangent planes; the derivative as a difference quotient; geometric significance of the derivative; discussion of limits; higher-order partial derivatives; limits of f(x,y); continuity; maxima and minima of f(x,y); the chain rule; implicit differentiation; directional derivatives and the gradient; limit laws, l'Hôpital's rule, the composition law; the definition of sinh and cosh and their inverses; Taylor polynomials; the remainder term; Taylor series. Is there a book to help me get on track with the above? Being a student I can't buy too many books, hence why I'm looking for a book that covers the topics I need to know. The university library has a fairly limited collection which I took out on loan but didn't find useful, as it was too complex.
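
    Since the list names De Moivre's Theorem and the complex exponential without stating them, here is the standard formulation of both, independent of any particular textbook, just to show the level of the material:

        \[(\cos\theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta \quad (n \in \mathbb{Z})\]
        \[e^{i\theta} = \cos\theta + i\sin\theta, \qquad e^{z} = e^{x}(\cos y + i\sin y) \ \text{for } z = x + iy\]

    The nth roots required by the same unit follow directly: the n solutions of \(z^n = r(\cos\phi + i\sin\phi)\) are \(z_k = r^{1/n}\left(\cos\tfrac{\phi+2k\pi}{n} + i\sin\tfrac{\phi+2k\pi}{n}\right)\) for \(k = 0, 1, \dots, n-1\).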

    Read the article

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have for hardware acceleration in Java: 1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it? 2) I have seen a couple articles here and there about the difference between a BufferedImage and VolatileImage, mainly saying that VolatileImage is the hardware accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated as well in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of having a VolatileImage in the case of both having acceleration is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also support acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case that the memory is dumped? 3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read() In a tutorial I have been reading for some general concepts about setting up the rendering window properly (tutorial) it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get their "hardware accelerated" images for fast drawing, which ties into question 2 about if it is hardware accelerated. 4) This is less hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once to reduce the number of times the current texture needs to be switch. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly. 5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other type of way I should be implementing this? Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
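
    On questions 2 and 3, a minimal Java2D sketch of the usual pattern may help: ask the GraphicsConfiguration for images that match the screen's pixel format, and revalidate a VolatileImage whose VRAM contents can be lost at any time. This is a general illustration of the standard java.awt API, not a claim about what the asker's particular JDK/OS X combination actually accelerates; the sizes and the drawing code are placeholders:

        import java.awt.*;
        import java.awt.image.BufferedImage;
        import java.awt.image.VolatileImage;

        // Sketch: obtain screen-compatible images and handle VolatileImage surface loss.
        public final class CompatibleImages {
            public static void main(String[] args) {
                GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice().getDefaultConfiguration();

                // A BufferedImage in the screen's pixel format; its contents never vanish,
                // and the runtime may cache an accelerated copy of it behind the scenes.
                BufferedImage sprite = gc.createCompatibleImage(64, 64, Transparency.TRANSLUCENT);

                // A VolatileImage lives in VRAM when possible and must be revalidated,
                // because its contents can be lost (e.g. on a display mode change).
                VolatileImage surface = gc.createCompatibleVolatileImage(640, 480);
                do {
                    if (surface.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                        surface = gc.createCompatibleVolatileImage(640, 480); // recreate if unusable
                    }
                    Graphics2D g = surface.createGraphics();
                    try {
                        g.setColor(Color.BLACK);
                        g.fillRect(0, 0, 640, 480);
                        g.drawImage(sprite, 100, 100, null); // placeholder drawing
                    } finally {
                        g.dispose();
                    }
                } while (surface.contentsLost()); // redo the frame if the VRAM copy was dumped
            }
        }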

    Read the article

  • Linux Device Driver: Symbol "memcpy" not found

    - by Hinton
    Hello, I'm trying to write a Linux device driver. I've got it to work really well, until I tried to use "memcpy". I don't even get a compiler error, when I "make" it just warns me: WARNING: "memcpy" [/root/homedir/sv/main.ko] undefined! OK and when I try to load via insmod, I get on the console: insmod: error inserting './main.ko': -1 Unknown symbol in module and on dmesg: main: Unknown symbol memcpy (err 0) I include the following: #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/init.h> #include <linux/kernel.h> /* printk() */ #include <linux/slab.h> /* kmalloc() */ #include <linux/fs.h> /* everything... */ #include <linux/errno.h> /* error codes */ #include <linux/types.h> /* size_t */ #include <linux/fcntl.h> /* O_ACCMODE */ #include <linux/cdev.h> #include <asm/system.h> /* cli(), *_flags */ #include <asm/uaccess.h> /* copy_*_user */ The function using memcpy: static int dc_copy_to_user(char __user *buf, size_t count, loff_t *f_pos, struct sv_data_dev *dev) { char data[MAX_KEYLEN]; size_t i = 0; /* Copy the bulk as long as there are 10 more bytes to copy */ while (i < (count + MAX_KEYLEN)) { memcpy(data, &dev->data[*f_pos + i], MAX_KEYLEN); ec_block(dev->key, data, MAX_KEYLEN); if (copy_to_user(&buf[i], data, MAX_KEYLEN)) { return -EFAULT; } i += MAX_KEYLEN; } return 0; } Could someone help me? I thought the thing was in linux/string.h, but I get the error just the same. I'm using kernel 2.6.37-rc1 (I'm doing in in user-mode-linux, which works only since 2.6.37-rc1). Any help is greatly appreciated. # Context dependent makefile that can be called directly and will invoke itself # through the kernel module building system. KERNELDIR=/usr/src/linux ifneq ($(KERNELRELEASE),) EXTRA_CFLAGS+=-I $(PWD) -ARCH=um obj-m := main.o else KERNELDIR ?= /lib/modules/$(shell uname -r)/build PWD = $(shell pwd) all: $(MAKE) V=1 ARCH=um -C $(KERNELDIR) M=$(PWD) modules clean: rm -rf Module.symvers .*.cmd *.ko .*.o *.o *.mod.c .tmp_versions *.order endif

    Read the article

  • Adding Icons next to items in Navigation Drawer

    - by DunriteJW
    I have been trying to figure this out for quite some time right now. I've looked all over this site and many others, and can't find anything that works. I simply want icons next to each item in my navigation drawer. I am currently using the method that Google's navigation drawer sample app uses. in the MainActivity.java I have the following: mColorTitles = getResources().getStringArray(R.array.colors_array); mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout); mDrawerList = (ListView) findViewById(R.id.left_drawer); mColorIcons = getResources().getStringArray(R.array.color_icons); adapter = new ArrayAdapter<String>(this, R.layout.drawer_list_item, mColorTitles); // set a custom shadow that overlays the main content when the drawer opens mDrawerLayout.setDrawerShadow(R.drawable.drawer_shadow, GravityCompat.START); // set up the drawer's list view with items and click listener mDrawerList.setAdapter(adapter); mDrawerList.setOnItemClickListener(new DrawerItemClickListener()); my drawer_list_item.xml: <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/text1" android:layout_width="match_parent" android:layout_height="match_parent" android:textAppearance="?android:attr/textAppearanceListItemSmall" android:gravity="center_vertical" android:paddingLeft="5dp" android:paddingRight="16dp" android:textColor="#000" android:background="?android:attr/activatedBackgroundIndicator" android:minHeight="?android:attr/listPreferredItemHeightSmall"/> it currently just makes the navigation drawer display the color titles from the array. I have the icons that I want in another array, and they follow the exact same order as I want them associated with the colors. I just have no idea how to even begin inserting the icons from that array into the navigation items if it helps, here's what my arrays look like in my strings.xml (not full code) <string-array name="colors_array"> <item>Home</item> <item>Cherry</item> <item>Crimson</item> ... <array name="color_icons"> <item>@drawable/homeicon</item> <item>@drawable/cherryicon</item> <item>@drawable/crimsonicon</item> ... I've tried putting a drawable in the drawer_list_item, which works, but (of course) it always puts the same one in there. I could not think of a way to change it according to the color. I am relatively new to android programming, so if I am missing something simple, I'm sorry. If you could help me out, I would greatly appreciate it, as this is basically the last thing I need to do before I publish my application to the Play Store. Thanks in advance!
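
    One common way to get per-row icons, shown here only as a hedged sketch, is to swap the stock ArrayAdapter for a small custom adapter whose getView() sets both the title and a drawable. It assumes drawer_list_item.xml is changed into a layout containing an ImageView (given the hypothetical id "icon") next to the existing TextView, and that the color_icons <array> is read as a TypedArray of drawable ids rather than with getStringArray():

        // Custom adapter sketch: same titles as before, plus a parallel array of drawable ids.
        public class DrawerAdapter extends ArrayAdapter<String> {
            private final int[] iconIds;

            public DrawerAdapter(Context context, String[] titles, int[] iconIds) {
                super(context, R.layout.drawer_list_item, android.R.id.text1, titles);
                this.iconIds = iconIds;
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View row = super.getView(position, convertView, parent); // inflates/recycles and sets the text
                ImageView icon = (ImageView) row.findViewById(R.id.icon); // hypothetical ImageView in the row layout
                icon.setImageResource(iconIds[position]);                 // icons follow the same order as the titles
                return row;
            }
        }

        // Reading the drawable ids from the <array> resource (call from onCreate() before setAdapter):
        private int[] loadIconIds() {
            TypedArray icons = getResources().obtainTypedArray(R.array.color_icons);
            int[] ids = new int[icons.length()];
            for (int i = 0; i < icons.length(); i++) {
                ids[i] = icons.getResourceId(i, 0);
            }
            icons.recycle();
            return ids;
        }

        // then: mDrawerList.setAdapter(new DrawerAdapter(this, mColorTitles, loadIconIds()));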

    Read the article

  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 stream from an IP camera, packed in RTP frames. I want to get the raw H.264 data into a file so I can convert it with ffmpeg. I found out that the raw H.264 file has to look like this:

        00 00 01 [SPS]
        00 00 01 [PPS]
        00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]   // payload always without the first 2 bytes -> NAL
        [PAYLOAD RTP Frame 2]
        [... until a PAYLOAD frame with the marker bit is received]
        // from here on it is a new video frame
        00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]
        ...

    I get the SPS and the PPS from the Session Description Protocol of my preceding RTSP communication. Additionally, the camera sends the SPS and the PPS in two separate messages before starting the video stream itself, so I capture the messages in this order: 1. the preceding RTSP communication (including the SDP with SPS and PPS); 2. an RTP frame with payload 67 42 80 28 DA 01 40 16 C4 (this is the SPS); 3. an RTP frame with payload 68 CE 3C 80 (this is the PPS); 4. RTP frames with payload (the video data). Then come some frames with payload and, at some point, an RTP frame with the marker bit = 1. This means (if I got it right) that I have a complete video frame. After this I write the prefix sequence (00 00 01) and the NAL from the payload again and carry on with the same procedure. My camera sends the SPS and the PPS again after every 8 complete video frames (again in two RTP frames, as in the example above). I know that the PPS in particular can change mid-stream, but that's not the problem. My questions are: 1. Do I need to write the SPS/PPS every 8th video frame? If my SPS and PPS don't change, should it be enough to write them once at the very beginning of the file? 2. How do I distinguish between SPS/PPS and normal RTP frames? In my C++ code that parses the transmitted data I need to tell the RTP frames carrying normal payload apart from the ones carrying the SPS/PPS. How can I distinguish them? The SPS/PPS frames are usually much smaller, but that's not a safe thing to rely on. If I ignore them I need to know which data I can throw away, and if I write them I need to put the 00 00 01 prefix in front of them. Or is it a fixed rule that they occur every 8th video frame?
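
    On question 2, the usual way to tell the packets apart (a sketch under the assumption of single-NAL-unit packets, i.e. no FU-A fragmentation) is to look at the NAL unit type in the low five bits of the first payload byte: 7 is an SPS, 8 is a PPS, 5 is an IDR slice. That matches the captured payloads above, since 0x67 & 0x1F = 7 and 0x68 & 0x1F = 8. Shown in Java for brevity; the same bit test drops straight into the asker's C++ parser:

        // Classify a single-NAL-unit RTP payload by its nal_unit_type (low 5 bits of byte 0).
        static String classifyNal(byte[] rtpPayload) {
            int nalUnitType = rtpPayload[0] & 0x1F;
            switch (nalUnitType) {
                case 7:  return "SPS";          // sequence parameter set (0x67 above)
                case 8:  return "PPS";          // picture parameter set  (0x68 above)
                case 5:  return "IDR slice";    // start of a decodable key frame
                default: return "other NAL type " + nalUnitType;
            }
        }

    If the SPS/PPS really never change, writing them once at the start of the file is generally enough for question 1; the in-band repeats exist mainly so a decoder can tune in mid-stream.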

    Read the article

  • quartz: preventing concurrent instances of a job in jobs.xml

    - by Jason S
    This should be really easy. I'm using Quartz running under Apache Tomcat 6.0.18, and I have a jobs.xml file which sets up my scheduled job that runs every minute. What I would like to do, is if the job is still running when the next trigger time rolls around, I don't want to start a new job, so I can let the old instance complete. Is there a way to specify this in jobs.xml (prevent concurrent instances)? If not, is there a way I can share access to an in-memory singleton within my application's Job implementation (is this through the JobExecutionContext?) so I can handle the concurrency myself? (and detect if a previous instance is running) update: After floundering around in the docs, here's a couple of approaches I am considering, but either don't know how to get them to work, or there are problems. Use StatefulJob. This prevents concurrent access... but I'm not sure what other side-effects would occur if I use it, also I want to avoid the following situation: Suppose trigger times would be every minute, i.e. trigger#0 = at time 0, trigger #1 = 60000msec, #2 = 120000, #3 = 180000, etc. and the trigger#0 at time 0 fires my job which takes 130000msec. With a plain Job, this would execute triggers #1 and #2 while job trigger #0 is still running. With a StatefulJob, this would execute triggers #1 and #2 in order, immediately after #0 finishes at 130000. I don't want that, I want #1 and #2 not to run and the next trigger that runs a job should take place at #3 (180000msec). So I still have to do something else with StatefulJob to get it to work the way I want, so I don't see much of an advantage to using it. Use a TriggerListener to return true from vetoJobExecution(). Although implementing the interface seems straightforward, I have to figure out how to setup one instance of a TriggerListener declaratively. Can't find the docs for the xml file. Use a static shared thread-safe object (e.g. a semaphore or whatever) owned by my class that implements Job. I don't like the idea of using singletons via the static keyword under Tomcat/Quartz, not sure if there are side effects. Also I really don't want them to be true singletons, just something that is associated with a particular job definition. Implement my own Trigger which extends SimpleTrigger and contains shared state that could run its own TriggerListener. Again, I don't know how to setup the XML file to use this trigger rather than the standard <trigger><simple>...</simple></trigger>.
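
    For what it's worth, here is a minimal sketch of option 3 with skip semantics rather than queueing (queueing being the StatefulJob behaviour objected to above): a static AtomicBoolean shared by all instances of the job class. Only Job, JobExecutionContext and JobExecutionException are Quartz API; the class and method names are placeholders. Newer Quartz releases (2.x) also offer the @DisallowConcurrentExecution annotation, but like StatefulJob it delays the overlapping trigger instead of dropping it.

        import java.util.concurrent.atomic.AtomicBoolean;
        import org.quartz.Job;
        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;

        // Skip-if-still-running job: overlapping triggers return immediately instead of queueing.
        public class MyPollingJob implements Job {
            private static final AtomicBoolean RUNNING = new AtomicBoolean(false);

            public void execute(JobExecutionContext context) throws JobExecutionException {
                if (!RUNNING.compareAndSet(false, true)) {
                    return; // a previous instance is still busy, so this firing is simply skipped
                }
                try {
                    doWork(); // the actual once-a-minute work goes here
                } finally {
                    RUNNING.set(false); // release even if doWork() throws
                }
            }

            private void doWork() { /* ... */ }
        }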

    Read the article

  • hand coding a parser

    - by John Leidegren
    For all you compiler gurus: I want to write a recursive descent parser and I want to do it with just code. No generating lexers and parsers from some other grammar, and don't tell me to read the dragon book, I'll come around to that eventually. I want to get into the gritty details of implementing a lexer and parser for a reasonably simple language, say CSS, and I want to do it right. This will probably end up being a series of questions, but right now I'm starting with the lexer. Tokenization rules for CSS can be found here. I find myself writing code like this (hopefully you can infer the rest from this snippet):

        public CssToken ReadNext()
        {
            int val;
            while ((val = _reader.Read()) != -1)
            {
                var c = (char)val;
                switch (_stack.Top)
                {
                    case ParserState.Init:
                        if (c == ' ')
                        {
                            continue; // ignore
                        }
                        else if (c == '.')
                        {
                            _stack.Transition(ParserState.SubIdent, ParserState.Init);
                        }
                        break;
                    case ParserState.SubIdent:
                        if (c == '-')
                        {
                            _token.Append(c);
                        }
                        _stack.Transition(ParserState.SubNMBegin);
                        break;

    What is this called, and how far off am I from something reasonably well understood? I'm trying to balance something that is fair in terms of efficiency and easy to work with; using a stack to implement some kind of state machine is working quite well, but I'm unsure how to continue from here. What I have is an input stream from which I can read one character at a time. I don't do any lookahead right now; I just read a character and, depending on the current state, try to do something with it. I'd really like to get into the mindset of writing reusable snippets of code. The Transition method is currently meant to do that: it pops the current state off the stack and then pushes the arguments in reverse order. That way, when I write Transition(ParserState.SubIdent, ParserState.Init), it "calls" a subroutine SubIdent which, when complete, returns to the Init state. The parser will be implemented in much the same way. Currently, having everything in a single big method like this lets me easily return a token when I find one, but it also forces me to keep everything in one big method. Is there a nice way to split these tokenization rules into separate methods? Any input/advice on the matter would be greatly appreciated!
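
    On the last question (splitting tokenization rules into separate methods), a common hand-written-lexer shape is one private method per token class, each of which consumes characters and returns a token; a small dispatch method then replaces the one big switch. The sketch below is in Java rather than the asker's C#, and the type names and the simplified ident/number rules are invented for illustration, but the structure carries over directly:

        // Hand-written lexer sketch: nextToken() dispatches on the current character,
        // and each token class has its own method that consumes input and returns a token.
        final class CssLexer {
            private final String input;
            private int pos = 0;

            CssLexer(String input) { this.input = input; }

            Token nextToken() {
                while (pos < input.length() && Character.isWhitespace(input.charAt(pos))) {
                    pos++; // skip whitespace between tokens
                }
                if (pos >= input.length()) return new Token(TokenType.EOF, "");
                char c = input.charAt(pos);
                if (c == '.') { pos++; return new Token(TokenType.DOT, "."); }
                if (c == '{') { pos++; return new Token(TokenType.LBRACE, "{"); }
                if (Character.isLetter(c) || c == '-') return readIdent();
                if (Character.isDigit(c)) return readNumber();
                pos++;
                return new Token(TokenType.DELIM, String.valueOf(c));
            }

            // One rule, one method: an identifier here is letters, digits or '-'
            // (a simplification of the real CSS ident rule).
            private Token readIdent() {
                int start = pos;
                while (pos < input.length()
                        && (Character.isLetterOrDigit(input.charAt(pos)) || input.charAt(pos) == '-')) {
                    pos++;
                }
                return new Token(TokenType.IDENT, input.substring(start, pos));
            }

            private Token readNumber() {
                int start = pos;
                while (pos < input.length() && Character.isDigit(input.charAt(pos))) {
                    pos++;
                }
                return new Token(TokenType.NUMBER, input.substring(start, pos));
            }
        }

        record Token(TokenType type, String text) {}
        enum TokenType { IDENT, NUMBER, DOT, LBRACE, DELIM, EOF }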

    Read the article

  • nhibernate subclass in code

    - by Antonio Nakic Alfirevic
    I would like to set up table-per-classhierarchy inheritance in nhibernate thru code. Everything else is set in XML mapping files except the subclasses. If i up the subclasses in xml all is well, but not from code. This is the code i use - my concrete subclass never gets created:( //the call NHibernate.Cfg.Configuration config = new NHibernate.Cfg.Configuration(); SetSubclass(config, typeof(TAction), typeof(tActionSub1), "Procedure"); //the method public static void SetSubclass(Configuration configuration, Type baseClass, Type subClass, string discriminatorValue) { PersistentClass persBaseClass = configuration.ClassMappings.Where(cm => cm.MappedClass == baseClass).Single(); SingleTableSubclass persSubClass = new SingleTableSubclass(persBaseClass); persSubClass.ClassName = subClass.AssemblyQualifiedName; persSubClass.DiscriminatorValue = discriminatorValue; persSubClass.EntityPersisterClass = typeof(SingleTableEntityPersister); persSubClass.ProxyInterfaceName = (subClass).AssemblyQualifiedName; persSubClass.NodeName = subClass.Name; persSubClass.EntityName = subClass.FullName; persBaseClass.AddSubclass(persSubClass); } the Xml mapping looks like this: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Riz.Pcm.Domain.BusinessObjects" assembly="Riz.Pcm.Domain"> <class name="Riz.Pcm.Domain.BusinessObjects.TAction, Riz.Pcm.Domain" table="dbo.tAction" lazy="true"> <id name="Id" column="ID"> <generator class="guid" /> </id> <discriminator type="String" formula="(select jt.Name from TJobType jt where jt.Id=JobTypeId)" insert="true" force="false"/> <many-to-one name="Session" column="SessionID" class="TSession" /> <property name="Order" column="Order1" /> <property name="ProcessStart" column="ProcessStart" /> <property name="ProcessEnd" column="ProcessEnd" /> <property name="Status" column="Status" /> <many-to-one name="JobType" column="JobTypeID" class="TJobType" /> <many-to-one name="Unit" column="UnitID" class="TUnit" /> <bag name="TActionProperties" lazy="true" cascade="all-delete-orphan" inverse="true" > <key column="ActionID"></key> <one-to-many class="TActionProperty"></one-to-many> </bag> <!--<subclass name="Riz.Pcm.Domain.tActionSub" discriminator-value="ZPower"></subclass>--> </class> </hibernate-mapping> What am I doing wrong? I can't find any examples on google:(

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~, (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start the with the p4 head revision as the inital commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initalizing it, the initial commit of remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/* revision X of other files in //depot/prod/*, where X is the head revision and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.

    Read the article

  • I have a php form dropdown menu that needs to send information

    - by shinjuo
    I have a dropdown menu that is filled by a mysql database. I need to select one and have it send the information for use on the next page after clicking submit. It does populate the drop down menu like it is supposed to it just does not seem to catch the data on the next page. Here is what I have: removeMain.php <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title></title> </head> <form action="remove.php" method="post"> <?php $link = mysql_connect('********', '********', '*********'); if (!$link){ die('Could not connect: ' . mysql_error()); } mysql_select_db("********", $link); $res = mysql_query("SELECT * FROM cardLists order by cardID") or die(mysql_error()); echo "<select name = CardID>"; while($row=mysql_fetch_assoc($res)) { echo "<option value=$row[ID]>$row[cardID]</a></option>"; } echo "</select>"; ?> Amount to Remove: <input type="text" name="Remove" /> <input type="submit" /> </form> <body> </body> </html> remove.php <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title></title> </head> <body> <?php $link = mysql_connect('*********', '*********', '*********'); if (!$link){ die('Could not connect: ' . mysql_error()); } mysql_select_db("***********y", $link); $query = sprintf("UPDATE cardLists SET AmountLeft = AmountLeft - %s WHERE cardID = '%s'", mysql_real_escape_string($_POST["Remove"]), mysql_real_escape_string($_POST["CardID"])); mysql_query($query); mysql_close($link); ?> <br /> <a href="removeMain.php"> <input type="submit" name="return" id="return" value="Update More" /></a> <a href="index.php"> <input type="submit" name="main" id="main" value="Return To Main" /></a> </body> </html>

    Read the article

  • Deleting items from a ListView using a custom BaseAdapter

    - by HXCaine
    I am using a customised BaseAdapter to display items on a ListView. The items are just strings held in an ArrayList. The list items have a delete button on them (big red X), and I'd like to remove the item from the ArrayList, and notify the ListView to update itself. However, every implementation I've tried gets mysterious position numbers given to it, so for example clicking item 2's delete button will delete item 5's. It seems to be almost entirely random. One thing to note is that elements may be repeated, but must be kept in the same order. For example, I can have "Irish" twice, as elements 3 and 7. My code is below: private static class ViewHolder { TextView lang; int position; } public View getView(final int position, View convertView, ViewGroup parent) { ViewHolder holder; if (convertView == null) { convertView = mInflater.inflate(R.layout.language_link_row, null); holder = new ViewHolder(); holder.lang = (TextView)convertView.findViewById(R.id.language_link_text); holder.position = position; final ImageView deleteButton = (ImageView) convertView.findViewById(R.id.language_link_cross_delete); deleteButton.setOnClickListener(this); convertView.setTag(holder); deleteButton.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } holder.lang.setText(mLanguages.get(position)); return convertView; } I later attempt to retrieve the deleted element's position by grabbing the tag, but it's always the wrong position in the list. There is no noticeable pattern to the position given here, it always seems random. // The delete button's listener public void onClick(View v) { ViewHolder deleteHolder = (ViewHolder) v.getTag(); int pos = deleteHolder.position; ... ... ... } I would be quite happy to just delete the item from the ArrayList and have the ListView update itself, but the position I'm getting is incorrect so I can't do that. Please note that I did, at first, have the deleteButton clickListener inside the getView method, and used 'position' to delete the value, but I had the same problem. Any suggestions appreciated, this is really irritating me.
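
    One likely culprit, offered as a guess from the code shown: holder.position is only assigned inside the convertView == null branch, so recycled rows keep whatever position they were first inflated with, which would look exactly like random deletions. A sketch of getView()/onClick() with the tag refreshed on every call:

        public View getView(final int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                convertView = mInflater.inflate(R.layout.language_link_row, null);
                holder = new ViewHolder();
                holder.lang = (TextView) convertView.findViewById(R.id.language_link_text);
                ImageView deleteButton =
                        (ImageView) convertView.findViewById(R.id.language_link_cross_delete);
                deleteButton.setOnClickListener(this);
                convertView.setTag(holder);
                deleteButton.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            holder.position = position;                 // update on every call, recycled or not
            holder.lang.setText(mLanguages.get(position));
            return convertView;
        }

        // The delete button's listener can then trust the tag:
        public void onClick(View v) {
            ViewHolder holder = (ViewHolder) v.getTag();
            mLanguages.remove(holder.position);
            notifyDataSetChanged();                     // tell the ListView to rebuild its rows
        }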

    Read the article

  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi all, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing, a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. I can do this in two ways:

        uint32_t w = 0x11223344;
        uint8_t a = (w & 0xff000000) >> 24;
        uint8_t b = (w & 0x00ff0000) >> 16;
        uint8_t c = (w & 0x0000ff00) >> 8;
        uint8_t d = (w & 0x000000ff);

    However, part of me thinks that isn't particularly efficient. I thought a better way would be to use a union representation, like so:

        typedef union {
            struct {
                uint8_t d;
                uint8_t c;
                uint8_t b;
                uint8_t a;
            };
            uint32_t n;
        } word32;

    Using this method I can assign w.n = 0x11223344; and then access the various parts as I require (w.a == 0x11 on a little-endian system). However, at this stage I come up against endianness issues: on big-endian systems my struct is defined incorrectly, so I need to re-order the word before it is passed in. This I can do without too much difficulty. My question, then, is: are the bitwise ANDs and shifts efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought the union would be more efficient, as it essentially converts to memory offsets, like so:

        mov eax, [r9+8]

    Would a compiler realise that this is what is happening in the bit-shift case above? If it matters, I'm using C99, and specifically my compiler is clang (LLVM). Thanks in advance.

    Read the article

  • Is this a situation where Qt Model/View architecture is not useful?

    - by csmithmaui
    Hi, I am writing a GUI based application where I read a string of values from serial port every few seconds and I need to display most of the values in some type graphical indicator(I was thinking of QprogressBar maybe) that displays the range and the value. Some of the other data that I am parsing from the string are the date and fault codes. Also, the data is hierarchical. I wanted to use the model/view architecture of Qt because I have been interested in MVC stuff for a while but have never quite wrapped my brain around how to implement it very well. As of now, I have subclassed QAbstractItemModel and in the model I read the serial port and wrap the items parsed from the string in a Tree data structure. I can view all of the data in a QtreeView with no issues. I have also began to subclass QAbstractItemView to build my custom view with all of the Graphical Indicators and such. This is where I am getting stuck. It seems to me that in order for me to design a view that knows how to display my custom model the view needs to know exactly how all of the data in the model is organized. Doesn't that defeat the purpose of Model/View? The QTreeView I tested the model with is basically just displaying the model as it is setup in the Tree structure but I don't want to do that because the data is not all of the same type. Is the type of data or the way you would like to present it to the user a determining factor in whether or not you should use this architecture? I always assumed it was just always better to design in an MVC style. It seems to me like it might have been better to just subclass QWidget and then read in from the serial port and update all of subwidgets(graphical indicators, labels, etc...) from the subclass. Essentially, do everything in one class. Does anybody understand this issue that can explain to me either what I am missing or why I shouldn't be doing it this way. As of now I am a little confused. Thanks so much for any help!

    Read the article

  • iPhone: value of selectedIndex for tab should be consistent, but isn't

    - by Janine
    This should be so simple... but something screwy is happening. My setup looks like this: MainViewController Tab Bar Controller 4 tabs, each of which loads WebViewController My AppDelegate contains an ivar, tabBarController, which is connected to the tab bar controller (this was all set up in Interface Builder). The leftmost tab is marked "selected" in IB. Within the viewWillAppear method in WebViewController, I need to know which tab was just selected so I can load the correct URL. I do this by switching on appDelegate.tabBarController.selectedIndex. When the app first runs and the leftmost tab is selected, selectedIndex is a large garbage value. After that, I get values from 0 to 3, which is as it should be, but they are in random order. Not only that, but each tab I touch reports a different value each time. This app is extremely simple right now and I can't imagine what I could have done to make things go this wrong. Has anyone seen (and hopefully solved) this behavior? Update: we have a request for code. There's not much to see. The tab bar controller gets loaded in applicationDidFinishLaunching: [self.mainViewController view]; //force nib to load [self.window addSubview:self.mainViewController.tabBarController.view] There is currently no code whatsoever in MainViewController.m other than the synthesize and release for tabBarController. From WebVewController.m: - (void)viewWillAppear:(BOOL)_animation { [super viewWillAppear:_animation]; NSURL *url; switch([S_UIDelegate mainViewController].tabBarController.selectedIndex) { case 0: url = [NSURL URLWithString:@"http://www.cnn.com"]; break; case 1: url = [NSURL URLWithString:@"http://www.facebook.com"]; break; case 2: url = [NSURL URLWithString:@"http://www.twitter.com"]; break; case 3: url = [NSURL URLWithString:@"http://www.google.com"]; break; default: url = [NSURL URLWithString:@"http://www.msnbc.com"]; } NSURLRequest *request = [NSURLRequest requestWithURL:url]; [webView loadRequest:request]; } This is where I'm seeing the random values.

    Read the article

  • Button.MouseDown

    - by Gilad
    Hi Guys, I'm relatively new with WPF. I'm trying to understand the difference between MouseDownEvent and PreviewMouseDownEvent. I understand the WPF event strategies and i understand that the MouseDown event is a bubbling event and the PreviewMouseDown is a tunneling event. I also understand the order of which these events are being fired - according to this MSDN overview http://msdn.microsoft.com/en-us/library/ms742806.aspx#routing (there is a diagram with example there). So i tried to code some my self, check this for example: <Grid x:Name="grid" Width="250"> <StackPanel Mouse.MouseDown="StackPanel_MouseDown" PreviewMouseDown="StackPanel_PreviewMouseDown"> <WPFVisualizerExample:MyButton x:Name="B1" PreviewMouseDown="B1_PreviewMouseDown" MouseDown="B1_MouseDown" Margin="5,5,5,5"> <WPFVisualizerExample:MyButton x:Name="B2" PreviewMouseDown="B2_PreviewMouseDown" MouseDown="B2_MouseDown" Margin="5,5,5,5"> <WPFVisualizerExample:MyButton x:Name="B3" PreviewMouseDown="B3_PreviewMouseDown" MouseDown="B3_MouseDown" Margin="5,5,5,5">Click Me</WPFVisualizerExample:MyButton> </WPFVisualizerExample:MyButton> </WPFVisualizerExample:MyButton> </StackPanel> </Grid> I have an event handler for each of the events (the preview and non-preview) and i wanted to see what is happening, which of the event is being thrown (i have a message box shown for each event). The 'MyButton' user control simply extends the base Button and override the OnMouseDown and OnPreviewMouseDown to set the e.Handled false: protected override void OnMouseDown(System.Windows.Input.MouseButtonEventArgs e) { base.OnMouseDown(e); e.Handled = false; } protected override void OnPreviewMouseDown(System.Windows.Input.MouseButtonEventArgs e) { base.OnPreviewMouseDown(e); e.Handled = false; } (tried with this and without this). According to the MSDN overview (in the link above), if i have 3 elements then the events route should be as follows: PreviewMouseDown (tunnel) on root element. PreviewMouseDown (tunnel) on intermediate element #1. PreviewMouseDown (tunnel) on source element #2. MouseDown (bubble) on source element #2. MouseDown (bubble) on intermediate element #1. MouseDown (bubble) on root element. So I expected the the message boxes to be shown according to the above. From some reason - which I don't understand only the preview events are being thrown (according to what the MSDN says Preview_B1=Preview_B2=Preview_B3). My expectations were: Preview_B1=Preview_B2=Preview_B3=NonPreview_B3=NonPreview_B2=NonPreview_B1. But the non-preview events are not being thrown at all. So basically I don't understand the route of the events, from MSDN overview I understood that the route starts from the root element, goes down (tunnel) to the source element and then back up (bubble) to the root element, but this is not what happening in practice. It is really important for me to understand how this events are working, i probably miss-understand something basic here, your help will be appreciated. THANX!! -Gili

    Read the article

  • POD global object initialization

    - by paercebal
    I got bitten by a bug today. The following source can be copied/pasted (and then compiled) into a main.cpp file:

        #include <iostream>

        // The point of SomeGlobalObject is for its
        // constructor to be launched before the main
        struct SomeGlobalObject
        {
            SomeGlobalObject() ;
        } ;

        // Which explains the global object
        SomeGlobalObject oSomeGlobalObject ;

        // A POD... I was hoping it would be constructed at
        // compile time when using an argument list
        struct MyPod
        {
            short m_short ;
            const char * const m_string ;
        } ;

        // declaration/initialization of a MyPod array
        MyPod myArrayOfPod[] =
        {
            { 1, "Hello" },
            { 2, "World" },
            { 3, " !" }
        } ;

        // declaration/initialization of an array of array of void *
        void * myArrayOfVoid[][2] =
        {
            { (void *)1, "Hello" },
            { (void *)2, "World" },
            { (void *)3, " !" }
        } ;

        // constructor of the global object... launched BEFORE main
        SomeGlobalObject::SomeGlobalObject()
        {
            std::cout << "myArrayOfPod[0].m_short : " << myArrayOfPod[0].m_short << std::endl ;
            std::cout << "myArrayOfVoid[0][0] : " << myArrayOfVoid[0][0] << std::endl ;
        }

        // main... What else ?
        int main(int argc, char* argv[])
        {
            return 0 ;
        }

    MyPod being a POD, I believed there would be no constructors, only initialization at compile time. Thus, the global object SomeGlobalObject would have no problem using the global array of PODs upon its own construction. The problem is that in real life nothing is so simple: on Visual C++ 2008 (I did not test other compilers), upon execution myArrayOfPod is not initialized, even though myArrayOfVoid is initialized. So my question is: are C++ compilers not supposed to initialize global PODs (including POD structures) at compilation time? Note that I know global variables are evil, and I know that one can't be sure of the order of creation of global variables declared in different compilation units. The problem here is really the POD C-like initialization, which seems to call a constructor (the default, compiler-generated one?). And to make everyone happy: this is on debug. On release, the global array of PODs is correctly initialized.

    Read the article

  • NHibernate criteria query question

    - by Chris
    I have 3 related objects (Entry, GamePlay, Prize) and I'm trying to find the best way to query them for what I need using NHibernate. When a request comes in, I need to query the Entries table for a matching entry and, if found, get a) the latest game play along with the first game play that has a prize attached. Prize is a child of GamePlay and each Entry object has a GamePlays property (IList). Currently, I'm working on a method that pulls the matching Entry and eagerly loads all game plays and associated prizes, but it seems wasteful to load all game plays just to find the latest one and any that contain a prize. Right now, my query looks like this: var entry = session.CreateCriteria<Entry>() .Add(Restrictions.Eq("Phone", phone)) .AddOrder(Order.Desc("Created")) .SetFetchMode("GamePlays", FetchMode.Join) .SetMaxResults(1).UniqueResult<Entry>(); Two problems with this: It loads all game plays up front. With 365 days of data, this could easily balloon to 300k of data per query. It doesn't eagerly load the Prize child property for each game. Therefore, my code that loops through the GamePlays list looking for a non-null Prize must make a call to load each Prize property I check. I'm not an nhibernate expert, but I know there has to be a better way to do this. Ideally, I'd like to do the following (pseudocode): entry = findEntry(phoneNumber) lastPlay = getLatestGamePlay(Entry) firstWinningPlay = getFirstWinningGamePlay(Entry) The end result of course is that I have the entry details, the latest game play, and the first winning game play. The catch is that I want to do this in as few database calls as possible, otherwise I'd just execute 3 separate queries. The object definitions look like: public class Entry { public Guid Id {get;set;} public string Phone {get;set;} public IList<GamePlay> GamePlays {get;set;} // ... other properties } public class GamePlay { public Guid Id {get;set;} public Entry Entry {get;set;} public Prize Prize {get;set;} // ... other properties } public class Prize { public Guid Id {get;set;} // ... other properties } The proper NHibernate mappings are in place, so I just need help figuring out how to set up the criteria query (not looking for HQL, don't use it).

    Read the article

  • Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kind of problems? To clearify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an adminstrator or not. However, after I while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation: class CreateUserTypes < ActiveRecord::Migration def self.up create_table :user_types do |t| t.string :name, :nil => false, :unique => true end #Create basic types (can not put in seed, because of future migration dependency) UserType.create!(:name => "BASIC") UserType.create!(:name => "MODERATOR") UserType.create!(:name => "ADMINISTRATOR") end def self.down drop_table :user_types end end class AddTypeIdToUsers < ActiveRecord::Migration def self.up add_column :users, :type_id, :integer #Determine type via the admin flag basic = UserType.find_by_name("BASIC") admin = UserType.find_by_name("ADMINISTRATOR") User.all.each {|u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id)} #Remove the admin flag remove_column :users, :admin #Add foreign key execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)" end def self.down #Re-add the admin flag add_column :users, :admin, :boolean, :default => false #Reset the admin flag (this is the problematic update code) admin = UserType.find_by_name("ADMINISTRATOR") execute "update users set admin=true where type_id=#{admin.id}" #Remove foreign key constraint execute "alter table users drop foreign key fk_user_type_id" #Drop the type_id column remove_column :users, :type_id end end As you can see there are two problematic parts. First the row creation part in the first model, which is necessary if I would like to run all migrations in a row, then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated. My initial thought was to use heuristics to do the sorting, like: there's roughly a 60-40% ratio in weight bearing between the front and hind paws; the hind paws are generally smaller in surface area; the paws are (often) spatially divided into left and right. However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received to my question about peak detection within the paw, I'm hoping there are more advanced solutions for sorting the paws, especially because the pressure distribution and its progression are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored. They are only useful because they can help in recognizing a pattern, but they won't be analyzed. And for anyone interested, I'm keeping a blog with all the updates regarding the project!

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client I first serialize the object in a memory stream, then compress it using the Gzip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. Ive read that this exception can be caused by new() a bunch of objects, or holding on to a bunch of strings. Both of these are happening during the deserialization process. So my question is: How do I prevent the exception (any good strategies)? The client needs all of the data, and ive trimmed down the number of strings as much as i can. edit: here is the code i am using to serialize/compress (implemented as extension methods) public static byte[] SerializeObject<T>(this object obj, T serializer) where T: XmlObjectSerializer { Type t = obj.GetType(); if (!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes; using (MemoryStream stream = new MemoryStream()) { serializer.WriteObject(stream, obj); initialBytes = stream.ToArray(); } return initialBytes; } public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer { Type t = obj.GetType(); if(!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes = obj.SerializeObject(serializer); byte[] compressedBytes; using (MemoryStream stream = new MemoryStream(initialBytes)) { using (MemoryStream output = new MemoryStream()) { using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress)) { Pump(stream, zipper); } compressedBytes = output.ToArray(); } } return compressedBytes; } internal static void Pump(Stream input, Stream output) { byte[] bytes = new byte[4096]; int n; while ((n = input.Read(bytes, 0, bytes.Length)) != 0) { output.Write(bytes, 0, n); } } And here is my code for decompress/deserialize: public static T DeSerializeObject<T,TU>(this byte[] serializedObject, TU deserializer) where TU: XmlObjectSerializer { using (MemoryStream stream = new MemoryStream(serializedObject)) { return (T)deserializer.ReadObject(stream); } } public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU: XmlObjectSerializer { byte[] decompressedBytes; using(MemoryStream stream = new MemoryStream(compressedBytes)) { using(MemoryStream output = new MemoryStream()) { using(GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress)) { ObjectExtensions.Pump(zipper, output); } decompressedBytes = output.ToArray(); } } return decompressedBytes.DeSerializeObject<T, TU>(deserializer); } The object that I am passing is a wrapper object, it just contains all the relevant objects that hold the data. The number of objects can be a lot (depending on the reports date range), but ive seen as many as 25k strings. One thing i did forget to mention is I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.

    Read the article

  • Subroutine & GoTo design

    - by sub
    I have a strange question concerning subroutines. As I'm creating a minimal language and don't want to add high-level loops like while or for, I was planning on just adding gotos to keep it Turing-complete. Then I thought: eww, gotos. I wouldn't want to program in that language if I had to use gotos so often. So I thought about adding subroutines instead. I see the difference as the following: gotos go to (captain obvious) a previously defined point and continue executing the program from there; this leads to hardly understandable and buggy code, I think that's a fact. Subroutines are similar: you define their starting point somewhere, and when you call them the program jumps there, but the subroutine can go back to the point it was called from with return. Okay, so why didn't I just add the more function-like, nicer-looking subroutines? Because: in order to make return work when I call subroutines from within subroutines from within other subroutines, I'd have to use a stack with the point the currently running subroutine was called from on top. That would mean that if I create loops using the subroutines, I'd end up with an extremely memory-eating, overflowing stack of return locations. Not good. Don't think of my subroutines as functions: they are just gotos that return to the point they were called from; they don't actually give back values like the return x; statement in nearly all of today's languages. Now to my actual questions: How can I solve the above problem of the stack overflowing when loops are built with subroutines? Do I have to add a separate goto language construct without the return option? Assembler doesn't have loops, but from what I have seen it has myJumpPoint:, jnz, jz, retn. That tells me there must also be a stack containing all the return locations. Am I right about that? What about long-running loops then? Don't they overflow the stack/eat memory? Am I getting the retn instruction in assembler totally wrong? If yes, please explain it to me.
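
    A tiny sketch may make the distinction concrete (invented opcodes, Java used only for illustration): a plain jump only moves the instruction pointer, so loops built from backward jumps never grow anything; only a call pushes a return address, and the matching return pops it. That is also what the assembler names mean: jnz/jz are conditional jumps with no stack involved, while retn pops a return address that a corresponding call pushed earlier, so long-running loops written with jumps do not eat memory.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Minimal interpreter core: each instruction is {opcode, operand}.
        final class MiniVm {
            static final int JUMP = 0, CALL = 1, RET = 2, HALT = 3;

            void run(int[][] program) {
                Deque<Integer> returnStack = new ArrayDeque<>(); // grows only with unreturned CALLs
                int ip = 0;
                while (true) {
                    int[] ins = program[ip];
                    switch (ins[0]) {
                        case JUMP: ip = ins[1]; break;            // a loop: no stack growth at all
                        case CALL: returnStack.push(ip + 1);      // remember where to come back to
                                   ip = ins[1]; break;
                        case RET:  ip = returnStack.pop(); break; // balanced with its CALL
                        case HALT: return;
                        default:   throw new IllegalStateException("unknown opcode " + ins[0]);
                    }
                }
            }
        }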

    Read the article

  • sorting hashes inside an array on values

    - by srk
    @aoh =( { 3 => 15, 4 => 8, 5 => 9, }, { 3 => 11, 4 => 25, 5 => 6, }, { 3 => 5, 4 => 18, 5 => 5, }, { 0 => 16, 1 => 11, 2 => 7, }, { 0 => 21, 1 => 13, 2 => 31, }, { 0 => 11, 1 => 14, 2 => 31, }, ); I want the hashes in each array index sorted in reverse order based on values.. @sorted = sort { ........... please fill this..........} @aoh; expected output @aoh =( { 4 => 8, 5 => 9, 3 => 15, }, { 5 => 6, 3 => 11, 4 => 25, }, { 5 => 5, 3 => 5, 4 => 18, }, { 2 => 7, 1 => 11, 0 => 16, }, { 1 => 13, 0 => 21, 2 => 31, }, { 0 => 11, 1 => 14, 2 => 31, }, ); Please help.. Thanks in advance.. Stating my request again: I only want the hashes in each array index to be sorted by values.. i dont want the array to be sorted..

    Read the article

  • For loop from assembly to C

    - by FranXh
    I have a bomb-lab project where I need to defuse certain phases by finding "passphrases" that will defuse the bomb. Right now I have been working on phase_2, for which the assembly code is shown below. phase_2 requires six numbers as input, which I need to find in order to defuse this phase. I analyzed the assembly and came up with the C code below, which covers the instructions from 40101c to 401044. It is basically a for loop that checks that the elements the user inputs satisfy t[0]==t[3], t[1]==t[4] and t[2]==t[5]. According to my logic, the user can input any 6 numbers as long as that condition is satisfied; say, 1, 2, 3, 1, 2, 3 would be a valid passphrase. However, this solution does not convince me for some reason. Am I doing something wrong?

        0000000000400ffc <phase_2>:
          400ffc: 48 89 5c 24 e0          mov    %rbx,-0x20(%rsp)
          401001: 48 89 6c 24 e8          mov    %rbp,-0x18(%rsp)
          401006: 4c 89 64 24 f0          mov    %r12,-0x10(%rsp)
          40100b: 4c 89 6c 24 f8          mov    %r13,-0x8(%rsp)
          401010: 48 83 ec 48             sub    $0x48,%rsp
          401014: 48 89 e6                mov    %rsp,%rsi
          401017: e8 65 0a 00 00          callq  401a81 <read_six_numbers>
          40101c: 48 89 e5                mov    %rsp,%rbp
          40101f: 4c 8d 6c 24 0c          lea    0xc(%rsp),%r13
          401024: 41 bc 00 00 00 00       mov    $0x0,%r12d
          40102a: 48 89 eb                mov    %rbp,%rbx
          40102d: 8b 45 0c                mov    0xc(%rbp),%eax
          401030: 39 45 00                cmp    %eax,0x0(%rbp)
          401033: 74 05                   je     40103a <phase_2+0x3e>
          401035: e8 2d 09 00 00          callq  401967 <_GLOBAL_RESET_>
          40103a: 44 03 23                add    (%rbx),%r12d
          40103d: 48 83 c5 04             add    $0x4,%rbp
          401041: 4c 39 ed                cmp    %r13,%rbp
          401044: 75 e4                   jne    40102a <phase_2+0x2e>
          401046: 45 85 e4                test   %r12d,%r12d
          401049: 75 05                   jne    401050 <phase_2+0x54>
          40104b: e8 17 09 00 00          callq  401967 <_GLOBAL_RESET_>
          401050: 48 8b 5c 24 28          mov    0x28(%rsp),%rbx
          401055: 48 8b 6c 24 30          mov    0x30(%rsp),%rbp
          40105a: 4c 8b 64 24 38          mov    0x38(%rsp),%r12
          40105f: 4c 8b 6c 24 40          mov    0x40(%rsp),%r13
          401064: 48 83 c4 48             add    $0x48,%rsp
          401068: c3                      retq

        for (int i = 0; i < 3; i++) {
            if (t[i] != t[i+3]) {
                explode();
            }
        }

    Read the article

  • handling NSStream events when using EASession in MonoTouch

    - by scotru
    Does anyone have an example of how to handle read and write NSStream events in Monotouch when working with accessories via EASession? It looks like there isn't a strongly typed delegate for this and I'm having trouble figuring out what selectors I need to handle on the delegates of my InputStream and OutputStream and what I actually need to do with each selector in order to properly fill and empty the buffers belonging to the EASession object. Basically, I'm trying to port Apple's EADemo app to Monotouch right now. Here's the Objective-C source that I think is relevant to this problem: / / asynchronous NSStream handleEvent method - (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode { switch (eventCode) { case NSStreamEventNone: break; case NSStreamEventOpenCompleted: break; case NSStreamEventHasBytesAvailable: [self _readData]; break; case NSStreamEventHasSpaceAvailable: [self _writeData]; break; case NSStreamEventErrorOccurred: break; case NSStreamEventEndEncountered: break; default: break; } } / low level write method - write data to the accessory while there is space available and data to write - (void)_writeData { while (([[_session outputStream] hasSpaceAvailable]) && ([_writeData length] > 0)) { NSInteger bytesWritten = [[_session outputStream] write:[_writeData bytes] maxLength:[_writeData length]]; if (bytesWritten == -1) { NSLog(@"write error"); break; } else if (bytesWritten > 0) { [_writeData replaceBytesInRange:NSMakeRange(0, bytesWritten) withBytes:NULL length:0]; } } } // low level read method - read data while there is data and space available in the input buffer - (void)_readData { #define EAD_INPUT_BUFFER_SIZE 128 uint8_t buf[EAD_INPUT_BUFFER_SIZE]; while ([[_session inputStream] hasBytesAvailable]) { NSInteger bytesRead = [[_session inputStream] read:buf maxLength:EAD_INPUT_BUFFER_SIZE]; if (_readData == nil) { _readData = [[NSMutableData alloc] init]; } [_readData appendBytes:(void *)buf length:bytesRead]; //NSLog(@"read %d bytes from input stream", bytesRead); } [[NSNotificationCenter defaultCenter] postNotificationName:EADSessionDataReceivedNotification object:self userInfo:nil]; } I'd also appreciate any architectural recommendations on how to best implement this in monotouch. For example, in the Objective C implementation these functions are not contained in any class--but in Monotouch would it make sense to make them members of my

    Read the article
