Search Results

Search found 2649 results on 106 pages for 'signal slot'.

Page 73 of 106

  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15km fibre optic line across which Fibre Channel and 10GbE are multiplexed (passive optical CWDM). For FC we have long distance lasers suitable for up to 40km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs, which can do at most 4Gb Fibre Channel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550, 1570, 1590 and 1610nm for FC and 1530nm for 10GbE.

    The problem is that the 4GbFC fabrics are almost never clean. Sometimes they are for a while, even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently on the order of 50-100 per 5 minutes with 1Gb/s of traffic.

    Optics

    Here is the power output of one port, summarized (collected using sfpshow on different switches), in uW (microwatts):

        SITE-A                               SITE-B
        FAB1  SW1  TX 1234.3  RX  49.1       SW3  1550nm (ko)  RX  95.2  TX 1175.6
        FAB2  SW2  TX 1422.0  RX 104.6       SW4  1610nm (ok)  RX  54.3  TX 1468.4

    What I find curious at this point is the asymmetry in the power levels. While SW2 transmits with 1422uW, which SW4 receives at 104.6uW, SW2 receives the SW4 signal (of similar original power) at only 54.3uW. Vice versa for SW1-SW3. Anyway, the SFPs have RX sensitivity down to -18dBm (ca. 20uW), so in any case it should be fine... but nothing is. Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550nm ones shown above with "ko"). The 1610nm ones apparently are OK; they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements, but for some reason I don't believe they will make things better, as the apparently good ones don't produce zero errors either.

    Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line. No idea why. That equipment was eliminated because of the problems, so we now only have: the long distance laser in the switch, a (new) 10m LC-SC monomode cable to the mux (for each fabric), the leased line, and the same thing reversed on the other side of the link.

    FC switches

    Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously):

        Area Number:              0
        Speed Level:              4G
        Fill Word(On Active)      0(Idle-Idle)
        Fill Word(Current)        0(Idle-Idle)
        AL_PA Offset 13:          OFF
        Trunk Port                ON
        Long Distance             LS
        VC Link Init              OFF
        Desired Distance          32 Km
        Reserved Buffers          70
        Locked L_Port             OFF
        Locked G_Port             OFF
        Disabled E_Port           OFF
        Locked E_Port             OFF
        ISL R_RDY Mode            OFF
        RSCN Suppressed           OFF
        Persistent Disable        OFF
        LOS TOV enable            OFF
        NPIV capability           ON
        QOS E_Port                OFF
        Port Auto Disable:        OFF
        Rate Limit                OFF
        EX Port                   OFF
        Mirror Port               OFF
        Credit Recovery           ON
        F_Port Buffers            OFF
        Fault Delay:              0(R_A_TOV)
        NPIV PP Limit:            126
        CSCTL mode:               OFF

    Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next or how to proceed? If we can't make 4GbFC work reliably, I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable. Oh, and BTW, we are in contact with every one of the manufacturers (FC switch, MUX, SFPs, ...). Except for the SFPs to be changed (some have been changed before), nobody has a clue. Brocade SAN Health says the fabric is OK. The MUX, well, it's passive; it's only a prism, nature at its best. Any shots in the dark?

    APPENDIX: Answers to your questions

    @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning, when we still had the active MUX, we once rented a long-distance laser to put into the switch directly in order to run tests for a day; during that day, of course, it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean rebuilding the entire SAN with those, only to test. Alternative SFPs, well, they're hard to come by just like that.

    @longneck: The line is rented. It's a dark fibre (9um monomode), so there's no one else on it. Sure there are splices. I can't go and look, but I have to trust they have been done correctly. As I said, the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself because it's way too expensive.

    @mdpc: What would be the "wrong" type of cable, according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah, I know there are the green ones where the fibre is cut off at a certain angle, etc. But we have the correct ones, as far as I know.

    Progress Report #1

    We have had two fabrics (= 2x2 switches) with Brocade 5100s on FabricOS 6.4.1 and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the long-distance ISLs (one in each fabric) it turned out that with FOS 6.4.1, setting a port to long distance issues warnings about the VC Init setting and consequently the fill word. But those are only warnings. FOS 7.0.2 requires you to modify VCI and the fill word for long distance links. Setting FOS 6.4.1 to the LS (long-distance, static distance) setting with the wrong VCI and fill word settings made the whole fabric inoperative (stuck in an SCN loop; use fabriclog -s to see it, you don't see it anywhere else, no port error counters or anything else increasing). Currently I'm giving the one fabric with the (IMHO) more correct settings a beating, and it seems to do fine, whereas the other one, without much traffic, still has errors here and there.

    In short: We have eliminated the active part of the MUX (the FC retimer). We are putting the long distance SFPs into the end equipment itself. Just to be sure, we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX. We are now trying out several long distance configs. It's almost black magic. Everything that happens is mostly empirical; no one seems to have a clue what the exact reasons are for doing something. ("We have tried this, and it didn't work, then we tried that and it worked, so we stuck with that." But no one really seems to know why.) I'll keep you updated.

    Progress Report #2

    We got the new lasers for one of the fabrics on warranty. It's ultra clean, even at 4GbFC. They're transmitting at roughly 2mW (3dBm), whereas the others are only at 1.5mW (1.5dBm), although that should really be enough. The other fabric (where the lasers are apparently OK) still produces one or two CRCs infrequently. Using sfpshow, the SFP producing the actual RX errors shows:

        Status/Ctrl: 0x82
        Alarm flags[0,1] = 0x5, 0x40
        Warn Flags[0,1] = 0x5, 0x40

    Now I'll have to find out what that means. Not sure if it was there before. Well, I'll first clear my head with a week of vacation. 8-)
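
    For reference, the dBm/microwatt arithmetic used above can be checked in a few lines of Python (a quick sketch using the numbers quoted in the post; nothing here is specific to the switches or SFPs):

        import math

        def dbm_to_uw(dbm):
            # P[mW] = 10 ** (P[dBm] / 10); times 1000 for microwatts
            return 10 ** (dbm / 10.0) * 1000.0

        def uw_to_dbm(uw):
            return 10.0 * math.log10(uw / 1000.0)

        # An RX sensitivity floor of -18 dBm is roughly 16 uW:
        print(round(dbm_to_uw(-18), 1))                        # ~15.8

        # Link loss on FAB2, SW2 -> SW4 (1422 uW sent, 104.6 uW received):
        print(round(uw_to_dbm(1422) - uw_to_dbm(104.6), 1))    # ~11.3 dB

        # ...and the reverse direction (1468.4 uW sent, 54.3 uW received):
        print(round(uw_to_dbm(1468.4) - uw_to_dbm(54.3), 1))   # ~14.3 dB

    At 11-14 dB of measured loss against a -18 dBm sensitivity floor, the links look within budget on paper, which matches the "it should be fine" observation above.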

    Read the article

  • error: ‘struct mcontext_t’ has no member named ‘eip’

    - by user353573
    The original code used struct sigcontext *sc; after changing it to mcontext_t, the error below occurs. How can I fix it?

        error: ‘struct mcontext_t’ has no member named ‘eip’

        #include <stdio.h>
        #include <signal.h>
        #include <asm/ucontext.h>

        static unsigned long target;

        void handler(int signum, siginfo_t *siginfo, void *uc0)
        {
            struct ucontext *uc;
            mcontext_t *sc;

            uc = (struct ucontext *)uc0;
            sc = &uc->uc_mcontext;
            sc->eip = target;
        }

    Read the article

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is:

        $ svn commit -m "My commit message"
        Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)

    Local subversion is 1.6.9 on Linux, KDE 4.3, and svn status shows:

        ML  .
        L   ws
        M   ws/manage.py
        L   ws/locales
        L   ws/locales/ja_JP
        L   ws/locales/ja_JP/LC_MESSAGES

    The process isn't using much in the way of resources. The server is Linux, served by Apache and mod_dav_svn, with the same Subversion 1.6.9. I can't see any process that is handling the commit.

    Read the article

  • Django: How to create a model dynamically just for testing

    - by muhuk
    I have a Django app that requires a settings attribute in the form of:

        RELATED_MODELS = ('appname1.modelname1.attribute1',
                          'appname1.modelname2.attribute2',
                          'appname2.modelname3.attribute3', ...)

    It then hooks their post_save signals to update some other fixed model, depending on the attributeN defined. I would like to test this behaviour, and the tests should work even if this app is the only one in the project (except for its own dependencies; no other wrapper app should need to be installed). How can I create and attach/register/activate mock models just for the test database? (Or is it possible at all?) Solutions that allow me to use test fixtures would be great.
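
    One approach people commonly take (a rough sketch, not a drop-in solution: MockModel, attribute1 and the app label below are invented, and connection.schema_editor() assumes Django 1.7 or later) is to build the model class at runtime with type() and create its table only inside the test case:

        # Sketch only: names are illustrative, schema_editor() requires a modern Django.
        from django.db import connection, models
        from django.test import TransactionTestCase

        def make_mock_model(name, app_label="myapp"):
            """Build a Django model class at runtime with type()."""
            attrs = {
                "__module__": __name__,
                "Meta": type("Meta", (), {"app_label": app_label}),
                "attribute1": models.IntegerField(default=0),
            }
            return type(name, (models.Model,), attrs)

        class RelatedModelsTest(TransactionTestCase):
            def setUp(self):
                self.MockModel = make_mock_model("MockModel")
                with connection.schema_editor() as editor:   # table exists only in the test DB
                    editor.create_model(self.MockModel)

            def tearDown(self):
                with connection.schema_editor() as editor:
                    editor.delete_model(self.MockModel)

            def test_post_save_hook_fires(self):
                # Saving an instance should trigger whatever post_save receiver the app registered.
                self.MockModel.objects.create(attribute1=42)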

    Read the article

  • Using Unix Process Control Methods in Ruby

    - by John F. Miller
    Ryan Tomayko touched off quite a firestorm with this post about using Unix process control commands: We should be doing more of this. A lot more of this. I'm talking about fork(2), execve(2), pipe(2), socketpair(2), select(2), kill(2), sigaction(2), and so on and so forth. These are our friends. They want so badly just to help us. I have a bit of code (a delayed_job clone for DataMapper) that I think would fit right in with this, but I'm not clear on how to take advantage of the listed commands. Any ideas on how to improve this code?

        def start
          say "*** Starting job worker #{@name}"
          t = Thread.new do
            loop do
              delay = Update.work_off(self)
              break if $exit
              sleep delay
              break if $exit
            end
            clear_locks
          end
          trap('TERM') { terminate_with t }
          trap('INT')  { terminate_with t }
          trap('USR1') do
            say "Wakeup Signal Caught"
            t.run
          end
        end
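
    For readers who have not used these primitives, here is a tiny stand-alone illustration of the same shape of worker (in Python rather than Ruby, just to keep the example short; it is Unix-only and the names are placeholders, not part of the delayed_job clone above):

        import os
        import signal
        import sys
        import time

        def worker_loop():
            # Placeholder for the real job loop (Update.work_off in the post).
            state = {"exit": False}

            def on_term(signum, frame):
                state["exit"] = True           # roughly what trap('TERM') does above

            def on_usr1(signum, frame):
                print("wakeup signal caught")  # roughly trap('USR1')

            signal.signal(signal.SIGTERM, on_term)
            signal.signal(signal.SIGINT, on_term)
            signal.signal(signal.SIGUSR1, on_usr1)

            while not state["exit"]:
                time.sleep(1)                  # stand-in for work_off / sleep delay
            sys.exit(0)

        if __name__ == "__main__":
            pid = os.fork()                    # fork(2): the child becomes the worker
            if pid == 0:
                worker_loop()
            else:
                print("started worker", pid)
                time.sleep(2)
                os.kill(pid, signal.SIGUSR1)   # kill(2): poke the worker awake
                time.sleep(2)
                os.kill(pid, signal.SIGTERM)   # ask it to shut down cleanly
                os.waitpid(pid, 0)             # reap the child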

    Read the article

  • compile Boost as static Universal binary lib

    - by Albert
    I want to have a static Universal binary lib of Boost. (Preferably the latest stable version, that is 1.43.0, or newer.) I found many Google hits with similar problems and possible solutions. However, most of them seem outdated. Also, none of them really worked. Right now, I am trying:

        sudo ./bjam --toolset=darwin --link=static --threading=multi \
            --architecture=combined --address-model=32_64 \
            --macosx-version=10.4 --macosx-version-min=10.4 \
            install

    That compiles and installs fine. However, the produced binaries seem broken:

        az@ip245 47 (openlierox) %file /usr/local/lib/libboost_signals.a
        /usr/local/lib/libboost_signals.a: current ar archive random library
        az@ip245 49 (openlierox) %lipo -info /usr/local/lib/libboost_signals.a
        input file /usr/local/lib/libboost_signals.a is not a fat file
        Non-fat file: /usr/local/lib/libboost_signals.a is architecture: x86_64
        az@ip245 48 (openlierox) %otool -hv /usr/local/lib/libboost_signals.a
        Archive : /usr/local/lib/libboost_signals.a
        /usr/local/lib/libboost_signals.a(trackable.o):
        Mach header
              magic  cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC_64   X86_64         ALL  0x00    OBJECT      3        1536  SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(connection.o):
        Mach header
              magic  cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC_64   X86_64         ALL  0x00    OBJECT      3        1776  SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(named_slot_map.o):
        Mach header
              magic  cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC_64   X86_64         ALL  0x00    OBJECT      3        1856  SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(signal_base.o):
        Mach header
              magic  cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC_64   X86_64         ALL  0x00    OBJECT      3        1776  SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(slot.o):
        Mach header
              magic  cputype  cpusubtype  caps  filetype  ncmds  sizeofcmds  flags
        MH_MAGIC_64   X86_64         ALL  0x00    OBJECT      3        1616  SUBSECTIONS_VIA_SYMBOLS

    Any suggestions on how to get this right?

    Read the article

  • What are the prerequisites for learning embedded systems programming?

    - by WarDoGG
    I have finished my degree in computer engineering. We had some basic electronics courses in digital signal processing, information theory, etc., but my primary field is programming. I am looking to get into embedded systems programming with no knowledge of how it is done, but I am very keen on going into this field. My questions: What languages are used to program embedded systems? Will I be able to learn without any basics in electronics? Are there any other prerequisites I should know about?

    Read the article

  • using performSelector

    - by zebra
    Hi all, I have a method that implements a reverse geocoder:

        - (void)reversing {
            geoCoder = [[MKReverseGeocoder alloc] initWithCoordinate:locManager.location.coordinate];
            geoCoder.delegate = self;
            [geoCoder start];
        }

    I call reversing from another method with this:

        [self performSelector:@selector(reversing) withObject:nil afterDelay:10];

    and I receive:

        2010-04-30 17:44:17.616 high[1167:207] Retrive City Milano
        2010-04-30 17:44:17.628 high[1167:207] geocoder released
        2010-04-30 17:44:18.723 high[1167:207] Error Domain=MKErrorDomain Code=4 "Operation could not be completed. (MKErrorDomain error 4.)"
        Program received signal: “EXC_BAD_ACCESS”.

    Can someone help me? :D

    Read the article

  • Variable Assignment and loops (Java)

    - by Raven Dreamer
    Greetings Stack Overflowers, A while back, I was working on a program that hashed values into a hashtable (I don't remember the specifics, and the specifics themselves are irrelevant to the question at hand). Anyway, I had the following code as part of a "recordInput" method.

        tempElement = new hashElement(someInt);
        while (in.hasNext() == true) {
            int firstVal = in.nextInt();
            if (firstVal == -911) {
                break;
            }
            tempElement.setKeyValue(firstVal, 0);
            for (int i = 1; i < numKeyValues; i++) {
                tempElement.setKeyValue(in.nextInt(), i);
            }
            elementArray[placeValue] = tempElement;
            placeValue++;
        } // close while loop
        } // close method

    This part of the code was giving me a very nasty bug -- no matter how I finagled it, no matter what input I gave the program, it would always produce an array full of only a single value -- the last one. The problem, as I later determined, was that because I had not created the tempElement variable within the loop, every slot of elementArray[] was being assigned a reference to the same tempElement object -- so when the loop terminated, every slot in the array was filled with the last value tempElement had taken. I was able to fix this bug by moving the declaration of tempElement inside the while loop. My question to you, Stack Overflow, is whether there is another (read: better) way to avoid this bug while keeping the variable declaration of tempElement outside the while loop. (Suggestions for a better title and tags are also appreciated.)
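
    The underlying aliasing issue is language-agnostic; here is a tiny Python sketch (invented names, unrelated to the hash table above) showing the same effect and the same fix of creating a fresh object per iteration:

        # Reusing one mutable object: every slot ends up pointing at the same thing.
        element = {"key": None}
        broken = []
        for value in [1, 2, 3]:
            element["key"] = value
            broken.append(element)          # appends a reference, not a copy
        print(broken)                       # [{'key': 3}, {'key': 3}, {'key': 3}]

        # Creating a fresh object per iteration (the fix the poster applied):
        fixed = []
        for value in [1, 2, 3]:
            element = {"key": value}        # new object each time through the loop
            fixed.append(element)
        print(fixed)                        # [{'key': 1}, {'key': 2}, {'key': 3}]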

    Read the article

  • Malloc to a CGPoint Pointer throwing EXC_BAD_ACCESS when accessing

    - by kdbdallas
    I am trying to use a snippet of code from an Apple programming guide, and I am getting an EXC_BAD_ACCESS when trying to pass a pointer to a function, right after doing a malloc. (For reference: iPhone Application Programming Guide: Event Handling - Listing 3-6.) The code in question is really simple:

        CFMutableDictionaryRef touchBeginPoints;
        UITouch *touch;
        ....
        CGPoint *point = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
        if (point == NULL) {
            point = (CGPoint *)malloc(sizeof(CGPoint));
            CFDictionarySetValue(touchBeginPoints, touch, point);
        }

    Now when the program goes into the if statement, it assigns the 'output' of malloc to the point variable/pointer. Then, when it tries to pass point into the CFDictionarySetValue function, it crashes the application with: Program received signal: “EXC_BAD_ACCESS”. Someone suggested not doing the malloc and passing the pointer as &point; however, that still gave me an EXC_BAD_ACCESS. What am I (and, it looks like, Apple) doing wrong??? Thanks in advance.

    Read the article

  • How to associate Wi-Fi beacon info with a virtual "location"?

    - by leander
    We have a piece of embedded hardware that will sense 802.11 beacons, and we're using this to make a map of currently visible bssid -> signalStrength. Given this map, we would like to make a determination: Is this likely to be a location I have been to before? If so, what is its ID? If not, I should remember this location: generate a new ID. Now what should I store (and how should I store it) to make future determinations easier?

    This is for an augmented-reality app/game. We will be using it to associate particular characters and events with "locations". The device does not have internet or cellular access, so using a geolocation service is out of consideration for the time being. (We don't really need to know where we are in reality, just be able to determine if we return there.)

    It isn't crucial that it be extremely accurate, but it would be nice if it were tolerant of signal strength changes or the occasional missing beacon. It should be usable with relatively low numbers of access points (e.g. a rural house with one wireless router) or many (wandering around a dense metropolis). In the case of a city, it should change location every few minutes of walking (continuously overlapping signals make this a bit trickier in naive code).

    A reasonable number of false positives (matching a location when we aren't actually there) is acceptable. The wrong character/event showing up just adds a bit of variety. False negatives (no location match) are a bit more troublesome: these will tend to add a better-matching new location to the saved locations, masking the old one. While we will have additional logic to ensure that locations the device hasn't seen in a while will "orphan" any associated characters or events (if e.g. you move to a different country), we'd prefer not to mask and eventually orphan locations you do visit regularly.

    Some technical complications:

    signalStrength is returned as 1-4; presumably it's related to dB, but we are not sure exactly how; in my experiments it tends to stick to either 1 or 4, but occasionally we see numbers in between. (Tech docs on the hardware are sparse.)

    The device completes a scan of one-quarter of the channel space every second, so it takes about 4-5 seconds to get a complete picture of what's around. The list isn't always complete. (We are making strides to fix this using some slight sampling-period randomization, as recommended by the library docs. We're also investigating ways to increase the number of scans without killing our performance; the hardware/libs are poorly behaved when it comes to saturating the bus.)

    We have only kilobytes to store our history.

    We have a "working" impl now, but it is relatively naive, and flaky in the face of real-world Wi-Fi behavior. Rough pseudocode:

        // recordLocation() -- only store strength 4 locations
        m_savedLocations[g_nextId++] = filterForStrengthGE( m_currentAPs, 4 );

        // determineLocation()
        bestPoints = -inf;
        foreach ( oldLoc in m_savedLocations ) {
            points = 0.0;
            foreach ( ap in m_currentAPs ) {
                if ( oldLoc.has( ap ) ) {
                    switch ( ap.signalStrength ) {
                        case 3: points += 1.0; break;
                        case 4: points += 2.0; break;
                    }
                }
            }
            points /= oldLoc.numAPs;
            if ( points > bestPoints ) {
                bestLoc = oldLoc;
                bestPoints = points;
            }
        }
        if ( bestLoc && bestPoints > 1.0 ) {
            if ( bestPoints >= (2.0 - epsilon) ) {
                // near-perfect match.
                // update location with any new high-strength APs that have appeared
                bestLoc.addAPs( filterForStrengthGE( m_currentAPs, 4 ) );
            }
            return bestLoc;
        } else {
            return NO_MATCH;
        }

    We record a location currently only when we have NO_MATCH and the app determines it's time for a new event. (The "near-perfect match" code above would appear to make it harder to match in the future... It's mostly to keep new powerful APs from being associated with other locations, but you'd think we'd need something to counter this if e.g. an AP doesn't show up the next 10 times I match a location.)

    I have a feeling that we're missing some things from set theory or graph theory that would assist in grouping/classification of this data, and perhaps provide a better "confidence level" on matches, and better robustness against missed beacons, signal strength changes, and the like. Also it would be useful to have a good method for mutating locations over time. Any useful resources out there for this sort of thing? Simple and/or robust approaches we're missing?
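
    For comparison, the matching step above can be phrased as a weighted set-overlap score; a rough Python sketch follows (the weights, threshold and names are illustrative, not taken from the actual device code):

        def score(saved_aps, current_aps):
            """Weighted overlap between a saved location and the current scan.

            saved_aps:   set of BSSIDs stored for the location
            current_aps: dict bssid -> signal strength (1-4)
            """
            if not saved_aps:
                return 0.0
            weights = {3: 1.0, 4: 2.0}            # same weighting as the pseudocode
            points = sum(weights.get(strength, 0.0)
                         for bssid, strength in current_aps.items()
                         if bssid in saved_aps)
            return points / len(saved_aps)

        def determine_location(saved_locations, current_aps, threshold=1.0):
            """Return (location_id, score) of the best match, or (None, 0.0)."""
            best_id, best_score = None, 0.0
            for loc_id, saved_aps in saved_locations.items():
                s = score(saved_aps, current_aps)
                if s > best_score:
                    best_id, best_score = loc_id, s
            if best_score > threshold:
                return best_id, best_score
            return None, 0.0

        # Example with two saved locations and one scan:
        saved = {1: {"aa", "bb", "cc"}, 2: {"dd", "ee"}}
        scan = {"aa": 4, "bb": 4, "zz": 2}
        print(determine_location(saved, scan))    # (1, ~1.33)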

    Read the article

  • Adding a row to a BindingSource gives a different autoincrement value from what's saved to the DB

    - by Ruben Trancoso
    I have a DataGridView that shows a list of records, and when I hit an insert button, a form should add a new record, edit its values and save it. I have a BindingSource bound to the DataGridView. I pass it as a parameter to a NEW RECORD form, so:

        // When the form opens it adds a new row, and the DataGridView displays this new record at this time
        DataRowView currentRow;
        currentRow = (DataRowView) myBindindSource.AddNew();

    When the user confirms the save, I do:

        myBindindSource.EndEdit(); // inside the form

    and after the form is disposed the new row is saved and the BindingSource position is updated to the new row:

        DataRowView drv = myForm.CurrentRow;
        avaliadoTableAdapter.Update(drv.Row);
        avaliadoBindingSource.Position = avaliadoBindingSource.Find("ID", drv.Row.ItemArray[0]);

    The problem is that this table has an AUTOINCREMENT field, and the value saved may not correspond to the value the BindingSource gives at EDIT TIME. So when I close and open the DataGridView again, the new row gets its ID based on the available slot in the underlying DB at the moment it was saved, and it just ignores the value the BindingSource generated at EDIT TIME. Since the value given by the BindingSource is supposed to be used by another table as a foreign key, this makes the reference inconsistent. Is there a way to get the real ID that was saved to the database?

    Read the article

  • Determining the magnitude of a certain frequency on the iPhone

    - by eagle
    I'm wondering what's the easiest/best way to determine the magnitude of a given frequency in a sound. It's my understanding that an FFT function will return the magnitudes of all frequencies in a signal. I'm wondering if there is any shortcut I could use if I'm only concerned about a specific frequency. I'll be using the iPhone mic to record the audio. My guess is that I'll be using Audio Queue Services for recording, since I don't need to record the audio to a file. I'm using SDK 4.0, so I can use any of the functions defined in the Accelerate framework (e.g. FFT functions) if needed. Update: I updated the question to be clearer, as per Conrad's suggestion.
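
    One frequently used shortcut for a single frequency is the Goertzel algorithm, which avoids computing a full FFT; here is a rough pure-Python sketch (the parameter names are made up, and on the device this would presumably be done in C or with Accelerate):

        import math

        def goertzel_magnitude(samples, target_freq, sample_rate):
            """Magnitude of one frequency bin via the Goertzel recurrence."""
            n = len(samples)
            k = int(0.5 + n * target_freq / sample_rate)   # nearest DFT bin
            w = 2.0 * math.pi * k / n
            coeff = 2.0 * math.cos(w)
            s_prev, s_prev2 = 0.0, 0.0
            for x in samples:
                s = x + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
            return math.sqrt(max(power, 0.0))

        # Example: a 440 Hz tone sampled at 44100 Hz shows a large magnitude at 440 Hz.
        rate = 44100
        tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(4096)]
        print(goertzel_magnitude(tone, 440, rate))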

    Read the article

  • Why is my iPad's wireless so flaky?

    - by Mark
    I'm the proud owner of a new iPad here in the UK. All is good, except for the Wi-Fi, which is a bit flaky. It connects fine to my Draytek router, which is set for WPA/WPA2 and 56g only, displaying full signal strength. Then, after a few minutes, it goes down to minimum strength... and sometimes it goes back up again. A few times it seems to lose the connection completely and needs to be turned off and on again. I've looked at the Apple support site and have tried their recommendations (which are not really very relevant), but still nothing. I've tried setting the router to WPA2 only, and setting a long preamble. Right now, I guess I want to know if it's a hardware problem with my device, which should then be returned, or if it's a problem with all iPads that will be resolved. I guess I could take it back to the Mac genius bar, but I find those guys so incredibly pretentious and, frankly, rather useless, that I'd rather wait until I've exercised other options!

    Read the article

  • Multiple Context menus in PyQt based on mouse location

    - by Nader
    I have a window with multiple tables using QTableWidget (PyQt). I created a popup menu triggered by a right mouse click and it works fine. However, I need to create a different popup menu based on which table the mouse is hovering over at the time the right button is clicked. How can I get the mouse to tell me which table it is hovering over? Or, put another way, how do I implement a method so as to have a specific context menu based on mouse location? I am using Python and PyQt. My popup menu is developed similarly to this code (PedroMorgan's answer to "Qt and context menu"):

        class Foo( QtGui.QWidget ):
            def __init__(self):
                QtGui.QWidget.__init__(self, None)

                # Toolbar
                toolbar = QtGui.QToolBar()

                # Actions
                self.actionAdd = toolbar.addAction("New", self.on_action_add)
                self.actionEdit = toolbar.addAction("Edit", self.on_action_edit)
                self.actionDelete = toolbar.addAction("Delete", self.on_action_delete)

                # Tree
                self.tree = QtGui.QTreeView()
                self.tree.setContextMenuPolicy( Qt.CustomContextMenu )
                self.connect(self.tree, QtCore.SIGNAL('customContextMenuRequested(const QPoint&)'), self.on_context_menu)

                # Popup Menu
                self.popMenu = QtGui.QMenu( self )
                self.popMenu.addAction( self.actionEdit )
                self.popMenu.addAction( self.actionDelete )
                self.popMenu.addSeparator()
                self.popMenu.addAction( self.actionAdd )

            def on_context_menu(self, point):
                self.popMenu.exec_( self.tree.mapToGlobal(point) )
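
    One illustrative way to learn which table asked for the menu is to give every table the same handler and inspect self.sender(); the sketch below uses made-up table names and PyQt4-style imports, so treat it as an outline rather than a definitive answer:

        # Sketch: each QTableWidget shares one handler; self.sender() identifies the
        # table that emitted customContextMenuRequested, so each gets its own menu.
        from PyQt4 import QtCore, QtGui

        class TablesWindow(QtGui.QWidget):
            def __init__(self):
                QtGui.QWidget.__init__(self)
                layout = QtGui.QVBoxLayout(self)
                self.table_a = QtGui.QTableWidget(3, 2)
                self.table_b = QtGui.QTableWidget(3, 2)
                self.menus = {}
                for table, label in ((self.table_a, "Table A"), (self.table_b, "Table B")):
                    layout.addWidget(table)
                    table.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
                    table.customContextMenuRequested.connect(self.on_context_menu)
                    menu = QtGui.QMenu(self)
                    menu.addAction("Action for %s" % label)
                    self.menus[table] = menu

            def on_context_menu(self, point):
                table = self.sender()                       # the table under the mouse
                self.menus[table].exec_(table.mapToGlobal(point))

        if __name__ == "__main__":
            app = QtGui.QApplication([])
            w = TablesWindow()
            w.show()
            app.exec_()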

    Read the article

  • How can I set QNetworkReply properties to get correct NCBI pages?

    - by Claire Huang
    I am trying to get the following URL using the downloadURL function below: http://www.ncbi.nlm.nih.gov/nuccore/27884304 But the data is not what we can see through the browser. Now I know it's because I need to give the correct information, such as the browser identification; how can I know what kind of information I need to set, and how can I set it? (With the setHeader function?) In VC++ we can use the CInternetSession and CHttpConnection objects to get the correct page without setting any other detailed information; is there a similar way in Qt or another cross-platform C++ network lib? (Yes, I need it to be cross-platform.)

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            request.setHeader(QNetworkRequest::ContentTypeHeader,
                              "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)");
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            int direction;
            QVariant statusCodeV = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
            QUrl redirectTo = statusCodeV.toUrl();
            if (!redirectTo.isEmpty()) {
                if (redirectTo.host().isEmpty()) {
                    const QByteArray newaddr = ("http://" + url.host() + redirectTo.encodedPath()).toAscii();
                    redirectTo.setEncodedUrl(newaddr);
                    redirectTo.setHost(url.host());
                }
                return (downloadURL(redirectTo, data));
            }
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }
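
    The kind of information being referred to is usually carried in request headers such as User-Agent; purely to illustrate the idea (in Python rather than Qt, and with no guarantee that a User-Agent header alone is what this particular page needs), sending such a header looks like this:

        # Illustration of the concept only (Python, not Qt): fetch a page while sending
        # a browser-like User-Agent header so the server returns the "browser" version.
        import urllib.request

        url = "http://www.ncbi.nlm.nih.gov/nuccore/27884304"
        req = urllib.request.Request(url, headers={
            "User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) "
                          "Gecko/20091221 Firefox/3.5.7",
        })
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        print(len(data), "bytes")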

    Read the article

  • GCC destructor behaviour

    - by joveha
    I've noticed a difference in behaviour for GCC's destructor attribute when compiled under Linux and cross-compiled with MinGW. On Linux, the destructor will not get called unless the program terminates normally by itself (returns from main). I guess that kind of makes sense if you take signal handlers into account. On Win32, however, the destructor is called if the program is terminated by, say, a Ctrl-C, but not when killed from the Task Manager. Why is this? And what would you suggest to make the destructor get called no matter how the process terminates - on Win32 in particular? Example code:

        #include <stdio.h>

        int main(int argc, char **argv)
        {
            printf("main\n");
            while(1) {}
            return 0;
        }

        __attribute__((destructor)) static void mydestructor(void)
        {
            printf("destructor\n");
        }

    Read the article

  • Can we represent bit fields in JSON/BSON?

    - by zubair
    We have a dozen simulators talking to each other over UDP. The interface definition is managed in a database. The simulators are written in different languages: mostly C++, some in Java and C#. Currently, when a systems engineer makes changes in the interface definition database, simulator developers manually update the communication data structures in their code. The data is mostly 2-5 bytes, with bit fields for each signal. What I want to do is generate one file from the interface definition database describing the byte and bit field definitions, and let each developer add it to his simulator code with minimal fuss. I looked at JSON/BSON but couldn't find a way to represent bit fields in them. Thanks, Zubair
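
    JSON has no native bit-field type, so one common convention (sketched below with invented field names and offsets, not the poster's real interface) is to describe each field's bit offset and width in JSON and let every simulator generate or drive its pack/unpack code from that description:

        import json

        # Hypothetical interface description: names, positions and widths are made up.
        interface_json = """
        {
          "message": "status_word",
          "size_bytes": 2,
          "fields": [
            {"name": "power_on",   "bit_offset": 0, "bit_width": 1},
            {"name": "mode",       "bit_offset": 1, "bit_width": 3},
            {"name": "error_code", "bit_offset": 4, "bit_width": 5}
          ]
        }
        """

        def unpack(desc, raw):
            """Decode little-endian raw bytes into named bit fields."""
            value = int.from_bytes(raw, "little")
            out = {}
            for f in desc["fields"]:
                mask = (1 << f["bit_width"]) - 1
                out[f["name"]] = (value >> f["bit_offset"]) & mask
            return out

        desc = json.loads(interface_json)
        print(unpack(desc, b"\x35\x00"))   # {'power_on': 1, 'mode': 2, 'error_code': 3}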

    Read the article

  • Graphics glitch when drawing to a Cairo context obtained from a gtk.DrawingArea inside a gtk.Viewport.

    - by user410023
    I am trying to redraw the part of the DrawingArea that is visible in the Viewport in the expose-event handler. However, it seems that I am doing something wrong with the coordinates that are passed to the event handler because there is garbage at the edge of the Viewport when scrolling. Can anyone tell what I am doing wrong? Here is a small example:

        import pygtk
        pygtk.require("2.0")
        import gtk
        from numpy import array
        from math import pi

        class Circle(object):
            def __init__(self, position = [0., 0.], radius = 0., edge = (0., 0., 0.), fill = None):
                self.position = position
                self.radius = radius
                self.edge = edge
                self.fill = fill

            def draw(self, ctx):
                rect = array(ctx.clip_extents())
                rect[2] -= rect[0]
                rect[3] -= rect[1]
                center = rect[2:4] / 2
                ctx.arc(center[0], center[1], self.radius, 0., 2. * pi)
                if self.fill != None:
                    ctx.set_source_rgb(*self.fill)
                    ctx.fill_preserve()
                ctx.set_source_rgb(*self.edge)
                ctx.stroke()

        class Scene(object):
            class Proxy(object):
                directory = {}
                def __init__(self, target, layers = set()):
                    self.target = target
                    self.layers = layers
                    Scene.Proxy.directory[target] = self

            def __init__(self, viewport):
                self.objects = {}
                self.layers = [set()]
                self.viewport = viewport
                self.signals = {}

            def draw(self, ctx):
                x = self.viewport.get_hadjustment().value
                y = self.viewport.get_vadjustment().value
                ctx.set_source_rgb(1., 1., 1.)
                ctx.paint()
                ctx.translate(x, y)
                for obj in self:
                    obj.draw(ctx)

            def add(self, item, layer = 0):
                item = Scene.Proxy(item, layers = set((layer,)))
                assert(hasattr(item.target, "draw"))
                assert(isinstance(layer, int))
                item.layers.add(layer)
                while not layer < len(self.layers):
                    self.layers.append(set())
                self.layers[layer].add(item)
                if not item in self.objects:
                    self.objects[item] = set()
                self.objects[item].add(layer)

            def remove(self, item, layers = None):
                item = Scene.Proxy.directory[item]
                if layers == None:
                    layers = self.objects[item]
                for layer in layers:
                    layer.remove(item)
                    item.layers.remove(layer)
                if len(item.layers) == 0:
                    self.objects.remove(item)

            def __iter__(self):
                for layer in self.layers:
                    for item in layer:
                        yield item.target

        class App(object):
            def __init__(self):
                signals = { "canvas_exposed": self.update_canvas,
                            "gtk_main_quit": gtk.main_quit }
                self.builder = gtk.Builder()
                self.builder.add_from_file("graphics_glitch.glade")
                self.window = self.builder.get_object("window")
                self.viewport = self.builder.get_object("viewport")
                self.canvas = self.builder.get_object("canvas")
                self.scene = Scene(self.viewport)
                signals.update(self.scene.signals)
                self.builder.connect_signals(signals)
                self.window.show()

            def update_canvas(self, widget, event):
                ctx = self.canvas.window.cairo_create()
                self.scene.draw(ctx)
                ctx.clip()

        if __name__ == "__main__":
            app = App()
            scene = app.scene
            scene.add(Circle((0., 0.), 10.))
            gtk.main()

    And the Glade file "graphics_glitch.glade":

    <?xml version="1.0"?> <interface> <requires lib="gtk+" version="2.16"/> <!-- interface-naming-policy project-wide --> <object class="GtkWindow" id="window"> <property name="width_request">200</property> <property name="height_request">200</property> <property name="visible">True</property> <signal name="destroy" handler="gtk_main_quit"/> <child> <object class="GtkScrolledWindow" id="scrolledwindow1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="hadjustment">h_adjust</property> <property name="vadjustment">v_adjust</property> <property name="hscrollbar_policy">automatic</property> <property name="vscrollbar_policy">automatic</property> <child> <object class="GtkViewport" id="viewport">
<property name="visible">True</property> <property name="resize_mode">queue</property> <child> <object class="GtkDrawingArea" id="canvas"> <property name="width_request">640</property> <property name="height_request">480</property> <property name="visible">True</property> <signal name="expose_event" handler="canvas_exposed"/> </object> </child> </object> </child> </object> </child> </object> <object class="GtkAdjustment" id="h_adjust"> <property name="lower">-1000</property> <property name="upper">1000</property> <property name="step_increment">1</property> <property name="page_increment">25</property> <property name="page_size">25</property> </object> <object class="GtkAdjustment" id="v_adjust"> <property name="lower">-1000</property> <property name="upper">1000</property> <property name="step_increment">1</property> <property name="page_increment">25</property> <property name="page_size">25</property> </object> </interface> Thanks! --Dan

    Read the article

  • What is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of Lanczos and other more sophisticated methods for even higher-quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing in the past, but not in relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling, but it didn't have nearly enough detail for me.) Thanks a lot!
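
    For reference, the Lanczos kernel itself is compact enough to write out; below is a rough 1-D Python sketch of the windowed-sinc kernel used as interpolation weights (the sample data is invented, and real resamplers apply this separably in x and y with more careful edge handling and normalization):

        import math

        def lanczos_kernel(x, a=3):
            """Windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0."""
            if x == 0.0:
                return 1.0
            if abs(x) >= a:
                return 0.0
            px = math.pi * x
            return a * math.sin(px) * math.sin(px / a) / (px * px)

        def resample_1d(samples, t, a=3):
            """Interpolate a 1-D signal at fractional position t using Lanczos weights."""
            lo = int(math.floor(t)) - a + 1
            hi = int(math.floor(t)) + a
            total, weight_sum = 0.0, 0.0
            for i in range(lo, hi + 1):
                if 0 <= i < len(samples):
                    w = lanczos_kernel(t - i, a)
                    total += w * samples[i]
                    weight_sum += w
            return total / weight_sum if weight_sum else 0.0

        data = [0, 0, 1, 4, 9, 16, 25, 36]    # made-up samples
        print(resample_1d(data, 3.5))          # value interpolated between 4 and 9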

    Read the article

  • iPhone app crashed: Assertion failed function evict_glyph_entry_from_strike, file Fonts/CGFontCache.

    - by Ross
    This happened quite randomly. I didn't delete any table view cell. The backtrace information:

        Assertion failed: (d->entry[identifier.glyph] == g), function evict_glyph_entry_from_strike, file Fonts/CGFontCache.c, line 810.
        Program received signal: “SIGABRT”.
        (gdb) bt
        #0  0x97da5972 in __kill ()
        #1  0x97da5964 in kill$UNIX2003 ()
        #2  0x97e38ba5 in raise ()
        #3  0x97e4ec5c in abort ()
        #4  0x97e3b804 in __assert_rtn ()
        #5  0x0037fe0e in evict_glyph_entry_from_cache ()
        #6  0x003226aa in expire_glyphs_nl ()
        #7  0x00322645 in CGFontCacheUnlock ()
        #8  0x00321fef in CGGlyphLockUnlock ()
        #9  0x0240f9b7 in ripc_DrawGlyphs ()
        #10 0x0031b0d4 in draw_glyphs ()
        #11 0x0031a91f in CGContextShowGlyphsWithAdvances ()
        #12 0x35814178 in WebCore::Font::drawGlyphs ()
        #13 0x35813da5 in WebCore::Font::drawGlyphBuffer ()
        #14 0x35813aca in WebCore::Font::drawSimpleText ()
        #15 0x35813760 in drawAtPoint ()
        #16 0x3581307e in -[NSString(WebStringDrawing) _web_drawAtPoint:forWidth:withFont:ellipsis:letterSpacing:includeEmoji:] ()
        #17 0x3090d2e9 in -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:letterSpacing:includeEmoji:] ()
        #18 0x3090cfe3 in -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:] ()
        #19 0x3093d853 in -[UINavigationItemView drawText:inRect:] ()
        #20 0x3093a96b in -[UINavigationItemButtonView drawRect:] ()
        #21 0x3091ff61 in -[UIView(CALayerDelegate) drawLayer:inContext:] ()
        #22 0x0060daeb in -[CALayer drawInContext:] ()
        #23 0x0060d8f9 in backing_callback ()
        #24 0x0060d1b4 in CABackingStoreUpdate ()
        #25 0x0060c3cc in -[CALayer _display] ()
        #26 0x0060bf56 in CALayerDisplayIfNeeded ()
        #27 0x0060b3bd in CA::Context::commit_transaction ()
        #28 0x0060b022 in CA::Transaction::commit ()
        #29 0x006132e0 in CA::Transaction::observer_callback ()
        #30 0x30245c32 in __CFRunLoopDoObservers ()
        #31 0x3024503f in CFRunLoopRunSpecific ()
        #32 0x30244628 in CFRunLoopRunInMode ()
        #33 0x32044c31 in GSEventRunModal ()
        #34 0x32044cf6 in GSEventRun ()
        #35 0x309021ee in UIApplicationMain ()

    Read the article

  • iPad crashes that aren't happening on iPhone or iPod Touch

    - by alyoshak
    Has anyone had difficulty getting what has otherwise been a solid iPhone app working on the iPad? I was under the impression that iPhone apps would run without problems on the iPad. We are experiencing crashes (not intermittent - same place, at the same time) that we've never gotten on the iPhone or iPod Touch. I have become suspicious that the crashes are memory-management related, but even if so, why only on the iPad?

        2010-05-17 10:19:06.474 ASSIST[82:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UISectionRowData 0x6041480> valueForUndefinedKey:]: this class is not key value coding-compliant for the key deliveryDate.'
        2010-05-17 10:19:06.481 ASSIST[82:207] Stack: ( 852041337, 861292157, 852040861, 850755255, 850750995, 850758945, 81279, 123007, 126693, 149141, 851599725, 827486573, 827486477, 827486431, 827485745, 827487359, 827454123, 851903137, 851590065, 851588321, 819339483, 819339655, 827151561, 827144691, 9461, 9324 )
        terminate called after throwing an instance of 'NSException'
        Program received signal: “SIGABRT”.

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, either through 3G or through satellite at the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that the 3G and satellite communications are very expensive, we want to minimize the amount of data to send. But we also want to protect ourselves from lost data. What would be the best strategy to ensure, with reasonable certainty, that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure whether we need to transmit all the chunks, or whether we need to send a hash code along with each chunk.
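
    To make the trade-off concrete, here is a rough Python sketch of just the "hash per chunk" idea (chunk size and framing are invented; this detects corruption but, unlike an erasure code such as zfec, cannot by itself reconstruct a chunk that never arrives):

        import hashlib

        CHUNK_SIZE = 512          # invented; tune to the link's economics

        def frame_chunks(payload):
            """Split payload into chunks, each paired with a short digest of itself."""
            frames = []
            for i in range(0, len(payload), CHUNK_SIZE):
                chunk = payload[i:i + CHUNK_SIZE]
                digest = hashlib.sha256(chunk).digest()[:8]   # 8-byte truncated digest
                frames.append((i // CHUNK_SIZE, chunk, digest))
            return frames

        def verify(frames):
            """Return the sequence numbers of chunks that arrived corrupted."""
            bad = []
            for seq, chunk, digest in frames:
                if hashlib.sha256(chunk).digest()[:8] != digest:
                    bad.append(seq)
            return bad            # ask only for these to be re-sent

        data = b"sensor readings " * 100
        frames = frame_chunks(data)
        print(len(frames), "chunks,", len(verify(frames)), "corrupted")

    The overhead of this scheme is 8 bytes per 512-byte chunk (about 1.6%); an erasure code instead adds whole redundant chunks, which costs more to send but lets the receiver recover missing chunks without a retransmission round trip.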

    Read the article

  • C lang. -- Error: Segmentation fault

    - by user233542
    I don't understand why this would give me a segfault. Any ideas? This is the function that returns the signal to stop the program (below it is the other function that is called within it):

        double bisect(double A0, double A1, double Sol[N], double tol, double c)
        {
            double Amid, shot;
            while (A1 - A0 > tol) {
                Amid = 0.5*(A0+A1);
                shot = shoot(Sol, Amid, c);
                if (shot == 2.*Pi) {
                    return Amid;
                }
                if (shot > 2.*Pi) {
                    A1 = Amid;
                } else if (shot < 2.*Pi) {
                    A0 = Amid;
                }
            }
            return 0.5*(A1+A0);
        }

        double shoot(double Sol[N], double A, double c)
        {
            int i, j;
            /* Initial Conditions */
            for (i=0;i
            for (i=buff+2;i
            return Sol[i-1];
        }

    buff, l, and N are defined using a #define statement: l = 401, buff = 50, N = 2000. Thanks
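
    For readers unfamiliar with the pattern, the bisection logic in bisect (taken on its own, independent of the truncated shoot function) looks like this in Python; the target value of 2*pi and the toy objective function are placeholders:

        import math

        def bisect(lo, hi, f, target=2 * math.pi, tol=1e-9):
            """Shrink [lo, hi] until it is narrower than tol, homing in on f(x) == target.

            Assumes f is monotonically increasing on [lo, hi]; the real shoot()
            in the post plays the role of f.
            """
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                value = f(mid)
                if value == target:
                    return mid
                if value > target:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        # Toy example: find x with x**3 == 2*pi on [0, 4].
        print(bisect(0.0, 4.0, lambda x: x ** 3))   # ~1.8453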

    Read the article

  • Common Lisp, CFFI, and instantiating C structs

    - by andrew
    Hi, I've been on Google for about, oh, 3 hours looking for a solution to this "problem." I'm trying to figure out how to instantiate a C structure in Lisp using CFFI. I have a struct in C:

        struct cpVect { cpFloat x, y; };

    Simple, right? I have auto-generated CFFI bindings (SWIG, I think) to this struct:

        (cffi:defcstruct #.(chipmunk-lispify "cpVect" 'classname)
          (#.(chipmunk-lispify "x" 'slotname) :double)
          (#.(chipmunk-lispify "y" 'slotname) :double))

    This generates a struct "VECT" with slots :X and :Y, which foreign-slot-names confirms. (Please note that I neither generated the bindings nor programmed the C library (Chipmunk physics), but the actual functions are being called from Lisp just fine.) I've searched far and wide, and maybe I've seen it 100 times and glossed over it, but I cannot figure out how to create an instance of cpVect in Lisp to use in other functions. Note the function:

        cpShape *cpPolyShapeNew(cpBody *body, int numVerts, cpVect *verts, cpVect offset)

    It takes not only a cpVect, but also a pointer to a set of cpVects, which brings me to my second question: how do I create a pointer to a set of structs? I've been to http://common-lisp.net/project/cffi/manual/html_node/defcstruct.html and tried the code, but I get "Error: Unbound variable: PTR" (I'm in Clozure CL), not to mention that it looks to only return a pointer, not an instance. I'm new to Lisp; I've been going pretty strong so far, but this is the first real problem I've hit that I can't figure out. Thanks!

    Read the article
