Search Results

Search found 324 results on 13 pages for 'robin agrahari'.

Page 7/13

  • Why can't I extract a C++ type from a Python type using boost::python::extractor?

    - by Robin
    I've wrapped a C++ class using Py++ and everything is working great in Python. I can instantiate the C++ class, call methods, etc. I'm now trying to embed some Python into a C++ application. This is also working fine for the most part. I can call functions on a Python module, get return values, etc. The Python code I'm calling returns one of the classes that I wrapped:

        import _myextension as myext

        def run_script(arg):
            my_cpp_class = myext.MyClass()
            return my_cpp_class

    I'm calling this function from C++ like this:

        // ... excluding error checking, ref counting, etc. for brevity ...
        PyObject *pModule, *pFunc, *pArgs, *pReturnValue;
        Py_Initialize();
        pModule = PyImport_Import(PyString_FromString("cpp_interface"));
        pFunc = PyObject_GetAttrString(pModule, "run_script");
        pArgs = PyTuple_New(1);
        PyTuple_SetItem(pArgs, 0, PyString_FromString("an arg"));
        pReturnValue = PyObject_CallObject(pFunc, pArgs);

        bp::extract< MyClass& > extractor(pReturnValue); // PROBLEM IS HERE
        if (extractor.check()) {  // This check is always false
            MyClass& cls = extractor();
        }

    The problem is that the extractor never actually extracts/converts the PyObject* to MyClass (i.e. extractor.check() is always false). According to the docs this is the correct way to extract a wrapped C++ class. I've tried returning basic data types (ints/floats/dicts) from the Python function and all of them are extracted properly. Is there something I'm missing? Is there another way to get the data and cast it to MyClass?
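
    One hedged thing to try, sketched below, is to hand the result to Boost.Python's object layer before extracting, so the call runs through the same converter registry that the wrapped module registers. The module name cpp_interface and the run_script signature are taken from the question, and MyClass is assumed to come from the wrapper's headers; whether this changes the failing check depends on why the converter isn't being found, so treat it as a sketch rather than a confirmed fix.

        #include <boost/python.hpp>
        namespace bp = boost::python;

        void call_run_script()
        {
            Py_Initialize();
            // Import through Boost.Python so the extension's to/from-Python
            // converters are registered with the running interpreter.
            bp::object module = bp::import("cpp_interface");
            bp::object run_script = module.attr("run_script");
            bp::object result = run_script("an arg");  // same call as PyObject_CallObject above

            bp::extract<MyClass&> extractor(result);
            if (extractor.check()) {
                MyClass& cls = extractor();
                (void)cls;  // use the extracted reference here
            }
        }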

    Read the article

  • What is the code for a number sign in SharePoint?

    - by Robin
    I've created a link in SharePoint using the Content Editor Web Part. My link uses HTML to open up an email with the to, cc, subject, and body fields filled out. However, in the cc section I need a number sign (#) for a mailbox. When I use the HTML entity & #35; (minus the space), my entire code crumbles: everything starting at the "&" disappears. However, just for kicks I tried & pound; (minus the space) and the code works for a European pound sign. Not sure what else to try. Any suggestions?
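
    For what it's worth, inside a mailto: URL the "#" character starts the URL fragment, so it normally has to be percent-encoded as %23 rather than written as an HTML entity. A minimal sketch (addresses and subject are placeholders):

        <!-- "#" percent-encoded as %23 inside the mailto: URL -->
        <a href="mailto:someone@example.com?cc=mailbox%23name@example.com&amp;subject=Hello">
            Email the mailbox
        </a>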

    Read the article

  • iPhone, how to dynamically create a new entity (table) via the Core Data model?

    - by Robin
    As the topic says, I just want to create a new entity (table) in SQLite. My code is as follows:

        + (BOOL)CreateDataSet:(NSManagedObjectModel *)model
                   attributes:(NSDictionary *)attributes
                   entityName:(NSString *)entityName
        {
            NSEntityDescription *entityDef = [[NSEntityDescription alloc] init];
            [entityDef setName:entityName];
            [entityDef setManagedObjectClassName:entityName];
            [model setEntities:[NSArray arrayWithObject:entityDef]];

            //@step
            NSArray *properties = [CoreDataHelper CreateAttributes:attributes];
            [entityDef setProperties:properties];
            [entityDef release];
            return TRUE;
        }

    But it throws an error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Can't modify an immutable model.'
        *** Call stack at first throw:
        (
          0 CoreFoundation    0x01c5abe9 __exceptionPreprocess + 185
          1 libobjc.A.dylib   0x01daf5c2 objc_exception_throw + 47
          2 CoreData          0x0152634a -[NSManagedObjectModel(_NSInternalMethods) _throwIfNotEditable] + 106
          3 CoreData          0x01526904 -[NSManagedObjectModel setEntities:] + 36
          ....

    That seems to show the model is read-only. Can anyone help with this problem? Thanks.
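
    The exception text suggests the model being passed in has already been handed to a persistent store coordinator or managed object context; Core Data models are only editable before that happens. A minimal sketch under that assumption, building the entities on a freshly allocated model and only then creating the coordinator (MyEntity and the store setup are placeholders):

        // Editable only while it has not yet been used by a coordinator/context.
        NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];

        NSEntityDescription *entityDef = [[NSEntityDescription alloc] init];
        [entityDef setName:@"MyEntity"];
        [entityDef setManagedObjectClassName:@"MyEntity"];
        [model setEntities:[NSArray arrayWithObject:entityDef]];
        [entityDef release];

        // The model becomes immutable once it is used here.
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];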

    Read the article

  • iPhone, Core Data: does NSManagedObject use a lazy-load mechanism when it is created?

    - by Robin
    Hi all, I'm using Core Data in my app. I have defined a class that looks roughly like this:

        @interface Master : NSManagedObject {
        }
        @property (nonatomic, retain) NSSet *Details;
        ....

    The entity Master contains a property 'Details' that relates to another table; this is a typical Master-Details relationship. When I trace the app, I find that the value of the 'Details' property is constructed even though it is never invoked. I had assumed that Core Data would use some lazy mechanism (faulting) to improve performance, or maybe I missed some configuration step? Because the Master entity contains at least five 'child' table properties, I have to understand this problem before using Core Data. Any help? Thanks for your time!

    Read the article

  • Persistence scheme & state data for low memory situations (iPhone)

    - by Robin Jamieson
    What happens to state information held by a class's variables after coming back from a low-memory situation? I know that views will get unloaded and then reloaded later, but what about ancillary classes, and the data held in them, that are used by the controller that launched the view?

    Sample scenario in question:

        @interface MyCustomController : UIViewController {
            ServiceAuthenticator *authenticator;
        }
        - (id)initWithAuthenticator:(ServiceAuthenticator *)auth;
        // the user may press a button that will cause the authenticator
        // to post some data to the service.
        - (IBAction)doStuffButtonPressed:(id)sender;
        @end

        @interface ServiceAuthenticator {
            BOOL hasValidCredentials; // YES if user's credentials have been validated
            NSString *username;
            NSString *password;       // password is not stored in plain text
        }
        - (id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword;
        - (void)postData:(NSString *)data;
        @end

    The app delegate creates the ServiceAuthenticator class with some user data (read from a plist file) and the class logs the user in with the remote service. Inside MyAppDelegate's applicationDidFinishLaunching: method:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            ServiceAuthenticator *auth = [[ServiceAuthenticator alloc]
                initWithUserCredentials:username password:userPassword];

            MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...];
            controller.authenticator = auth;

            // Configure and show the window
            [window addSubview:..]; // make everything visible
            [window makeKeyAndVisible];
        }

    Then whenever the user presses a certain button, MyCustomController's doStuffButtonPressed: is invoked:

        - (IBAction)doStuffButtonPressed:(id)sender {
            [authenticator postData:someDataFromSender];
        }

    The authenticator in turn checks whether the user is logged in (a BOOL variable indicates the login state) and, if so, exchanges data with the remote service. The ServiceAuthenticator is the kind of class that validates the user's credentials only once; all subsequent calls to the object are to postData. Once a low-memory scenario occurs and the associated nib and MyCustomController get unloaded, what's the process for setting up the ServiceAuthenticator class and its former state again when they are reloaded? I'm periodically persisting all of the data in my actual model classes. Should I consider also persisting the state data in these utility-style classes? Is that the pattern to follow?
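
    On the last question, one hedged option for a utility object like ServiceAuthenticator is to adopt NSCoding and archive it alongside the model data whenever the app persists its state. The sketch below is illustrative only: it omits the networking side, uses a hypothetical archivePath, and the password itself would normally belong in the keychain rather than in the archive.

        @interface ServiceAuthenticator : NSObject <NSCoding> {
            BOOL hasValidCredentials;
            NSString *username;
        }
        @end

        @implementation ServiceAuthenticator

        - (void)encodeWithCoder:(NSCoder *)coder {
            [coder encodeBool:hasValidCredentials forKey:@"hasValidCredentials"];
            [coder encodeObject:username forKey:@"username"];
            // the password itself belongs in the keychain, not in the archive
        }

        - (id)initWithCoder:(NSCoder *)coder {
            if ((self = [super init])) {
                hasValidCredentials = [coder decodeBoolForKey:@"hasValidCredentials"];
                username = [[coder decodeObjectForKey:@"username"] retain];
            }
            return self;
        }

        @end

        // Archiving and restoring, e.g. from the app delegate:
        [NSKeyedArchiver archiveRootObject:authenticator toFile:archivePath];
        ServiceAuthenticator *restored =
            [NSKeyedUnarchiver unarchiveObjectWithFile:archivePath];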

    Read the article

  • Using R to open grib files

    - by robin girard
    I am using R to work with meteorological data. I proceed in two steps: 1. convert GRIB to NetCDF using the command-line function ncl_convert2nc from NCAR Command Language; 2. use the ncdf package in R to import the NetCDF data. I have two problems: 1. I would like to automate the treatment of many files within R. Can I call ncl_convert2nc from within R? 2. For some particular GRIB files, the conversion with the NCAR tool does not work. Is there another way or trick (other than transcription into NetCDF) to read GRIB files in R?
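
    On the first point, external command-line tools can be invoked from R with system(). A minimal sketch, assuming ncl_convert2nc is on the PATH and accepts the GRIB file name as its argument (the directory name and file patterns are placeholders, and where the .nc output lands depends on the tool's own options):

        # Convert every GRIB file in a directory, then read the resulting
        # NetCDF files with the ncdf package.
        library(ncdf)

        grib_files <- list.files("data", pattern = "\\.grb$", full.names = TRUE)
        for (f in grib_files) {
            status <- system(paste("ncl_convert2nc", shQuote(f)))
            if (status != 0) warning("conversion failed for ", f)
        }

        nc_files <- list.files(pattern = "\\.nc$")
        nc <- open.ncdf(nc_files[1])   # then get.var.ncdf(nc, "varname"), etc.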

    Read the article

  • How to parse a URL in C?

    - by Robin
    Hi, I'm wondering how to parse a URL into a URL object (of some kind) in C, so that I would be able to extract key/value pairs from the query string. I have looked at http://stackoverflow.com/questions/726122/best-ways-of-parsing-a-url-using-c and several other resources, even Google Code, but haven't found anything to my taste. And no, using sscanf or regex is not an alternative; that is, unless I have to write my own parser. I would be grateful for any tips or help on where I could find this!
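
    Absent a ready-made library, a small hand-rolled split of the query string is straightforward. The sketch below only walks the key=value&key=value part after the '?' and skips percent-decoding, which a real parser would also need:

        #include <stdio.h>
        #include <string.h>

        /* Walk the query string after '?', printing each key/value pair.
           Sketch only: no percent-decoding, no validation. */
        static void parse_query(const char *url)
        {
            const char *q = strchr(url, '?');
            if (!q)
                return;

            char buf[1024];
            strncpy(buf, q + 1, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';

            char *save = NULL;
            for (char *pair = strtok_r(buf, "&", &save); pair != NULL;
                 pair = strtok_r(NULL, "&", &save)) {
                char *eq = strchr(pair, '=');
                if (eq) {
                    *eq = '\0';
                    printf("key='%s' value='%s'\n", pair, eq + 1);
                } else {
                    printf("key='%s' (no value)\n", pair);
                }
            }
        }

        int main(void)
        {
            parse_query("http://example.com/page?a=1&b=two&flag");
            return 0;
        }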

    Read the article

  • Streaming to the correct network interface

    - by robin hood
    I have an IP cam that supports RTSP streaming. It's connected to a router with 2 network cards, with IP1 and IP2 addresses. I make 2 connections to the IP cam, by the IP1 and IP2 addresses, from the same IP, and I need to receive the corresponding streams through the correct network card, but both streams (RTP over UDP) go through IP1. How can this be resolved? I don't know whether the RTSP server binds its UDP sockets to the corresponding IP, and I don't know what IP stack is in the IP cam (weak end system or strong end system). I haven't found anything interesting in the router configuration. As I understand it, the routing table cannot help me because I'm connecting from the same IP; is that right? Sorry for the incomplete info, but it's all I have at the moment. Thanks for your time.

    Read the article

  • jQuery image gallery: non-functional fade effects

    - by Robin Knight
    Here is a simple image gallery script for fading divs with background images in and out. It is slow and not working properly: it would appear all images are appearing and disappearing together without any animation. This gallery should fade each image out into the next one.

        function gallery() {
            timerp = window.setInterval(function() {
                $('.cornerimg').fadeOut(2000);
                if ($('.cornerimg:visible') == $('.cornerimg').last()) {
                    $('.cornerimg').first().fadeIn(2000);
                } else {
                    $('.cornerimg').next().fadeIn(2000);
                };
            }, 6000);
        }
        }

    Any ideas what has gone wrong with it?
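
    A couple of things stand out: comparing two jQuery objects with == is always false, and .next() on the whole .cornerimg collection selects DOM siblings rather than "the next image" in the set. A hedged rewrite that tracks the current index explicitly might look like the sketch below (class name and timings kept from the question):

        function gallery() {
            var $imgs = $('.cornerimg');
            var current = 0;

            window.setInterval(function () {
                var next = (current + 1) % $imgs.length;  // wrap around after the last image
                $imgs.eq(current).fadeOut(2000);
                $imgs.eq(next).fadeIn(2000);
                current = next;
            }, 6000);
        }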

    Read the article

  • Data binding in a GridView column using Eval

    - by ROBIN
    I have a grid containing a column for displaying country names. I need to display the value in that column as the country code plus the first 10 letters of the country name (e.g. IN-India). I tried it using the Eval function within the item template:

        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label ID="CountryNameLabe" runat="server"
                    Text='<%# Eval("CorporateAddressCountry").SubString(0,6) %>'>
                </asp:Label>
            </ItemTemplate>
        </asp:TemplateField>

    But it shows an error. Can I use custom functions in Eval? Please help.
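
    A couple of hedged notes: Eval returns Object, so a string method can't be called on it directly, and the C# method name is Substring rather than SubString. Data-binding expressions can call a protected method defined in the page or control's code-behind, so one sketch (FormatCountry is a hypothetical helper that also guards against short or null values) is:

        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label ID="CountryNameLabe" runat="server"
                    Text='<%# FormatCountry(Eval("CorporateAddressCountry")) %>'>
                </asp:Label>
            </ItemTemplate>
        </asp:TemplateField>

    with the helper in the code-behind:

        // Hypothetical helper; truncates to 10 characters only when the value is long enough.
        protected string FormatCountry(object value)
        {
            string s = (value ?? "").ToString();
            return s.Length > 10 ? s.Substring(0, 10) : s;
        }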

    Read the article

  • How to configure an IP cam to stream using the right network card?

    - by robin hood
    I have an IP cam that supports RTSP streaming. It's connected to a router with 2 network cards, with IP1 and IP2 addresses. I make 2 connections to the IP cam, by the IP1 and IP2 addresses, from the same IP, and I need to receive the corresponding streams through the correct network card, but both streams (RTP over UDP) go through IP1. How can this be resolved? I don't know whether the RTSP server binds its UDP sockets to the corresponding IP, and I don't know what IP stack is in the IP cam (weak end system or strong end system). I haven't found anything interesting in the router configuration. As I understand it, the routing table cannot help me because I'm connecting from the same IP; is that right? Sorry for the incomplete info, but it's all I have at the moment. Thanks for your time.

    Read the article

  • Find all columns which have only NULL values in a MySQL table

    - by Robin v. G.
    The situation is as follows: I have a substantial number of tables, each with a substantial number of columns. I need to deal with this old and to-be-deprecated database for a new system, and I'm looking for a way to eliminate all columns that have - apparently - never been in use. I want to do this by filtering out all columns that have a value in any row, leaving me with the set of columns whose value is NULL in all rows. Of course I could manually sort every column descending, but that would take too long, as I'm dealing with loads of tables and columns; I estimate around 400 tables with up to 50 (!) columns per table. Is there any way I can get this information from the information_schema? EDIT: Here's an example:

        column_a  column_b  column_c  column_d
        NULL      NULL      NULL      1
        NULL      1         NULL      1
        NULL      1         NULL      NULL
        NULL      NULL      NULL      NULL

    The output should be 'column_a' and 'column_c', these being the only columns without any filled-in values.
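
    information_schema lists the columns but not their contents, so one hedged approach is to let it generate the per-column checks: COUNT(col) ignores NULLs, so a count of 0 means the column is never used. The sketch below emits one SELECT per column of the target schema ('mydb' is a placeholder); running the generated statements and keeping the rows where non_null_rows is 0 gives the all-NULL columns.

        -- Generate one COUNT query per column of schema 'mydb'.
        SELECT CONCAT('SELECT ''', TABLE_NAME, '.', COLUMN_NAME, ''' AS col, ',
                      'COUNT(`', COLUMN_NAME, '`) AS non_null_rows ',
                      'FROM `', TABLE_NAME, '`;') AS check_query
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'mydb';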

    Read the article

  • int[] to string in C#

    - by Robin Webdev
    Hi, I'm developing a client application in C# and the server is written in C++. The server uses:

        inline void StrToInts(int *pInts, int Num, const char *pStr)
        {
            int Index = 0;
            while(Num)
            {
                char aBuf[4] = {0,0,0,0};
                for(int c = 0; c < 4 && pStr[Index]; c++, Index++)
                    aBuf[c] = pStr[Index];
                *pInts = ((aBuf[0]+128)<<24)|((aBuf[1]+128)<<16)|((aBuf[2]+128)<<8)|(aBuf[3]+128);
                pInts++;
                Num--;
            }

            // null terminate
            pInts[-1] &= 0xffffff00;
        }

    to convert a string to an int[]. In my C# client I receive:

        int[4] { -14240, -12938, -16988, -8832 }

    How do I convert the array back to a string? I don't want to use unsafe code (e.g. pointers). All of my attempts resulted in unreadable strings.
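
    The packing above stores four characters per int, each as (ch + 128) in one byte with the first character in the high byte, so the reverse is to pull out each byte, subtract 128, and stop at the terminating zero. A managed sketch along those lines (no pointers) is below; if the results still look wrong, the byte order used when the ints were read off the wire would be the next thing to check.

        using System;
        using System.Text;

        static class IntStr
        {
            // Reverses the server's StrToInts packing: each int holds four
            // characters, each stored as (ch + 128) in one byte, high byte first.
            public static string IntsToStr(int[] ints)
            {
                var sb = new StringBuilder();
                foreach (int v in ints)
                {
                    for (int shift = 24; shift >= 0; shift -= 8)
                    {
                        int ch = ((v >> shift) & 0xFF) - 128;
                        if (ch == 0)            // the null terminator ends the string
                            return sb.ToString();
                        sb.Append((char)ch);
                    }
                }
                return sb.ToString();
            }
        }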

    Read the article

  • svn, how to download a file from the site 'tigris'

    - by Robin
    Hi, sorry if this is a stupid question. I just want to download the source code from the following link: http://argouml.tigris.org/source/browse/argouml/trunk/src/ I tried to use the command:

        svn checkout http://argouml.tigris.org/source/browse/argouml/trunk/

    but it throws an error:

        svn: OPTIONS of 'http://argouml.tigris.org/source/browse/argouml/trunk': 200 OK (http://argouml.tigris.org)

    How do I check out the source code from that URL? Many thanks for your help! Regards

    Read the article

  • IE 8 install: how to get version 8.0.6001.18702

    - by Robin
    I am having trouble installing the latest version of IE, namely version 8.0.6001.18702. I downloaded the installer from Microsoft, but when the installation is completed, the version number is reported as IE 8.0.6001.18702IC. This version does not work with all web applications, and I need to get the correct final version installed. The problem is compounded by the installation downloading updates from the Microsoft site, so that there is no real control over the final version you get. Any ideas?

    Read the article

  • mailto: anchor links unloading HTML5 video in Chrome

    - by Robin Pyon
    I have a very simple page with a <video> tag and an email anchor link: http://jsfiddle.net/6GquX/3/ Clicking the email link in Chrome (OS X 10.8 + Win7, 23.0.1271.97) invokes the beforeunloadchange event and causes the video to unload, which isn't the desired outcome. Curiously enough, if I let the video buffer a bit and then click the email link, the video keeps playing and doesn't unload. To my knowledge this only occurs in Chrome and I'm truly at a loss. Visiting any HTML5 video player site (videojs, flowplayer, etc.), starting an HTML5 video and then immediately simulating an email click with document.location.href = "mailto:[email protected]" in the dev console yields the same error. However, I'm inclined to think it's the way in which the video has been encoded, as I'm unable to recreate the above with a video downloaded from YouTube's HTML5 player: http://jsfiddle.net/6GquX/4/ (source)

    1. Is it possible that YouTube are encoding their videos in a particular way to combat this?
    2. Are there any strategies / hacks I can employ to get around this?

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part, and so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. A metadata block is nothing different to its inner mechanism than a normal data block, because there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. It's just that for most implementations you have to activate it by command, whereas certain devices do this by default or by design and you don't know about it. However, I'm not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing: you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should give some extra thought to putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the articles and their specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use its LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end, though, Robin is correct: it's yet another point why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
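
    As a concrete reference point for the ditto-block discussion above: the extra copies are a per-dataset property, and the uberblocks can be inspected with the zdb invocation the article already mentions. Pool and dataset names below are placeholders.

        # Keep two copies (ditto blocks) of every block in this dataset,
        # so redundancy exists even on a single-device pool.
        zfs set copies=2 tank/important

        # Inspect the uberblocks / rootblock pointers referred to above.
        zdb -uu tank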

    Read the article

  • Squid server - multiple origin servers with different domains

    - by jduncan
    I have 2 Squid servers load-balanced with F5 LTMs, set up as a reverse proxy. My problem: origin server A hosts domains 1, 2, and 3; origin server B hosts domains 4 and 5. How can I set up Squid so that it will cache all vhosts for both servers? My current config:

        cache_peer serverA parent 80 0 round-robin no-query originserver login=PASS

    If I add a second line:

        cache_peer serverB parent 80 0 round-robin no-query originserver login=PASS

    it only caches domains on serverB, and requests for serverA content generate 404 errors. I don't use Squid a whole lot, and all help is appreciated. Thanks.
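
    A hedged sketch of one common way to handle this: give each peer a name and map domains to peers with cache_peer_domain (cache_peer_access with ACLs works similarly). The domain names below are placeholders for domains 1-5, and the round-robin option is dropped here since each domain maps to exactly one origin.

        # Route each request to the origin server that actually hosts the domain.
        cache_peer serverA parent 80 0 no-query originserver login=PASS name=originA
        cache_peer_domain originA domain1.example.com domain2.example.com domain3.example.com

        cache_peer serverB parent 80 0 no-query originserver login=PASS name=originB
        cache_peer_domain originB domain4.example.com domain5.example.com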

    Read the article

  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

    The background. The Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate as the servers do not have a public routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:

        \Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections

    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end-users' machines to ask them to install a new root certificate either. TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay; the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

    What I have tried. I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process; specifically, when the RDP client is saying "initialising connection", it is now much slower. Quite apart from that, my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just removing the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another.
    I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also bypassed the round-robin DNS to see if it had any impact, and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide. I tried changing to using a dedicated redirector: basically, rather than using round-robin DNS, I pointed my DNS at the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.

    Read the article
