Search Results

Search found 10963 results on 439 pages for 'documentation generation'.

  • Error using `loess.smooth` but not `loess` or `lowess`

    - by Sandy
    I need to smooth some simulated data, but occasionally run into problems when the simulated ordinates to be smoothed are mostly the same value. Here is a small reproducible example of the simplest case. > x <- 0:50 > y <- rep(0,51) > loess.smooth(x,y) Error in simpleLoess(y, x, w, span, degree, FALSE, FALSE, normalize = FALSE, : NA/NaN/Inf in foreign function call (arg 1) loess(y~x), lowess(x,y), and their analogue in MATLAB produce the expected results without error on this example. I am using loess.smooth here because I need the estimates evaluated at a set number of points. According to the documentation, I believe loess.smooth and loess are using the same estimation functions, but the former is an "auxiliary function" to handle the evaluation points. The error seems to come from a C function: > traceback() 3: .C(R_loess_raw, as.double(pseudovalues), as.double(x), as.double(weights), as.double(weights), as.integer(D), as.integer(N), as.double(span), as.integer(degree), as.integer(nonparametric), as.integer(order.drop.sqr), as.integer(sum.drop.sqr), as.double(span * cell), as.character(surf.stat), temp = double(N), parameter = integer(7), a = integer(max.kd), xi = double(max.kd), vert = double(2 * D), vval = double((D + 1) * max.kd), diagonal = double(N), trL = double(1), delta1 = double(1), delta2 = double(1), as.integer(0L)) 2: simpleLoess(y, x, w, span, degree, FALSE, FALSE, normalize = FALSE, "none", "interpolate", control$cell, iterations, control$trace.hat) 1: loess.smooth(x, y) loess also calls simpleLoess, but with what appears to be different arguments. Of course, if you vary enough of the y values to be nonzero, loess.smooth runs without error, but I need the program to run in even the most extreme case. Hopefully, someone can help me with one and/or all of the following: Understand why only loess.smooth, and not the other functions, produces this error and find a solution for this problem. Find a work-around using loess but still evaluating the estimate at a specified number of points that can differ from the vector x. For example, I might want to use only x <- seq(0,50,10) in the smoothing, but evaluate the estimate at x <- 0:50. As far as I know, using predict with a new data frame will not properly handle this situation, but please let me know if I am missing something there. Handle the error in a way that doesn't stop the program from moving onto the next simulated data set. Thanks in advance for any help on this problem.

  • How do I design this?

    - by Akku
    How can I make this entire process a single event and draw the chart on a single click, following http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html? I am new to servlets, so please guide me.

    When a user clicks the "go" button with some input, the data goes to a servlet, say "Test3". The servlet processes the user's data and generates/feeds the data table dynamically, and then I call the HTML page to draw the chart as shown in the tutorial linked above. The problem is that when I call the servlet, it gives me a long JSON string in the browser, as shown in the tutorials: "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'1333639331',table:{cols:[{............................". When I then manually call the HTML page to draw the chart, I see the chart. But when I call the HTML page directly via the request dispatcher from the servlet, I don't get the result. This is my code and output; I need a suggestion on how I should approach calling the chart:

    public class Test3 extends HttpServlet implements DataTableGenerator { protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { DataSourceHelper.executeDataSourceServletFlow(request, response, this , isRestrictedAccessMode() ); RequestDispatcher rd; rd = request.getRequestDispatcher("new.html");// it call's the html page which draws the chart as per the data added by the servlet..... rd.include(request, response);//forward(request, response); @Override public Capabilities getCapabilities() { return Capabilities.NONE; } protected boolean isRestrictedAccessMode() { return false; } @Override public DataTable generateDataTable(Query query, HttpServletRequest request) { // Create a data table. DataTable data = new DataTable(); ArrayList<ColumnDescription> cd = new ArrayList<ColumnDescription>(); cd.add(new ColumnDescription("name", ValueType.TEXT, "Animal name")); cd.add.........

    I get the following result along with the unprocessed HTML page: google.visualization.Query.setResponse({version:'0.6',statu..... <html> <head> <title>Getting Started Example</title> .... (the entire HTML page, as it appears in the browser).

    What I need is this: when a user clicks the "go" button, the servlet should process the data and call the HTML page to draw the chart, without the JSON string appearing in the browser (all in one user click). What should my approach be, and how should I design this? There are no errors in the code: when I run the servlet I get the JSON string in the browser, and when I then run the HTML page manually the chart is drawn. So how can I do (servlet processing + HTML page drawing the chart as the final result) in one go, without the long JSON string appearing in the browser? There is no problem with the HTML code.

  • OpenLayers LineString not working

    - by AlexS
    Sorry to bother you guys, but I'm stuck with his problem for half a day. I want to draw poly line in OpenLayers using LineString object, so I've copied the example from documentation. It runs ok but i can't see the line on the screen Code looks like this <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> var map; var lineLayer ; var points; var style; var polygonFeature function test() { lineLayer = new OpenLayers.Layer.Vector("Line Layer"); style = { strokeColor: '#0000ff', strokeOpacity: 1, strokeWidth: 10 }; map.addLayer(lineLayer); points = new Array(); points[0] =new OpenLayers.LonLat(-2.460181,27.333984 ).transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject());; points[1] = new OpenLayers.LonLat(-3.864255,-22.5 ).transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject());; var linear_ring = new OpenLayers.Geometry.LinearRing(points); polygonFeature = new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Polygon([linear_ring]), null, style); lineLayer.addFeatures([polygonFeature]); alert("1"); } function initialize() { map = new OpenLayers.Map ("map_canvas", { controls:[ new OpenLayers.Control.Navigation(), new OpenLayers.Control.PanZoomBar(), new OpenLayers.Control.LayerSwitcher(), new OpenLayers.Control.Attribution()], maxExtent: new OpenLayers.Bounds(-20037508.34,-20037508.34,20037508.34,20037508.34), maxResolution: 156543.0399, numZoomLevels: 19, units: 'm', projection: new OpenLayers.Projection("EPSG:900913"), displayProjection: new OpenLayers.Projection("EPSG:4326") }); // Define the map layer // Here we use a predefined layer that will be kept up to date with URL changes layerMapnik = new OpenLayers.Layer.OSM.Mapnik("Mapnik"); map.addLayer(layerMapnik); var lonLat = new OpenLayers.LonLat(0, 0).transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject()); map.zoomTo(3); map.setCenter(lonLat, 19); test(); } </head> <body onload="initialize()" > <div id="map_canvas" style="width: 828px; height: 698px"></div> I'm sure I'm missing some parameter in creation of map or something but I really can't figure which one.

  • A Python Wrapper for Shutterfly. Uploading an Image

    - by iJames
    I'm working on a Django app in which I want to order prints through Shutterfly's Open API: http://www.shutterfly.com/documentation/start.sfly So far I've been able to build the appropriate POSTs and GETs using the modules and suggested code snippets including httplib, httplib2, urllib, urllib2, mimetype, etc. But I'm stuck on the image uploading when placing an order (the ordering process is not the same process as uploading images to albums which I haven't tried.) From what I can tell, I'm supposed to basically create the multipart form data by concatenating the HTTP request body together with the binary data of the image. I take the strings: --myuniqueboundary1273149960.175.1 Content-Disposition: form-data; name="AuthenticationID" auniqueauthenticationid --myuniqueboundary1273149960.175.1 Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg" Content-Type: image/jpeg and I put this data into it and end with the final boundary: ...\xb5|\xf88\x1dj\t@\xd9\'\x1f\xc6j\x88{\x8a\xc0\x18\x8eGaJG\x03\xe9J-\xd8\x96[\x91T\xc3\x0eTu\xf4\xaa\xa5Ty\x80\x01\x8c\x9f\xe9Z\xad\x8cg\xba# g\x18\xe2\xaa:\x829\x02\xb4["\x17Q\xe7\x801\xea?\xad7j\xfd\xa2\xdf\x81\xd2\x84D\xb6)\xa8\xcb\xc8O\\\x9a\xaf(\x1cqM\x98\x8d*\xb8\'h\xc8+\x8e:u\xaa\xf3*\x9b\x95\x05F8\xedN%\xcb\xe1B2\xa9~Tw\xedF\xc4\xfe\xe8\xfc\xa9\x983\xff\xd9... That ends up making it look like this (when I use print to debug): ... --myuniqueboundary1273149960.175.1 Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg" Content-Type: image/jpeg ????q?ExifMM* ? ??(1?2?<??i?b?NIKON CORPORATIONNIKON D40HHQuickTime 7.62009:02:17 13:05:25Mac OS X 10.5.6%??????"?'??0220?????? ???? ? ?|_???,b???50??5 ... --myuniqueboundary1273149960.175.1-- My code for grabbing the binary data is pretty much this: filedata = open('myjpegfile.jpeg','rb').read() Which I then add to the rest of the body. I've see something like this code everywhere. I'm then using this to post the full request (with the headers too): response = urllib2.urlopen(request).read() This seems to me to be the standard way that form POSTS with files happens. Am I missing something here? At some point I might be able to make this into a library worth posting up on github, but this problem has stopped me cold in my tracks. Thanks for any insight!
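
    For reference, the usual pitfalls when hand-building a multipart body like the one above are line endings and headers: parts must be separated by CRLF ("\r\n") rather than "\n", the request's Content-Type header must carry the same boundary string used in the body, and Content-Length should be the length of the final byte string. Below is a minimal sketch of that assembly, written for Python 3 (the question uses Python 2's urllib2, but the request construction is analogous). The field names "AuthenticationID" and "Image.Data" are taken from the question; the endpoint URL is a placeholder, not Shutterfly's real ordering URL.

        import uuid
        import urllib.request

        def build_multipart(auth_id, image_path, filename="1_41_orig.jpg"):
            # Assemble a multipart/form-data body by hand; parts are joined with CRLF.
            boundary = uuid.uuid4().hex
            with open(image_path, "rb") as f:
                image_data = f.read()
            parts = [
                b"--" + boundary.encode(),
                b'Content-Disposition: form-data; name="AuthenticationID"',
                b"",
                auth_id.encode(),
                b"--" + boundary.encode(),
                ('Content-Disposition: file; name="Image.Data"; filename="%s"' % filename).encode(),
                b"Content-Type: image/jpeg",
                b"",
                image_data,
                b"--" + boundary.encode() + b"--",
                b"",
            ]
            body = b"\r\n".join(parts)
            content_type = "multipart/form-data; boundary=" + boundary
            return body, content_type

        # "1_41_orig.jpg" is the local image file named in the question.
        body, content_type = build_multipart("auniqueauthenticationid", "1_41_orig.jpg")
        request = urllib.request.Request(
            "https://example.invalid/order",  # placeholder URL, not the real Shutterfly endpoint
            data=body,
            headers={"Content-Type": content_type, "Content-Length": str(len(body))},
        )
        # response = urllib.request.urlopen(request).read()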

  • Error at lapack cgesv when matrix is not singular

    - by Jan Malec
    This is my first post. I usually ask classmates for help, but they have a lot of work now and I'm too desperate to figure this out on my own :). I am working on a project for school and I have come to a point where I need to solve a system of linear equations with complex numbers. I have decided to call lapack routine "cgesv" from c++. I use the c++ complex library to work with complex numbers. Problem is, when I call the routine, I get error code "2". From lapack documentation: INFO is INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value > 0: if INFO = i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed. Therefore, the element U(2, 2) should be zero, but it is not. This is how I declare the function: void cgesv_( int* N, int* NRHS, std::complex* A, int* lda, int* ipiv, std::complex* B, int* ldb, int* INFO ); This is how I use it: int *IPIV = new int[NA]; int INFO, NRHS = 1; std::complex<double> *aMatrix = new std::complex<double>[NA*NA]; for(int i=0; i<NA; i++){ for(int j=0; j<NA; j++){ aMatrix[j*NA+i] = A[i][j]; } } cgesv_( &NA, &NRHS, aMatrix, &NA, IPIV, B, &NB, &INFO ); And this is how the matrix looks like: (1,-160.85) (0,0.000306796) (0,-0) (0,-0) (0,-0) (0,0.000306796) (1,-40.213) (0,0.000306796) (0,-0) (0,-0) (0,-0) (0,0.000306796) (1,-0.000613592) (0,0.000306796) (0,-0) (0,-0) (0,-0) (0,0.000306796) (1,-40.213) (0,0.000306796) (0,-0) (0,-0) (0,-0) (0,0.000306796) (1,-160.85) I had to split the matrix colums, otherwise it did not format correctly. My first suspicion was that complex is not parsed correctly, but I have used lapack functions with complex numbers before this way. Any ideas?

  • How can I use functools.partial on multiple methods on an object, and freeze parameters out of order

    - by Joseph Garvin
    I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameter being frozen (think of it as generalizing partial to apply to classes). I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works: >>> def foo(x, y): ... print x, y ... >>> bar = bind(foo, y=3) >>> bar(2) 2 3 But my proxy class does not work, and I'm not sure why: >>> class Foo(object): ... def bar(self, x, y): ... print x, y ... >>> a = Foo() >>> b = PureProxy(a, bar=bind(Foo.bar, y=3)) >>> b.bar(2) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: bar() takes exactly 3 arguments (2 given) I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows. from types import MethodType class PureProxy(object): def __init__(self, underlying, **substitutions): self.underlying = underlying for name in substitutions: subst_attr = substitutions[name] if hasattr(subst_attr, "underlying"): setattr(self, name, MethodType(subst_attr, self, PureProxy)) def __getattribute__(self, name): return getattr(object.__getattribute__(self, "underlying"), name) def bind(f, *args, **kwargs): """ Lets you freeze arguments of a function be certain values. Unlike functools.partial, you can freeze arguments by name, which has the bonus of letting you freeze them out of order. args will be treated just like partial, but kwargs will properly take into account if you are specifying a regular argument by name. """ argspec = inspect.getargspec(f) argdict = copy(kwargs) if hasattr(f, "im_func"): f = f.im_func args_idx = 0 for arg in argspec.args: if args_idx >= len(args): break argdict[arg] = args[args_idx] args_idx += 1 num_plugged = args_idx def new_func(*inner_args, **inner_kwargs): args_idx = 0 for arg in argspec.args[num_plugged:]: if arg in argdict: continue if args_idx >= len(inner_args): # We can't raise an error here because some remaining arguments # may have been passed in by keyword. break argdict[arg] = inner_args[args_idx] args_idx += 1 f(**dict(argdict, **inner_kwargs)) new_func.underlying = f return new_func
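
    As a point of comparison, the standard functools.partial already accepts keyword arguments, so freezing a parameter "out of order" on a bound method needs no custom bind. And a proxy built on __getattr__, which is only consulted when normal attribute lookup fails (unlike the __getattribute__ override above, which forwards every lookup to the underlying object and so never sees the substituted method), avoids the TypeError shown. This is only one possible approach, sketched for Python 3:

        from functools import partial

        class Foo(object):
            def bar(self, x, y):
                print(x, y)

        class PureProxy(object):
            def __init__(self, underlying, **substitutions):
                self._underlying = underlying
                self._substitutions = substitutions

            def __getattr__(self, name):
                # Called only when normal lookup fails, so _underlying and
                # _substitutions themselves are never intercepted here.
                if name in self._substitutions:
                    return self._substitutions[name]
                return getattr(self._underlying, name)

        a = Foo()
        b = PureProxy(a, bar=partial(a.bar, y=3))  # freeze y on the *bound* method
        b.bar(2)   # prints "2 3"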

  • How can I make Swig correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java and I was quite happy to see that Swig was able to handle the header file and it generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse); This integer returned by this function is an error code in case it fails. The arguments are buff is a character buffer len is the length of the data in the buffer curse the another character buffer that contains the result of calling DustyVoodoo So, you can see where this is going, the result is actually coming back via the third argument. Also len is confusing since it may be the length of both buffers, they are always allocated as being the same size in calling code but given what DustyVoodoo does I don't think that they need be the same. To be safe both buffers should be the same size in practice, say 512 chars. The C code generated for the binding is as follows: SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls, jstring jarg1, jint jarg2, jstring jarg3) { jint jresult = 0 ; char *arg1 = (char *) 0 ; int arg2 ; char *arg3 = (char *) 0 ; int result; (void)jenv; (void)jcls; arg1 = 0; if (jarg1) { arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0); if (!arg1) return 0; } arg2 = (int)jarg2; arg3 = 0; if (jarg3) { arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0); if (!arg3) return 0; } result = (int)PemnEncrypt(arg1,arg2,arg3); jresult = (jint)result; if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1); if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3); return jresult; } It is correct for what it does; however, it misses the fact that cursed is not just an input, it is altered by the function and should be returned as an output. It also does not know that the java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masers in the house?

  • How do I access SourceGear Web services using SOAP::Lite?

    - by user565793
    For some reason SourceGear provide an undocumented Web service on their installations. They actually ask developers to use the API instead because the Web service is kinda messy, but this is a problem in my case because I cannot use this API on a Perl environment, so their solution for my specific case is to use the Web service. This shouldn't be a problem. Using SOAP::Lite I have connected to several Web services in the past in the same way. But the lack of documentation is a major chaos if you don't know where the SOAP calls can be made. I only have an XML to decipher where to and how to make these calls. It would be great if a real SOAP genius could help me out in this. This is an example of the login call and the expected response: Request POST /fortress/dragnetwebservice.asmx HTTP/1.1 Host: velecloudserver Content-Type: text/xml; charset=utf-8 Content-Length: length SOAPAction: "http://www.sourcegear.com/schemas/dragnet/LoginPlainText" <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet"> <strLogin>string</strLogin> <strPassword>string</strPassword> </LoginPlainText> </soap:Body> </soap:Envelope> Response HTTP/1.1 200 OK Content-Type: text/xml; charset=utf-8 Content-Length: length <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainTextResponse xmlns="http://www.sourcegear.com/schemas/dragnet"> <LoginPlainTextResult>int</LoginPlainTextResult> <strAuthTicket>string</strAuthTicket> </LoginPlainTextResponse> </soap:Body> </soap:Envelope> I'm looking for a way to be able to assemble this somehow. This is my Perl example: my $soap = SOAP::Lite -> uri ('http://velecloudserver/fortress/dragnetwebservice.asmx') -> proxy('http://velecloudserver/fortress/dragnetwebservice.asmx/LoginPlainText'); my $som = $soap->call('LoginPlainText', SOAP::Data->name('LoginPlainText')->value( \SOAP::Data->value([ SOAP::Data->name('strLogin')->value( 'admin' ), SOAP::Data->name('strPassword')->value('Adm1234'), ])) ); Any tip would be appreciated.

  • How do I make a SignalR client do something when it receives a message?

    - by Ben
    I want to do something when a client receives a message via a SignalR hub. I've put together a basic chat app, but want people to be able to "like" a chat comment. I'm thinking the way to do this is to find the chat message on the client's page and update it using javascript. In the meantime to "prove the concept" I just want to make an alert popup on the client machine to say another user likes the comment. Trouble is, I'm not sure where to put it. (Am struggling to find SignalR documentation to be honest.) can't get my head round what is calling what here. My ChatHub class is as follows: public class ChatHub : Hub { public void Send(string name, string message) { // Call the broadcastMessage method to update clients. Clients.All.broadcastMessage(name, message); } } And my JavaScript is: $(function () { // Declare a proxy to reference the hub. var chat = $.connection.chatHub; // Create a function that the hub can call to broadcast messages. chat.client.broadcastMessage = function (name, message) { // Html encode display name and message. var encodedName = $('<div />').text(name).html(); var encodedMsg = $('<div />').text(message).html(); // Add the message to the page. var divContent = $('#discussion').html(); $('#discussion').html('<div class="container">' + '<div class="content">' + '<p class="username">' + encodedName + '</p>' + '<p class="message">' + encodedMsg + '</p>' + '</div>' + '<div class="slideout">' + '<div class="clickme" onclick="slideMenu(this)"></div>' + '<div class="slidebutton"><img id="imgid" onclick="likeButtonClick(this)" src="Images/like.png" /></div>' + '<div class="slidebutton"><img onclick="commentButtonClick(this)" src="Images/comment.png" /></div>' + '<div class="slidebutton" style="margin-right:0px"><img onclick="redcardButtonClick(this)" src="Images/redcard.png" /></div>' + '</div>' + '</div>' + divContent); }; // Set initial focus to message input box. $('#message').focus(); // Start the connection. $.connection.hub.start().done(function () { $('#sendmessage').click(function () { // Call the Send method on the hub. chat.server.send($('#lblUser').html(), $('#message').val()); // Clear text box and reset focus for next comment. $('#message').val('').focus(); }); });

  • What's the correct way to use Cakephp urls?

    - by Pichan
    Hello all, it's my first post here :) I'm having some difficulties with dealing with urls and parameters. I've gone through the router class api documentation over and over again and found nothing useful. First of all, I'd like to know if there is any 'universal' format in CakePHP(1.3) for handling urls. I'm currently handling all my urls as simple arrays(in the format that Router::url and $html-link accepts) and it's easy as long as I only need to pass them as arguments to cake's own methods. It usually gets tricky if I need something else. Mainly I'm having problems with converting string urls to the basic array format. Let's say I want to convert $arrayUrl to string and than again into url: $arrayUrl=array('controller'=>'SomeController','action'=>'someAction','someValue'); $url=Router::url($arrayUrl); //$url is now '/path/to/site/someController/someAction/someValue' $url=Router::normalize($url); //remove '/path/to/site' $parsed=Router::parse($url); /*$parsed is now Array( [controller] => someController [action] => someAction [named] => Array() [pass] => Array([0] => someValue) [plugin] => ) */ That seems an awful lot of code to do something as simple as to convert between 2 core formats. Also, note that $parsed is still not in the same as $arrayUrl. Of course I could tweak $parsed manually and actually I've done that a few times as a quick patch but I'd like to get to the bottom of this. I also noticed that when using prefix routing, $this-params in controller has the prefix embedded in the action(i.e. [action] = 'admin_edit') and the result of Router::parse() does not. Both of course have the prefix in it's own key. To summarize, how do I convert an url between any of these 3(or 4, if you include the prefix thing) mentioned formats the right way? Of course it would be easy to hack my way through this, but I'd still like to believe that cake is being developed by a bunch of people who have a lot more experience and insight than me, so I'm guessing there's a good reason for this "perceived misbehavior". I've tried to present my problem as good as I can, but due to my rusty english skills, I had to take a few detours :) I'll explain more if needed.

  • [NSIS] Custom radio-button INI page via Eclipse

    - by Omegazero
    I'm using Eclipse's create InstallOptions menu to create a custom INI page with radio-buttons for repackaging the Blackberry Desktop installer. There are 2 sections for each type: "Internet" and "Enterprise". I need a user to select 1 of the 2 options and depending on their selection, the page will carry over the selection chosen in the custom page, jump to the INSTFILES page, and continue onto the end. I couldn't find any concrete documentation on getting INI pages to load in the script (I'm probably searching incorrectly), and then pass data from one page to the next (according to fields I guess?) Any help is appreciated. Even if it's to tell me I'm blind and can't read a doc (though a link would help :) ) Here's the INI code: ; Auto-generated by EclipseNSIS InstallOptions Script Wizard ; Jul 29, 2009 5:42:56 PM [Settings] NumFields=7 Title=RIM BlackBerry Desktop 5.0 installation CancelEnabled=1 [Field 1] Type=RadioButton Left=15 Top=28 Right=100 Bottom=38 Text=Internet State= Flags=NOTIFY [Field 4] Type=RadioButton Left=15 Top=95 Right=100 Bottom=105 Text=Enterprise Flags=NOTIFY [Field 2] Type=GroupBox Left=0 Top=10 Right=300 Bottom=75 Text= [Field 5] Type=Label Left=30 Top=42 Right=235 Bottom=52 Text=For users who are NOT on the Enterprise (Exchange) server [Field 6] Type=Label Left=30 Top=111 Right=235 Bottom=121 Text=Choose this only if you are on the Exchange server [Field 3] Type=GroupBox Left=0 Top=75 Right=300 Bottom=140 [Field 7] Type=Label Left=0 Top=0 Right=130 Bottom=10 Text=Please choose your installation method ...And here's the NSI code: Auto-generated by EclipseNSIS Script Wizard Jul 29, 2009 5:42:16 PM Name "BlackBerry Desktop" RequestExecutionLevel admin General Symbol Definitions !define VERSION 5.0.0.11 !define COMPANY RIM !define URL http://www.blackberry.com MUI Symbol Definitions !define MUI_ICON BBD.ico !define MUI_LICENSEPAGE_RADIOBUTTONS Included files !include Sections.nsh !include MUI2.nsh Reserved Files ReserveFile "${NSISDIR}\Plugins\AdvSplash.dll" Installer pages !insertmacro MUI_PAGE_WELCOME !insertmacro MUI_PAGE_LICENSE license.txt !insertmacro MUI_PAGE_COMPONENTS !insertmacro MUI_PAGE_INSTFILES !insertmacro MUI_PAGE_FINISH Installer languages !insertmacro MUI_LANGUAGE English Installer attributes OutFile RIM_BlackBerry_Desktop_5.0.exe InstallDir "$TEMP\RIM BlackBerry Desktop 5.0 Setup Files" CRCCheck on XPStyle on ShowInstDetails hide VIProductVersion 5.0.0.11 VIAddVersionKey /LANG=${LANG_ENGLISH} ProductName "BlackBerry Desktop" VIAddVersionKey /LANG=${LANG_ENGLISH} ProductVersion "${VERSION}" VIAddVersionKey /LANG=${LANG_ENGLISH} CompanyName "${COMPANY}" VIAddVersionKey /LANG=${LANG_ENGLISH} CompanyWebsite "${URL}" VIAddVersionKey /LANG=${LANG_ENGLISH} FileVersion "${VERSION}" VIAddVersionKey /LANG=${LANG_ENGLISH} FileDescription "" VIAddVersionKey /LANG=${LANG_ENGLISH} LegalCopyright "" Installer sections Section /o Main SEC0000 SetOutPath $INSTDIR SetOverwrite ifdiff ; TESTING PHASE SectionEnd SectionGroup /e "BlackBerry Desktop Section" Section /o Internet SEC0001 SetOutPath $INSTDIR\DRIVERS SetOverwrite ifdiff ; Execwait 'msiexec /i "$INSTDIR\BlackBerry USB and Modem Drivers_ENG (DM5.0b28).msi" /passive' SetOutPath $INSTDIR SetOverwrite ifdiff ; File /r * ; ExecWait '"$INSTDIR\Setup.exe" /S/v/qb!' 
SectionEnd Section /o Enterprise SEC0002 SetOutPath $INSTDIR\DRIVERS SetOverwrite ifdiff ; Execwait 'msiexec /i "$INSTDIR\BlackBerry USB and Modem Drivers_ENG (DM5.0b28).msi" /passive' SetOutPath $INSTDIR SetOverwrite ifdiff ; File /r * ; Delete /REBOOTOK "$INSTDIR\Setup.ini" ; Rename /REBOOTOK "$INSTDIR\Setup_Enterprise.ini" "$INSTDIR\Setup.ini" ; ExecWait '"$INSTDIR\Setup.exe" /S/v/qb!' SectionEnd SectionGroupEnd Section Descriptions !insertmacro MUI_FUNCTION_DESCRIPTION_BEGIN !insertmacro MUI_DESCRIPTION_TEXT ${SEC0000} $(SEC0000_DESC) !insertmacro MUI_DESCRIPTION_TEXT ${SEC0001} $(SEC0001_DESC) !insertmacro MUI_FUNCTION_DESCRIPTION_END Installer Language Strings TODO Update the Language Strings with the appropriate translations. LangString SEC0000_DESC ${LANG_ENGLISH} "Installation for non-Exchange/Enterprise BlackBerry Users" LangString SEC0001_DESC ${LANG_ENGLISH} "Installation for Exchange/Enterprise BlackBerry Users"

  • Mapping two tables 0..n in Hibernate

    - by simon
    I have a table Users CREATE TABLE "USERS" ( "ID" NUMBER NOT NULL , "LOGINNAME" VARCHAR2 (150) NOT NULL ) and I have a second table SpecialUsers. No UserId can occur twice in the SpecialUsers table, and only a small subset of the ids of users in the Users table are contained in the SpecialUsers table. CREATE TABLE "SPECIALUSERS" ( "USERID" NUMBER NOT NULL, CONSTRAINT "PK_SPECIALUSERS" PRIMARY KEY ("USERID") ) ALTER TABLE "SPECIALUSERS" ADD CONSTRAINT "FK_SPECIALUSERS_USERID" FOREIGN KEY ("USERID") REFERENCES "USERS" ("ID") / Mapping the Users table in Hibernate works ok <hibernate-mapping package="com.initech.domain"> <class name="com.initech.User" table="USERS"> <id name="id" column="ID" type="java.lang.Long"> <meta attribute="use-in-tostring">true</meta> <generator class="sequence"> <param name="sequence">SEQ_USERS_ID</param> </generator> </id> <property name="loginName" column="LOGINNAME" type="java.lang.String" not-null="true"> <meta attribute="use-in-tostring">true</meta> </property> </class> </hibernate-mapping> But I'm struggling in creating the mapping for the SpecialUsers table. All the examples (e.g. in Hibernate documentation) in Internet I found don't have this type of self-reference. I tried a mapping like this: <hibernate-mapping package="com.initech.domain"> <class name="com.initech.User" table="SPECIALUSERS"> <id name="id" column="USERID"> <meta attribute="use-in-tostring">true</meta> <generator class="foreign"> <param name="property">user</param> </generator> </id> <one-to-one name="user" class="User"/> </class> </hibernate-mapping> but got the error Invocation of init method failed; nested exception is org.hibernate.DuplicateMappingException: Duplicate class/entity mapping com.initech.User How should I map the SpecialUsers table? What I need on the application level is a list of the User objects contained in the SpecialUsers table.

  • What is the proper use of boost::fusion::push_back?

    - by Kyle
    // ... snipped includes for iostream and fusion ... namespace fusion = boost::fusion; class Base { protected: int x; public: Base() : x(0) {} void chug() { x++; cout << "I'm a base.. x is now " << x << endl; } }; class Alpha : public Base { public: void chug() { x += 2; cout << "Hi, I'm an alpha, x is now " << x << endl; } }; class Bravo : public Base { public: void chug() { x += 3; cout << "Hello, I'm a bravo; x is now " << x << endl; } }; struct chug { template<typename T> void operator()(T& t) const { t->chug(); } }; int main() { typedef fusion::vector<Base*, Alpha*, Bravo*, Base*> Stuff; Stuff stuff(new Base, new Alpha, new Bravo, new Base); fusion::for_each(stuff, chug()); // Mutates each element in stuff as expected /* Output: I'm a base.. x is now 1 Hi, I'm an alpha, x is now 2 Hello, I'm a bravo; x is now 3 I'm a base.. x is now 1 */ cout << endl; // If I don't put 'const' in front of Stuff... typedef fusion::result_of::push_back<const Stuff, Alpha*>::type NewStuff; // ... then this complains because it wants stuff to be const: NewStuff newStuff = fusion::push_back(stuff, new Alpha); // ... But since stuff is now const, I can no longer mutate its elements :( fusion::for_each(newStuff, chug()); return 0; }; How do I get for_each(newStuff, chug()) to work? (Note: I'm only assuming from the overly brief documentation on boost::fusion that I am supposed to create a new vector every time I call push_back.)

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(); others are tailored for OLAP. Is it possible to learn more detailed logic about how the Informix IDS and SE optimizers decide the best route for processing a query, other than through SET EXPLAIN? Is there any documentation that illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks first; what are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL(INT), which allows for a maximum of about 2 billion rows, or maybe it uses INT9 for terabyte-scale row counts? However, I think Oracle uses hex values for ROWID. Too bad ROWID can't be used very often, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used to implement the query progress idea I mentioned in my "Begin viewing query results before query completes" question?

    For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, progress displayed every 100, 200, 500 or 1,000 rows, the ability for users to cancel at any time, and qualifying rows displayed as they are added to the current list while the search continues. This is just one example; we could probably think of other neat/useful features, since the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than is currently available. OLTP queries tend to be mostly static and predefined; the "what-ifs" are more OLAP, so let's try to add more control and intelligence there. Being able to precisely control a query, rather than merely "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations, and maybe even tell IDS to read blocks of index nodes at a time instead of one by one, and so on.

  • cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of gpu RAM throws bad_alloc

    - by Sven K
    I have just started using thrust and one of the biggest issues I have so far is that there seems to be no documentation as to how much memory operations require. So I am not sure why the code below is throwing bad_alloc when trying to sort (before the sorting I still have 50% of GPU memory available, and I have 70GB of RAM available on the CPU)--can anyone shed some light on this? #include <thrust/device_vector.h> #include <thrust/sort.h> #include <thrust/random.h> void initialize_data(thrust::device_vector<uint64_t>& data) { thrust::fill(data.begin(), data.end(), 10); } #define BUFFERS 3 int main(void) { size_t N = 120 * 1024 * 1024; char line[256]; try { std::cout << "device_vector" << std::endl; typedef thrust::device_vector<uint64_t> vec64_t; // Each buffer is 900MB vec64_t c[3] = {vec64_t(N), vec64_t(N), vec64_t(N)}; initialize_data(c[0]); initialize_data(c[1]); initialize_data(c[2]); std::cout << "initialize_data finished... Press enter"; std::cin.getline(line, 0); // nvidia-smi reports 48% memory usage at this point (2959MB of // 6143MB) std::cout << "sort_by_key col 0" << std::endl; // throws bad_alloc thrust::sort_by_key(c[0].begin(), c[0].end(), thrust::make_zip_iterator(thrust::make_tuple(c[1].begin(), c[2].begin()))); std::cout << "sort_by_key col 1" << std::endl; thrust::sort_by_key(c[1].begin(), c[1].end(), thrust::make_zip_iterator(thrust::make_tuple(c[0].begin(), c[2].begin()))); } catch(thrust::system_error &e) { std::cerr << "Error: " << e.what() << std::endl; exit(-1); } return 0; }

  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15km fibre optic line across which fibrechannel and 10GbE is multiplexed (passive optical CWDM). For FC we have long distance lasers suitable up to 40km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs which can do max. 4Gb fibrechannel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550,1570,1590 and 1610nm for FC and 1530nm for 10GbE. The problem is the 4GbFC fabrics are almost never clean. Sometimes they are for a while even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently in the order of 50-100 errors per 5 minutes when with 1Gb/s traffic. Optics Here is the power output of one port summarized (collected using sfpshow on different switches) SITE-A units=uW (microwatt) SITE-B ********************************************** FAB1 SW1 TX 1234.3 RX 49.1 SW3 1550nm (ko) RX 95.2 TX 1175.6 FAB2 SW2 TX 1422.0 RX 104.6 SW4 1610nm (ok) RX 54.3 TX 1468.4 What I find curious at this point is the asymmetry in the power levels. While SW2 transmits with 1422uW which SW4 receives with 104uW, SW2 only receives the SW4 signal with similar original power only with 54uW. Vice versa for SW1-3. Anyway the SFPs have RX sensitivity down to -18dBm (ca. 20uW) so in any case it should be fine... But nothing is. Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550nm ones shown above with "ko"). The 1610nm ones apparently are ok, they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements but for some reason I don't believe it will make things better as the apparently good ones don't produce ZERO errors either. Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line. No idea why. That equipment was eliminated because of the problems so we now only have: the long distance laser in the switch, (new) 10m LC-SC monomode cable to the mux (for each fabric), the leased line, the same thing but reversed on the other side of the link. FC switches Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously) Area Number: 0 Speed Level: 4G Fill Word(On Active) 0(Idle-Idle) Fill Word(Current) 0(Idle-Idle) AL_PA Offset 13: OFF Trunk Port ON Long Distance LS VC Link Init OFF Desired Distance 32 Km Reserved Buffers 70 Locked L_Port OFF Locked G_Port OFF Disabled E_Port OFF Locked E_Port OFF ISL R_RDY Mode OFF RSCN Suppressed OFF Persistent Disable OFF LOS TOV enable OFF NPIV capability ON QOS E_Port OFF Port Auto Disable: OFF Rate Limit OFF EX Port OFF Mirror Port OFF Credit Recovery ON F_Port Buffers OFF Fault Delay: 0(R_A_TOV) NPIV PP Limit: 126 CSCTL mode: OFF Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next or how to proceed? If we can't make 4GbFC work reliably I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable. Oh and BTW we are in contact with everyone of the manufacturers (FC switch, MUX, SFPs, ...) Except for the SFPs to be changed (some have been changed before) nobody has a clue. Brocade SAN Health says the fabric is ok. MUX, well, it's passive, it's only a prism, nature at it's best. 
Any shots in the dark? APPENDIX: Answers to your questions @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning when we still had the active MUX we rented a longdistance laser once to put it into the switch directly in order to make tests for a day, during that day of course it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean to rebuild the entire SAN with those only to test. Alternative SFPs, well they're hard to come by just like that. @longneck: The line is rented. It's a dark fibre (9um monomode) so there's noone else on it. Sure there are splices. I can't go and look but I have to trust they have been done correctly. As I said the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself because it's way too expensive. @mdpc: What would be the "wrong" type of cable according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah I know there are the green ones where the fibre is cut off at a certain angle etc. But we have the correct ones for all that I know. Progress Report #1 We have had two fabrics (=2x2 switches) with Brocade 5100s with FabricOS 6.4.1 and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the longdistance ISLs (one in each fabric) it turned out that with FOS 6.4.1 setting it to long distance issues warnings about the VC Init setting and consequently the fill word. But those are only warnings. FOS 7.0.2 requires you to do modifications to VCI and the fillword for long distance links. Setting FOS 6.4.1 to the LS (long-distance static distance) setting with wrong VCI and fillword setting made the whole fabric inoperational (stuck in an SCN loop, use fabriclog -s to see, you don't see it anywhere else, no port error counters or anything increasing). Currently I'm giving the one fabric with the IMHO more correct settings a beating and it seems to do fine, whereas the other one without much traffic still has errors here and there. In short: We have eliminated the active part of the MUX (the FC retimer). We are putting the long distance SFPs into the end equipment themselves. Just to be sure we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX. We are now trying out several long distance configs. It's almost black magic. Everything that happens is mostly empirical, noone seems to have a clue what are the exact reasons to do something. ("We have tried this, and it didn't work, then we tried that and it worked, so we stuck with that." But noone really seems to know why.) I'll keep you updated. Progress Report #2 We got the new lasers for one of the fabrics on warranty. It's ultra clean even on 4GbFC. They're transmitting with roughly 2mW (3dBm) whereas the others are only at 1.5mW (1.5dBm) although that should really be enough. The other fabric (where the lasers are apparently ok) still produces one or two CRCs infrequently. Using sfpshow the SFP producing the actual RX errors shows Status/Ctrl: 0x82 Alarm flags[0,1] = 0x5, 0x40 Warn Flags[0,1] = 0x5, 0x40 Now I'll have to find out what that means. Not sure if it was there before. Well I'll first clear my head with a week of vacation. 8-)

  • Troubleshooting Multiple Endpoints Problem in WCF

    - by omatase
    I have been using WCF for a few years now and am fairly comfortable with it, however there is one simple WCF concept that I have yet to employ and am having difficulties with it. Following this article about WCF addressing as it specifically relates to multiple endpoints in IIS I see these two excerpts: "Suppose you have a file named calc.svc and you place it in a virtual directory that corresponds to (http://localhost:8080/calcservice). The base address for this service will be (http://localhost:8080/calcservice/calc.svc)." and "Now, consider the endpoint configuration found in the virtual directory’s web.config file (in Figure 3). In this case, the address of the first endpoint becomes the same as the base address (http://localhost:8080/calcservice/calc.svc) since I left the endpoint address empty. The address of the second endpoint becomes the combination of the base address appended with "secure", like this: (http://localhost:8080/calcservice/calc.svc/secure)." Now in my application I'm trying to create two endpoints for the same service (shown below). The service name is MainService.svc. For endpoint one I have address="" and endpoint two has address="Soap11". Bringing the site up in IIS I can successfully hit this URL: (https://localhost:444/MainService.svc). This is the base address for the service according to all the documentation I can find. According to this article and others I have seen that confirm its information I should have the second endpoint at (https://localhost:444/MainService.svc/Soap11) but if I navigate to that URL I get a .Net exception indicating the resource is not found. Is there a tool I can use to see where my different endpoints will be available? Maybe some IIS or aspnet_isapi.dll logging I can turn on? My web.config section defining my endpoints follows. Thanks in advance for your help <service behaviorConfiguration="MyService.MainServiceBehavior" name="MyService.MainService"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="WSBindingConfig" contract="MyService.IMainService"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="Soap11" binding="basicHttpBinding" bindingConfiguration="BasicBindingWithCredentials" contract="MyService.IMainService"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service>

  • How to split xml to header and items using smooks?

    - by palto
    I have a xml file roughly like this: <batch> <header> <headerStuff /> </header> <contents> <timestamp /> <invoices> <invoice> <invoiceStuff /> </invoice> <!-- Insert 1000 invoice elements here --> </invoices> </contents> </batch> I would like to split that file to 1000 files with the same headerStuff and only one invoice. Smooks documentation is very proud of the possibilities of transformations, but unfortunately I don't want to do those. The only way I've figured how to do this is to repeat the whole structure in freemarker. But that feels like repeating the structure unnecessarily. The header has like 30 different tags so there would be lots of work involved also. What I currently have is this: <?xml version="1.0" encoding="UTF-8"?> <smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd" xmlns:calc="http://www.milyn.org/xsd/smooks/calc-1.1.xsd" xmlns:frag="http://www.milyn.org/xsd/smooks/fragment-routing-1.2.xsd" xmlns:file="http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd"> <params> <param name="stream.filter.type">SAX</param> </params> <frag:serialize fragment="INVOICE" bindTo="invoiceBean" /> <calc:counter countOnElement="INVOICE" beanId="split_calc" start="1" /> <file:outputStream openOnElement="INVOICE" resourceName="invoiceSplitStream"> <file:fileNamePattern>invoice-${split_calc}.xml</file:fileNamePattern> <file:destinationDirectoryPattern>target/invoices</file:destinationDirectoryPattern> <file:highWaterMark mark="10"/> </file:outputStream> <resource-config selector="INVOICE"> <resource>org.milyn.routing.io.OutputStreamRouter</resource> <param name="beanId">invoiceBean</param> <param name="resourceName">invoiceSplitStream</param> <param name="visitAfter">true</param> </resource-config> </smooks-resource-list> That creates files for each invoice tag, but I don't know how to continue from there to get the header also in the file. EDIT: The solution has to use Smooks. We use it in an application as a generic splitter and just create different smooks configuration files for different types of input files.

  • DirectShow video renderers suck?

    - by Daniel
    So I've been looking into the world of media playback for Windows and I've started making a C# media player using DirectShow. I started off using the VMR-7 windowed video renderer and it was brilliant, except for a couple of small problems (multiple monitors, fullscreen). But after some research I found that it's deprecated and I should be using VMR-9, so I changed to VMR-9 windowless, then found out that advice came from an old post as well. So finally I'm using the Vista/Win7 (or XP + .NET 3) Enhanced Video Renderer (EVR), which is apparently the most up-to-date Microsoft video renderer and has all the flashy performance/quality features added to it (to be honest I haven't noticed any difference, but maybe I need a Blu-ray or HQ video to notice it).

    With EVR everything is working fine except resizing the video. It's really laggy/choppy/teary, probably something to do with its frame queueing mechanism. To demonstrate the problem, open Media Player Classic, go to View - Options - Playback - Output, and choose the "EVR" DirectShow video renderer. Now restart MPC and play a video; while it's playing, click and drag a corner to resize it. You'll notice it's horribly laggy. This is the exact same problem I am having. But if you choose "EVR Custom Pres. *" or "EVR Sync *", resizing works beautifully! I tried googling for anything about EVR resizing issues and how to fix them, but I couldn't believe how little I could find. I'm guessing "Custom Pres." stands for "Custom Presenter", which sounds like they made their own. Also, you'll notice on the right-hand side that when you swap between EVR and the other EVR variants, the Resizer drop-down greys out.

    So basically I want to know how I can fix this resizing problem, and whether there is any decent documentation out there. There is a fair bit for VMR-7/9 but not much for EVR. I downloaded the DirectX SDK, which apparently has samples, but it was a waste of 500 MB of bandwidth as it had nothing relevant. Perhaps there is some way to force it not to queue up frames, if that is the problem? If you want code, say the word and I'll paste some in, but it's really quite simple and nothing much happens; I'm convinced it's a problem with the EVR renderer.

    EDIT: Oh, and one other thing: what does VLC use? If you go into VLC options and change the renderer to anything but the default, they all suck. So is it using VMR-7, or its own renderer?

  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly. Base = declarative_base() class Keyword(Base): __tablename__ = 'Keywords' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True) keyword = Column(String(32), primary_key=True) class Image(Base): __tablename__ = 'Images' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(256), nullable=False) keywords = relationship(Keyword, backref='image') This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work. image = session.query(Image).filter(Image.keywords.contains('boy')) I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck. image = session.query(Image).select_from(join(Image, Keyword)).\ filter(Keyword.keyword == 'boy') I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this. image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy') So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref. result = session.query(Keyword).filter(Keyword.keyword == 'boy').all() I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.
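
    For what it's worth, here is a minimal sketch of the keyword search, assuming the Image/Keyword declarative mappings above are used unchanged: joining through the declared relationship lets SQLAlchemy derive the join condition from the ForeignKey, so no explicit ON clause is needed. Note also that the backref declared on the relationship is named "image" (singular), so iterating a Keyword query would use keyword.image rather than .images. The in-memory SQLite URL below is only a placeholder for the real connection string.

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        engine = create_engine("sqlite://")   # placeholder; use the real database URL here
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        # Single keyword: join along the relationship, filter on Keyword.keyword.
        boys = (session.query(Image)
                       .join(Image.keywords)
                       .filter(Keyword.keyword == 'boy')
                       .all())

        # Several keywords, de-duplicated at the Image level.
        matches = (session.query(Image)
                          .join(Image.keywords)
                          .filter(Keyword.keyword.in_(['boy', 'beach']))
                          .distinct()
                          .all())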

  • WTF is wtf? (in WebKit code base)

    - by Motti
    I downloaded Chromium's code base and ran across the WTF namespace. namespace WTF { /* * C++'s idea of a reinterpret_cast lacks sufficient cojones. */ template<typename TO, typename FROM> TO bitwise_cast(FROM in) { COMPILE_ASSERT(sizeof(TO) == sizeof(FROM), WTF_wtf_reinterpret_cast_sizeof_types_is_equal); union { FROM from; TO to; } u; u.from = in; return u.to; } } // namespace WTF Does this mean what I think it means? Could be so, the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD and is not (AFAIK) more powerful than C++ built in reinterpret_cast. The only point of light I see here is the nobody seems to be using bitwise_cast in the Chromium project. I see there's some legalese so I'll put in the little letters to keep out of trouble. /* * Copyright (C) 2008 Apple Inc. All Rights Reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */

  • How Can I Populate Default Form Data with a ManyToMany Field?

    - by b14ck
    Ok, I've been crawling google and Django documentation for over 2 hours now (as well as the IRC channel on freenode), and haven't been able to figure this one out. Basically, I have a model called Room, which is displayed below: class Room(models.Model): """ A `Partyline` room. Rooms on the `Partyline`s are like mini-chatrooms. Each room has a variable amount of `Caller`s, and usually a moderator of some sort. Each `Partyline` has many rooms, and it is common for `Caller`s to join multiple rooms over the duration of their call. """ LIVE = 0 PRIVATE = 1 ONE_ON_ONE = 2 UNCENSORED = 3 BULLETIN_BOARD = 4 CHILL = 5 PHONE_BOOTH = 6 TYPE_CHOICES = ( ('LR', 'Live Room'), ('PR', 'Private Room'), ('UR', 'Uncensored Room'), ) type = models.CharField('Room Type', max_length=2, choices=TYPE_CHOICES) number = models.IntegerField('Room Number') partyline = models.ForeignKey(Partyline) owner = models.ForeignKey(User, blank=True, null=True) bans = models.ManyToManyField(Caller, blank=True, null=True) def __unicode__(self): return "%s - %s %d" % (self.partyline.name, self.type, self.number) I've also got a forms.py which has the following ModelForm to represent my Room model: from django.forms import ModelForm from partyline_portal.rooms.models import Room class RoomForm(ModelForm): class Meta: model = Room I'm creating a view which allows administrators to edit a given Room object. Here's my view (so far): def edit_room(request, id=None): """ Edit various attributes of a specific `Room`. Room owners do not have access to this page. They cannot edit the attributes of the `Room`(s) that they control. """ room = get_object_or_404(Room, id=id) if not room.is_owner(request.user): return HttpResponseForbidden('Forbidden.') if is_user_type(request.user, ['admin']): form_type = RoomForm elif is_user_type(request.user, ['lm']): form_type = LineManagerEditRoomForm elif is_user_type(request.user, ['lo']): form_type = LineOwnerEditRoomForm if request.method == 'POST': form = form_type(request.POST, instance=room) if form.is_valid(): if 'owner' in form.cleaned_data: room.owner = form.cleaned_data['owner'] room.save() else: defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id} if room.owner: defaults['owner'] = room.owner.id if room.bans: defaults['bans'] = room.bans.all() ### this does not work properly! form = form_type(defaults, instance=room) variables = RequestContext(request, {'form': form, 'room': room}) return render_to_response('portal/rooms/edit.html', variables) Now, this view works fine when I view the page. It shows all of the form attributes, and all of the default values are filled in (when users do a GET)... EXCEPT for the default values for the ManyToMany field 'bans'. Basically, if an admins clicks on a Room object to edit, the page they go to will show all of the Rooms default values except for the 'bans'. No matter what I do, I can't find a way to get Django to display the currently 'banned users' for the Room object. Here is the line of code that needs to be changed (from the view): defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id} if room.owner: defaults['owner'] = room.owner.id if room.bans: defaults['bans'] = room.bans.all() ### this does not work properly! There must be some other syntax I have to use to specify the default value for the 'bans' field. I've really been pulling my hair out on this one, and would definitely appreciate some help. Thanks!
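
    One detail worth noting about the GET branch above: passing a data dict to a form constructor produces a bound form, and the ModelMultipleChoiceField that a ModelForm generates for a ManyToManyField expects its bound data as a list of primary keys rather than a queryset, which is the likely reason defaults['bans'] = room.bans.all() shows nothing. A sketch of two alternatives, assuming only the Room model and view code shown above:

        # Option 1: for a plain GET, skip the hand-built defaults entirely.
        # A ModelForm fills its initial values, including many-to-many fields,
        # from the instance it is given.
        form = form_type(instance=room)

        # Option 2: if a data dict really is needed, supply the banned callers
        # as a list of primary keys.
        defaults['bans'] = list(room.bans.values_list('pk', flat=True))
        form = form_type(defaults, instance=room)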

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation:

        To be a component, a class must implement the IComponent interface and provide a basic
        constructor that requires no parameters or a single parameter of type IContainer.

    This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering:

    Service locator: don't use dependency injection; instead, use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?

    Dependency injection via properties: inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected.

    Dependency injection via an Initialize method: this is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    Edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    Edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection; it appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
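
    For illustration, a minimal sketch of the "inject via properties, with a safe default" alternative. ILogger, NullLogger and AuditingComponent are made-up names, not part of any framework; the point is only that the component keeps the designer-required parameterless constructor while still accepting an injected dependency.

        using System.ComponentModel;

        public interface ILogger { void Log(string message); }

        // Harmless default so the component still works on a designer surface with nothing injected.
        public sealed class NullLogger : ILogger
        {
            public void Log(string message) { /* intentionally empty */ }
        }

        public class AuditingComponent : Component   // parameterless ctor keeps the designer happy
        {
            private ILogger _logger;

            // Dependency injected via property; falls back to the null object when unset.
            public ILogger Logger
            {
                get { return _logger ?? (_logger = new NullLogger()); }
                set { _logger = value; }
            }

            public void DoWork()
            {
                Logger.Log("work done");
            }
        }

    A service-locator-flavoured variant of the same component could instead check Site != null and try GetService(typeof(ILogger)) before falling back to the default.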

    Read the article

  • Applying the SoapIgnore attribute has no effect on the serialization result

    - by the_V
    I'm trying to figure out .NET serialization and have run into a problem. I've written a simple program to test it and got stuck on using attributes. Here's the code:

        [Serializable]
        public class SampleClass
        {
            [SoapIgnore]
            public Guid InstanceId { get; set; }
        }

        class Program
        {
            static void Main()
            {
                SampleClass cl = new SampleClass { InstanceId = Guid.NewGuid() };
                SoapFormatter fm = new SoapFormatter();
                using (FileStream stream = new FileStream(string.Format("C:\\Temp\\{0}.inv",
                    Guid.NewGuid().ToString().Replace("-", "")), FileMode.Create))
                {
                    fm.Serialize(stream, cl);
                }
            }
        }

    The problem is that InstanceId is not ignored during serialization. What I get in the .inv file is something like this:

        <SOAP-ENV:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
            xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:clr="http://schemas.microsoft.com/soap/encoding/clr/1.0"
            SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
          <SOAP-ENV:Body>
            <a1:SampleClass id="ref-1" xmlns:a1="http://schemas.microsoft.com/clr/nsassem/TestConsoleApp/TestConsoleApp%2C%20Version%3D1.0.0.0%2C%20Culture%3Dneutral%2C%20PublicKeyToken%3Dnull">
              <_x003C_InstanceId_x003E_k__BackingField>
                <_a>769807168</_a>
                <_b>27055</_b>
                <_c>16408</_c>
                <_d>141</_d>
                <_e>210</_e>
                <_f>171</_f>
                <_g>30</_g>
                <_h>252</_h>
                <_i>196</_i>
                <_j>246</_j>
                <_k>159</_k>
              </_x003C_InstanceId_x003E_k__BackingField>
            </a1:SampleClass>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    As I understand from the documentation, the InstanceId property is not supposed to be serialized, since the SoapIgnore attribute is applied to it. Am I missing something?
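
    One hedged explanation, with a sketch that sidesteps the problem: SoapFormatter is a runtime (field-based) formatter like BinaryFormatter, so it serializes the property's compiler-generated backing field and pays no attention to SoapIgnore, which belongs to XmlSerializer's SOAP-encoded path (SoapReflectionImporter). If skipping the Guid entirely is acceptable, marking an explicit backing field [NonSerialized] should keep it out of the envelope. A sketch, assuming the same setup as above (the output path is made up):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Soap;

        [Serializable]
        public class SampleClass
        {
            [NonSerialized]                  // honored by runtime formatters such as SoapFormatter
            private Guid _instanceId;

            public Guid InstanceId
            {
                get { return _instanceId; }
                set { _instanceId = value; }
            }
        }

        class Program
        {
            static void Main()
            {
                var cl = new SampleClass { InstanceId = Guid.NewGuid() };
                var fm = new SoapFormatter();
                using (var stream = new FileStream(@"C:\Temp\sample.inv", FileMode.Create))
                {
                    fm.Serialize(stream, cl);   // the Guid fields no longer appear in the output
                }
            }
        }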

    Read the article

  • DirectShow EVR resizing window problem

    - by Daniel
    So I've been looking into the world of media playback for Windows and I've started making a C# media player using DirectShow.

    I started off using the VMR-7 windowed video renderer and it was brilliant, except it had a couple of small problems (multi-monitor, fullscreen). After some research I found that it's deprecated and I should be using VMR-9, so I switched to VMR-9 windowless, then discovered that advice was from an old post as well. So finally I'm using the Vista/Win7 (or XP with .NET 3) Enhanced Video Renderer (EVR), which is apparently the most up-to-date Microsoft video renderer and has all the flashy performance/quality features added to it. (To be honest I haven't noticed any difference, but maybe I need a Blu-ray or HQ video to notice it.)

    With EVR everything works fine except resizing the video. It's really laggy/choppy/teary, probably something to do with its frame-queueing mechanism. To reproduce my problem, open Windows Media Player Classic, go to View - Options - Playback - Output, and choose the "EVR" DirectShow video renderer. Restart WMP Classic and play a video; while it's playing, click and drag a corner to resize the window. You'll notice it's horribly laggy. This is the exact same problem I am having. But if you choose "EVR Custom Pres. *" or "EVR Sync *", resizing works beautifully!

    So I tried googling around for anything about EVR resizing issues and how to fix them, but I couldn't believe how little I could find. I'm guessing "Custom Pres." stands for "Custom Presenter", which sounds like they made their own. Also, you'll notice on the right-hand side that when you swap between EVR and the other EVRs, the Resizer drop-down greys out.

    So basically I want to know how I can fix this resizing problem, and whether there is any decent documentation out there. There is a fair bit for VMR-7/9 but not much for EVR. I downloaded the DirectX SDK, which apparently has samples, but it was a waste of 500 MB of bandwidth as it had nothing relevant. Perhaps there is some way to stop it queueing up frames, if that is the problem? If you want code, say the word and I'll paste some in, but it's really quite simple and nothing much happens; I'm convinced it's a problem with the EVR renderer.

    EDIT: Oh, and one other thing: what does VLC use? If you go into the VLC options and change the renderer to anything but the default, they all suck. So is it using VMR-7? Or its own renderer?
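
    Not a definitive fix, but with the stock EVR the usual pattern is to tell the renderer its new destination rectangle yourself on every resize, through IMFVideoDisplayControl (obtained from the EVR filter via IMFGetService with the MR_VIDEO_RENDER_SERVICE service). A rough C# sketch follows; it assumes the interop definitions from the MediaFoundation.NET library, and the exact managed signatures (GetService, MFRect constructor, SetVideoPosition) are an assumption on my part.

        // Rough sketch only -- interop types and signatures assumed from MediaFoundation.NET.
        using System;
        using System.Windows.Forms;
        using MediaFoundation;        // assumed namespaces
        using MediaFoundation.EVR;
        using MediaFoundation.Misc;

        public class EvrHost
        {
            private readonly Control _host;                    // WinForms control hosting the video
            private readonly IMFVideoDisplayControl _display;  // obtained once from the EVR filter

            public EvrHost(Control host, object evrFilter)
            {
                _host = host;

                // Ask the EVR filter for its display-control service.
                object o;
                var gs = (IMFGetService)evrFilter;
                gs.GetService(MFServices.MR_VIDEO_RENDER_SERVICE,
                              typeof(IMFVideoDisplayControl).GUID, out o);
                _display = (IMFVideoDisplayControl)o;

                _display.SetVideoWindow(host.Handle);
                host.Resize += OnResize;
                OnResize(host, EventArgs.Empty);
            }

            private void OnResize(object sender, EventArgs e)
            {
                // Re-declare the destination rectangle on every size change; otherwise the
                // presenter keeps blitting queued frames into the old rectangle, which looks
                // like the laggy/teary resize described above.
                var dest = new MFRect(0, 0, _host.ClientSize.Width, _host.ClientSize.Height);  // assumed ctor: left, top, right, bottom
                _display.SetVideoPosition(null, dest);   // null source = show the whole frame
                _display.RepaintVideo();
            }
        }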

    Read the article
