Search Results

Search found 1261 results on 51 pages for 'trivial'.


  • return value (not a reference) from the function, bound to a const reference in the calling function

    - by brainydexter
    "If you return a value (not a reference) from the function, then bind it to a const reference in the calling function, its lifetime would be extended to the scope of the calling function." So: const BoundingBox Player::GetBoundingBox(void) { return BoundingBox( &GetBoundingSphere() ); } Returns a value of type const BoundingBox from function GetBoundingBox() Called function: (From within function Update() the following is called:) variant I: (Bind it to a const reference) const BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox(); variant II: (Bind it to a const copy) const BoundingBox l_Bbox = l_pPlayer->GetBoundingBox(); Both work fine and I don't see the l_Bbox object going out of scope. (Though, I understand in variant one, the copy constructor is not called and thus is slightly better than variant II). Also, for comparison, I made the following changes. BoundingBox Player::GetBoundingBox(void) { return BoundingBox( &GetBoundingSphere() ); } with Variants: I BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox(); and II: BoundingBox l_Bbox = l_pPlayer->GetBoundingBox(); The objet l_Bbox still does not out scope. So, I don't see how "bind it to a const reference in the calling function, its lifetime would be extended to the scope of the calling function", really extends the lifetime of the object to the scope of the calling function ? Am I missing something trivial here..please explain .. Thanks a lot

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario: user entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, authenticating users (and similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially and, when we need to scale out, internally switch to TCP-based RPC calls so that they can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that can encapsulate the approach above, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely the second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
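
    A rough sketch of the seam this implies (interface and class names are illustrative, not from the poster's project): page controllers depend only on an interface, and the transport behind it can move from in-process to named pipes to TCP without touching them.

        public interface IUserService
        {
            bool AuthenticateUser(string userName, string password);
        }

        // Phase 1: runs in-process (or behind a local NamedPipeServerStream).
        public class LocalUserService : IUserService
        {
            public bool AuthenticateUser(string userName, string password)
            {
                // direct call into the domain logic
                return true; // placeholder
            }
        }

        // Phase 2: same contract, but the call is relayed over TCP to another box.
        public class RemoteUserService : IUserService
        {
            public bool AuthenticateUser(string userName, string password)
            {
                // serialize the call over a TcpClient to the auth server
                return true; // placeholder
            }
        }

    Swapping implementations then becomes a composition-root decision rather than a change to every call site.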

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed conn

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies.
    2. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    3. The proxy server stops buffering the request when a size limit has been reached (say, 4KB), or when the request has been received completely, headers and body.
    4. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
    5. The backend sends back the response. Again, the proxy server starts buffering it immediately (up to a more generous size, say 64KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends back the response to the mobile client, as fast or as slow as the client is capable of, without a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there any such proxy that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
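
    For reference, the nginx side of this (items 1-3 are nginx's default request handling; the directives below tune the response-buffering half) might look roughly like the following sketch. The sizes and the backend name are illustrative, and an upstream "backend" block is assumed elsewhere in the config.

        location / {
            proxy_pass http://backend;
            proxy_buffering on;               # buffer the backend response
            client_body_buffer_size 16k;      # request bodies held in RAM up to this, then disk
            proxy_buffers 8 8k;               # roughly 64KB of response buffered per request
            proxy_max_temp_file_size 1m;      # spill larger responses to a temp file
        }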

    Read the article

  • Treetop basic parsing and regular expression usage

    - by ucint
    I'm developing a script using the Ruby Treetop library and having issues working with its syntax for regexes. First off, many regular expressions that work in other settings don't work the same in Treetop. This is my grammar (myline.treetop):

        grammar MyLine
          rule line
            string whitespace condition
          end
          rule string
            [\S]*
          end
          rule whitespace
            [\s]*
          end
          rule condition
            "new" / "old" / "used"
          end
        end

    This is my usage (usage.rb):

        require 'rubygems'
        require 'treetop'
        require 'polyglot'
        require 'myline'

        parser = MyLineParser.new
        p parser.parse("randomstring new")

    This should find the word "new" for sure, and it does! Now I want to extend it so that it can find "new" if the input string becomes "randomstring anotherstring new yetanother andanother", and possibly have any number of strings followed by whitespace (tab included) before and after the condition rule. In other words, if I pass it any sentence with the word "new" etc. in it, it should be able to match it. So let's say I change my grammar to:

        rule line
          string whitespace condition whitespace string
        end

    Then it is able to find a match for:

        p parser.parse("randomstring new anotherstring")

    So, what do I have to do to allow the string-whitespace pair to be repeated before and after condition? If I try to write this:

        rule line
          (string whitespace)* condition (whitespace string)*
        end

    it goes into an infinite loop. If I replace the above () with [], it returns nil. In general, regexes return a match when I use the above, but Treetop's don't. Does anyone have any tips/pointers on how to go about this? Plus, since there isn't much documentation for Treetop and the examples are either too trivial or too complex, does anyone know a more thorough documentation/guide for Treetop?
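
    One plausible culprit, offered as an unverified sketch: both string ([\S]*) and whitespace ([\s]*) can match the empty string, and in a PEG a starred group whose body can succeed while consuming no input will repeat forever. Requiring at least one character in each sub-rule avoids the zero-width repetition:

        rule string
          [\S]+      # one or more, so the group under * always consumes input
        end
        rule whitespace
          [\s]+
        end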

    Read the article

  • Time complexity with bit cost

    - by Keyser
    I think I might have completely misunderstood bit cost analysis. I'm trying to wrap my head around the concept of studying an algorithm's time complexity with respect to bit cost (instead of unit cost), and it seems to be impossible to find anything on the subject. Is this considered so trivial that no one ever needs it explained to them? Well, I do. (Also, there doesn't even seem to be anything on Wikipedia, which is very unusual.) Here's what I have so far: the bit cost of multiplication and division of two numbers with n bits is O(n^2) (in general?). So, for example:

        int number = 2;
        for (int i = 0; i < n; i++) {
            number = i * i;
        }

    has a time complexity with respect to bit cost of O(n^3), because it does n multiplications (right?). But in a regular scenario we want the time complexity with respect to the input. So how does that scenario work? The number of bits in i could be considered a constant, which would make the time complexity the same as with unit cost, except with a bigger constant (and both would be linear). Also, I'm guessing addition and subtraction can be done in constant time, O(1). I couldn't find any info on it, but it seems reasonable, since each is one assembler operation.
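
    For concreteness, here is the usual accounting, sketched under the schoolbook assumption that multiplying two b-bit numbers costs M(b) = O(b^2). In the loop above, the counter i has at most ceil(log2 n) bits, so the total bit cost is

        sum_{i=1}^{n} O((log2 i)^2) = O(n (log n)^2)

    The O(n^3) figure would apply instead if each multiplication involved full n-bit operands. Which of the two is right depends on what n measures, the loop bound as a number or the bit length of the input, and that distinction is exactly the heart of bit-cost (logarithmic-cost) versus unit-cost analysis.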

    Read the article

  • Objective-C function dispatch collisions; Or, how to achieve "namespaces"?

    - by fbrereto
    I have an application for Mac OS X that supports plugins that are intended to be loaded at the same time. Some of these plugins are built on top of a Cocoa framework that may receive updates in one plugin but not another. Given Objective-C's current method of function dispatching, any call from any plugin to a given Objective-C routine will go to the same routine every time. That means plugin A can find itself inside plugin B with a trivial Objective-C call! Obviously what we're looking for is for each plugin to interact with its own version of the framework upon which it was built. I have been reading up on Objective-C and this particular need, but haven't found a definitive solution for it yet. Update: My use of the word "framework" above is misleading: the framework is a statically linked library, built into the plugin(s) that need it. The way Objective-C handles dispatching, however, means that even these statically linked pieces of disparate code will commingle in the Objective-C dispatcher, leading to unintended consequences. Update 2: I'm still a bit fuzzy on the answer provided here, as it doesn't seem to propose a solution as much as an unproven hypothesis.
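
    A hypothetical repro of the mechanism (names invented for illustration): each plugin statically links its own copy of the framework, and each copy defines a class with the same name.

        #import <Foundation/Foundation.h>

        // Plugin A's statically linked copy of the framework:
        @interface FWHelper : NSObject
        + (NSString *)version;
        @end

        @implementation FWHelper
        + (NSString *)version { return @"1.0 (plugin A's copy)"; }
        @end

        // Plugin B links its own FWHelper that returns @"1.1 (plugin B's copy)".
        // The Objective-C runtime registers classes in one process-wide table
        // keyed by the string "FWHelper", so only one implementation wins, and
        // [FWHelper version] in BOTH plugins dispatches to that single winner.
        // Static linking can hide C symbols, but class registration happens by
        // name at load time, which is why the two copies still collide.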

    Read the article

  • C++ stack for multiple data types (RPN vector calculator)

    - by Arrieta
    Hello: I have designed a quick and basic vector arithmetic library in C++. I call the program from the command line when I need a rapid cross product, or the angle between vectors. I don't use Matlab or Octave or the like, because the startup time is larger than the computation time. Again, this is for very basic operations. I am extending this program to work as an RPN calculator, for operations of the type:

        1 2 3 4 5 6 x
        out: -3 6 -3

    (give one vector, another vector, and the "cross" operator; it spits out the cross product). The stack must accept 3D vectors or scalars, for operations like:

        1 2 3 2 *
        out: 2 4 6

    The lexer and parser for this mini-calculator are trivial, but I cannot seem to think of a good way to create the internal stack. How would you create a stack for containing vectors or doubles? (I rolled my own very simple vector class; it's less than one hundred lines and does everything I need.) How can I create a simple stack which accepts elements of class Vector or type double? Thank you.
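
    One sketch of the usual answer, a small tagged union (Vector3 stands in for the poster's vector class; boost::variant, or std::variant in later C++ standards, expresses the same idea with more safety):

        #include <stack>

        struct Vector3 { double x, y, z; };  // stand-in for the poster's class

        // Each stack element is either a scalar or a vector, with a tag saying which.
        struct Value {
            enum Kind { Scalar, Vec } kind;
            double  s;   // valid when kind == Scalar
            Vector3 v;   // valid when kind == Vec

            static Value scalar(double d)      { Value r; r.kind = Scalar; r.s = d; return r; }
            static Value vec(const Vector3& w) { Value r; r.kind = Vec;    r.v = w; return r; }
        };

        int main() {
            std::stack<Value> st;
            st.push(Value::scalar(2.0));
            Vector3 w = {1.0, 2.0, 3.0};
            st.push(Value::vec(w));

            Value top = st.top(); st.pop();
            if (top.kind == Value::Vec) {
                // apply a vector-typed operator; a Scalar tag would take the other branch
            }
            return 0;
        }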

    Read the article

  • Bioperl, equivalent of IO::ScalarArray for array of Seq objects?

    - by Ryan Thompson
    In Perl, we have IO::ScalarArray for treating the elements of an array like the lines of a file. In BioPerl, we have Bio::SeqIO, which can produce a filehandle-like object that reads and writes Bio::Seq objects instead of strings representing lines of text. I would like to do a combination of the two: I would like to obtain a handle that reads successive Bio::Seq objects from an array of such objects. Is there any way to do this? Would it be trivial for me to implement a module that does this? My reason for wanting this is that I would like to be able to write a subroutine that accepts either a Bio::SeqIO handle or an array of Bio::Seq objects, and I'd like to avoid writing separate loops based on what kind of input I get. Perhaps the following would be better than writing my own IO module?

        sub process_sequences {
            my $input = $_[0];    # either an arrayref of Bio::Seq or a Bio::SeqIO
            my $nextseq;
            if (ref $input eq 'ARRAY') {
                my $pos = 0;
                $nextseq = sub { return $input->[$pos++] if $pos < @$input };
            }
            else {
                $nextseq = sub { $input->next_seq() };
            }
            while (my $seq = $nextseq->()) {
                do_cool_stuff_with($seq);
            }
        }

    Read the article

  • Lift XML Parsing Error

    - by bstevens90
    I know there are other questions on this and I have read through almost all of them, and none of them solved my problem. Inside my Home snippet I have:

        def search(in: NodeSeq): NodeSeq = {
          bind("work", in,
            "docId" -> text("", did = _),
            "visitId" -> text("", vid = _),
            "provider" -> text("", prov = _),
            "emCode" -> text(ecode, ecode = _))
        }

    along with the following inside an HTML page:

        <lift:home.searchForm form="POST" multipart="true">
          <table>
            <tr>
              <td>DocId</td>
              <td>VisitId</td>
              <td>Provider</td>
              <td>EanMCode</td>
            </tr>
            <tr>
              <td><work:docId /></td>
              <td><work:visitId /></td>
              <td><work:provider /></td>
              <td><work:emCode /></td>
              <td><button>Click Me!</button></td>
            </tr>
          </table>
        </lift:home.searchForm>

    I have included xmlns:lift="http://liftweb.net/" in default.... I can't find any way to fix this. I am getting:

        XML Parsing Error: prefix not bound to a namespace
        Location: http://localhost:8080/
        Line Number 29, Column 10: <td><work:docId></work:docId></td>

    in Firefox. I have written similar code and had it working in another app, and I just can't find anything I'm doing differently that isn't trivial naming... Thanks in advance!
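
    A hedged guess at the missing piece, sketched below: like the lift: prefix, any custom prefix used in the template (work: here) has to be bound to some namespace on an enclosing element before the XML will parse. For example:

        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:lift="http://liftweb.net/"
              xmlns:work="work">

    The namespace value itself is arbitrary as far as the parse error is concerned; what matters is that the prefix is declared somewhere above the elements that use it.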

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code executes a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer which is then converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use LockBits and UnlockBits here too, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully controlled all the parameters (OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content, even writing known values as a way to test). The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help identify where the problem is? Thanks a lot.
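
    A guess consistent with the symptom, sketched below: LockBits pads each row to a 4-byte boundary, so for a 254-pixel-wide 8bpp image data.Stride is 256, not 254, and a single Marshal.Copy of the tightly packed buffer shifts every row a little further. Copying row by row respects the stride:

        // Hypothetical fix: one Marshal.Copy per row, destination advanced by Stride.
        for (int r = 0; r < OutputImageRows; r++)
        {
            Marshal.Copy(byteBuffer,
                         r * OutputImageCols,                                  // source offset (packed rows)
                         new IntPtr(data.Scan0.ToInt64() + (long)r * data.Stride),
                         OutputImageCols);                                     // one row of pixels
        }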

    Read the article

  • is there an equivalent of a trigger for general stored procedure execution on sql server

    - by Arj
    Hi all, hope you can help. Is there a way to detect when a stored proc is being run on SQL Server without altering the SP itself? Here's the requirement: we need to track users running reports from our enterprise data warehouse, as the core product we use doesn't allow for this. Both core product reports and a slew of in-house ones we've added all return their data from individual stored procs. We don't have a practical way of altering the parts of the product's webpages where reports are called from. We also can't change the stored procs for the core product reports. (It would be trivial to add a logging line to the start/end of each of our in-house ones.) What I'm trying to find, therefore, is whether there's a way in SQL Server (2005 / 2008) to execute a logging stored proc whenever any other stored procedure runs, without altering those stored procedures themselves. We have general control over the SQL Server instance itself as it's local; we just don't want to change the product's stored procs. Anyone have any ideas? Is there a kind of "stored proc executing trigger"? Is there an event model for SQL Server that we can hook custom .NET code into? (Just to discount it from the start: we want to try to make a change to SQL Server rather than get into capturing the report being run from the product's webpages, etc.) Thoughts appreciated. Thanks.
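
    Not a trigger, but one low-friction sketch for the 2008 instance: the procedure-stats DMV already counts executions of cached procedures, with no schema changes at all. It approximates rather than audits, since counts reset when a plan leaves the cache; the database name below is hypothetical.

        SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
               ps.execution_count,
               ps.last_execution_time
        FROM   sys.dm_exec_procedure_stats AS ps          -- SQL Server 2008 and later
        WHERE  ps.database_id = DB_ID('EnterpriseDW')     -- hypothetical database name
        ORDER  BY ps.execution_count DESC;

    On 2005 the nearest equivalents would be a server-side SQL Trace capturing SP:Completed, or mining sys.dm_exec_query_stats.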

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

    - Obtain a connection to the recent schema version (called "source").
    - Obtain a connection to the previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema (the "upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.

    The software must:

    - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via command line).
    - Have a trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from Stack Overflow, Server Fault, and Google searches):

    - Change Manager
    - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

    - TOAD
    - PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements.
    - SQL Fairy - No installer. Complex installation process. Poorly documented.
    - DBDiff - Crippled data set evaluation, poor customer support.
    - OrbitDB - Crippled data set evaluation.
    - SchemaCrawler - No easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - Requires changing the development process. No installer. Config files must be edited manually. Does not recognize its own baseUrl parameter.

    The only acceptable crippling of an evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible when attempting to migrate hundreds of tables and views.
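
    To illustrate the first "must" item, the distinction is between generated DDL that preserves data and DDL that destroys it (table and column names here are hypothetical):

        -- Desired output for a renamed column: the data survives.
        ALTER TABLE orders RENAME COLUMN cust_no TO customer_id;

        -- Undesired output for the same change: the column's data is lost.
        ALTER TABLE orders ADD (customer_id NUMBER);
        ALTER TABLE orders DROP COLUMN cust_no;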

    Read the article

  • Best way to do interprocess communication on Mac OS X

    - by jbrennan
    I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices). The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image, possibly). I've looked at a few methods for doing this, but I'm not sure which is "best" for the task at hand. Things I've considered:

    - Reading and writing to a file (...yes): very basic, but not very scalable.
    - Pure sockets: I have no experience with sockets, but I seem to think I can use them to send data locally and over a network. Though it seems cumbersome if doing everything in Cocoa.
    - Distributed Objects: seems rather inelegant for a task like this.
    - NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results.

    I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
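
    For what it's worth, NSConnection is the machinery behind Distributed Objects, which may be why it keeps surfacing in IPC searches. A minimal sketch of the vending pattern (the registered name, variable names, and the selector are invented for illustration):

        // Daemon side: vend an object so clients can message it by name.
        NSConnection *conn = [NSConnection defaultConnection];
        [conn setRootObject:daemonController];
        if (![conn registerName:@"com.example.mydaemon"]) {
            NSLog(@"registerName failed (name already taken?)");
        }

        // Client side: obtain a proxy and call methods on it as if it were local.
        id proxy = [NSConnection rootProxyForConnectionWithRegisteredName:@"com.example.mydaemon"
                                                                     host:nil];
        [proxy performSelector:@selector(refresh)];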

    Read the article

  • Why doesn't gcc remove this check of a non-volatile variable?

    - by Thomas
    This question is mostly academic. I ask out of curiosity, not because this poses an actual problem for me. Consider the following incorrect C program.

        #include <signal.h>
        #include <stdio.h>

        static int running = 1;

        void handler(int u)
        {
            running = 0;
        }

        int main()
        {
            signal(SIGTERM, handler);
            while (running)
                ;
            printf("Bye!\n");
            return 0;
        }

    This program is incorrect because the handler interrupts the program flow, so running can be modified at any time and should therefore be declared volatile. But let's say the programmer forgot that. gcc 4.3.3, with the -O3 flag, compiles the loop body (after one initial check of the running flag) down to the infinite loop

        .L7:
            jmp .L7

    which was to be expected. Now we put something trivial inside the while loop, like:

        while (running)
            putchar('.');

    And suddenly, gcc does not optimize the loop condition anymore! The loop body's assembly now looks like this (again at -O3):

        .L7:
            movq  stdout(%rip), %rsi
            movl  $46, %edi
            call  _IO_putc
            movl  running(%rip), %eax
            testl %eax, %eax
            jne   .L7

    We see that running is re-loaded from memory each time through the loop; it is not even cached in a register. Apparently gcc now thinks that the value of running could have changed. So why does gcc suddenly decide that it needs to re-check the value of running in this case?

    Read the article

  • Javascript Mouseover bubbling from children

    - by Nicky De Maeyer
    I've got the following HTML setup:

        <div id="div1">
          <div id="content1">blaat</div>
          <div id="content2">blaat2</div>
        </div>

    It is styled so that you cannot hover div1 without hovering one of the other two divs. Now I've got a mouseout on div1. The problem is that my div1 mouseout gets triggered when I move from content1 to content2, because their mouseouts bubble up, and the event's target, currentTarget or relatedTarget properties are never div1, since it is never hovered directly. I've been searching like mad for this, but I can only find articles and solutions for problems that are the reverse of what I need. It seems trivial, but I can't get it to work... The mouseout of div1 should ONLY get triggered when the mouse leaves div1. One of the possibilities would be to set some data on mouseenter and mouseleave, but I'm convinced this should work out of the box, since it is just a mouseout...

    EDIT:

        bar.mouseleave(function(e) {
            if ($(e.currentTarget).attr('id') == bar.attr('id')) {
                bar.css('top', '-' + contentOuterHeight + 'px');
                $('#floatable-bar #floatable-bar-tabs span').removeClass('active');
            }
        });

    I changed the mouseout to mouseleave and the code worked...

    Read the article

  • Calculating confidence intervals for a non-normal distribution

    - by Josiah
    Hi all, first I should specify that my knowledge of statistics is fairly limited, so please forgive me if my question seems trivial or perhaps doesn't even make sense. I have data that doesn't appear to be normally distributed. Typically, when I plot confidence intervals, I would use the mean +- 2 standard deviations, but I don't think that is acceptable for a non-normal distribution. My sample size is currently set to 1000 samples, which would seem like enough to determine whether it is a normal distribution or not. I use Matlab for all my processing, so are there any functions in Matlab that would make it easy to calculate the confidence intervals (say, 95%)? I know there are the 'quantile' and 'prctile' functions, but I'm not sure if that's what I need to use. The function 'mle' also returns confidence intervals for normally distributed data, although you can also supply your own pdf. Could I use ksdensity to create a pdf for my data, then feed that pdf into the mle function to give me confidence intervals? Also, how would I go about determining whether my data is normally distributed? I mean, I can currently tell just by looking at the histogram or the pdf from ksdensity, but is there a way to quantitatively measure it? Thanks!
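
    A short sketch of both pieces, assuming the Statistics Toolbox is available (the variable name x is invented):

        % Empirical 95% interval straight from the sample, no normality assumed:
        ci = prctile(x, [2.5 97.5]);

        % Quantitative normality checks; h == 1 rejects normality at the 5% level:
        [h_lillie, p_lillie] = lillietest(x);
        [h_jb,     p_jb]     = jbtest(x);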

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds' difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the Tinys. An ATTiny13A has 1K of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1K of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8K, so I'm a bit concerned.
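
    As a rough size intuition (a sketch, not tested on real hardware): a fixed-timing, bit-banged transmit routine is only a handful of instructions, and avr-gcc at -Os typically fits this sort of thing in well under 1K without assembly. The pin choice and bit period below are arbitrary, and framing/ACK logic would sit on top.

        #define F_CPU 1200000UL        /* ATtiny13A factory default: 9.6 MHz / 8 */
        #include <avr/io.h>
        #include <util/delay.h>

        #define DATA_PIN PB0           /* assumes DDRB |= _BV(DATA_PIN) at init */

        /* Shift one byte out on a single wire, MSB first, at a fixed bit
         * period agreed with the master. */
        static void send_byte(uint8_t b)
        {
            for (uint8_t i = 0; i < 8; i++) {
                if (b & 0x80)
                    PORTB |= _BV(DATA_PIN);
                else
                    PORTB &= ~_BV(DATA_PIN);
                _delay_us(100);        /* one bit period */
                b <<= 1;
            }
            PORTB &= ~_BV(DATA_PIN);   /* leave the line idle low */
        }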

    Read the article

  • R:how to get grep to return the match, rather than the whole string

    - by Mike Dewar
    Hi, I have what is probably a really dumb grep-in-R question. Apologies, because this seems like it should be so easy; I'm obviously just missing something. I have a vector of strings, let's call it alice. Some of alice is printed out below:

        T.8EFF.SP.OT1.D5.VSVOVA#4
        T.8EFF.SP.OT1.D6.LISOVA#1
        T.8EFF.SP.OT1.D6.LISOVA#2
        T.8EFF.SP.OT1.D6.LISOVA#3
        T.8EFF.SP.OT1.D6.VSVOVA#4
        T.8EFF.SP.OT1.D8.VSVOVA#3
        T.8EFF.SP.OT1.D8.VSVOVA#4
        T.8MEM.SP#1
        T.8MEM.SP#3
        T.8MEM.SP.OT1.D106.VSVOVA#2
        T.8MEM.SP.OT1.D45.LISOVA#1
        T.8MEM.SP.OT1.D45.LISOVA#3

    I'd like grep to give me the number after the D that appears in some of these strings, conditional on the string containing "LIS", and an empty string or something otherwise. I was hoping that grep would return me the value of a capturing group rather than the whole string. Here's my R-flavoured regexp:

        pattern <- "(?<=\\.D)([0-9]+)(?=.LIS)"

    Nothing too complicated. But in order to get what I'm after, rather than just using

        grep(pattern, alice, value = TRUE, perl = TRUE)

    I'm doing the following, which seems bad:

        reg.out <- regexpr("(?<=\\.D)[0-9]+(?=.LIS)", alice, perl = TRUE)
        substr(alice, reg.out, reg.out + attr(reg.out, "match.length") - 1)

    Looking at it now it doesn't seem too ugly, but the amount of messing about it's taken to get this utterly trivial thing working has been embarrassing. Does anyone have any pointers about how to go about this properly? Bonus marks for pointing me to a webpage that explains the difference between whatever I access with $, @ and attr.
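
    For comparison, a base-R sketch that avoids the lookarounds entirely: filter to the matching strings first, then let sub() replace each whole string with its captured group.

        hits <- grep("\\.D[0-9]+\\.LIS", alice, value = TRUE)
        sub(".*\\.D([0-9]+)\\.LIS.*", "\\1", hits)   # "6" "6" "6" "45" "45"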

    Read the article

  • How would the 'Model' in a Rails-type webapp be implemented in a functional programming language?

    - by ceptorial
    In MVC web development frameworks such as Ruby on Rails, Django, and CakePHP, HTTP requests are routed to controllers, which fetch objects that are usually persisted to a backend database store. These objects represent things like users, blog posts, etc., and often contain logic within their methods for permissions, fetching and/or mutating other objects, validation, etc. These frameworks are all very much object oriented. I've been reading up recently on functional programming, and it seems to tout tremendous benefits such as testability, conciseness, modularity, etc. However, most of the examples I've seen of functional programming implement trivial functionality like quicksort or the Fibonacci sequence, not complex webapps. I've looked at a few 'functional' web frameworks, and they all seem to implement the view and controller just fine, but largely skip over the whole 'model' and 'persistence' part. (I'm talking more about frameworks like Compojure, which are supposed to be purely functional, versus something like Lift, which conveniently seems to use the OO part of Scala for the model; correct me if I'm wrong here.) I haven't seen a good explanation of how functional programming can provide the metaphor that OO programming provides, i.e. tables map to objects, and objects can have methods which provide powerful, encapsulated logic such as permissioning and validation. Also, the whole concept of using SQL queries to persist data seems to violate the whole 'side effects' concept. Could someone provide an explanation of how the 'model' layer would be implemented in a functionally programmed web framework?
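
    One common shape of the answer, sketched in Haskell with invented names: the "model" becomes plain data plus pure functions, and the side-effecting persistence is confined to a thin boundary layer rather than woven through the domain logic.

        -- Plain data stands in for the ORM object.
        data Post = Post { postId :: Int, title :: String, published :: Bool }

        -- Domain logic (permissions, validation) as pure, testable functions.
        visibleTo :: Bool -> [Post] -> [Post]
        visibleTo isAdmin posts
          | isAdmin   = posts
          | otherwise = filter published posts

        validTitle :: Post -> Bool
        validTitle p = not (null (title p)) && length (title p) <= 140

        -- The impure edge: IO (e.g. a SQL query) appears only in this layer.
        loadPosts :: IO [Post]
        loadPosts = return []   -- placeholder for a real database call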

    Read the article

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to a

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables: one called Customers, which stores customer info and has an identity column called ID as the primary key, and another called Orders, which has a column called CustomerID (the FK constraint is present). I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This produced a text control for the CustomerID column of the Orders table on the new/edit form; I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties), I set the datasource of the list to the Customers table, setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good. When I run the app in the emulator and view the 'New' form for Orders, the Customer dropdown is indeed populated with a list of customer names, and I can select one and happily create a new order successfully. This is confirmed when I see the order appear in the Orders grid form. However, when I then click on the order in the grid and select 'Edit', the order loads, but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the CustomerID field of the Orders dataset. Now, I am an ASP.NET guy and normally hand-craft the DAL and its binding to the UI, so I'm not entirely sure where to look to investigate what is going wrong here, as this is all generated code. I am sure it is something very trivial, but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.
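
    One thing worth checking, shown as a hedged sketch (the control and BindingSource names are invented): with ComboBox data binding, the order of wiring matters, because a SelectedValue binding attached before the list's DataSource and ValueMember are in place tends to fall back to the first item.

        // Populate the lookup list first...
        customerComboBox.DataSource    = myDataSet.Customers;
        customerComboBox.DisplayMember = "CustomerName";
        customerComboBox.ValueMember   = "ID";

        // ...then attach SelectedValue to the order currently being edited.
        customerComboBox.DataBindings.Add("SelectedValue", ordersBindingSource, "CustomerID");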

    Read the article

  • How to create databinding over two xaml files?

    - by BionicGecko
    Hello, I am trying to come to a working understanding of how databinding works, but even after several tutorials I only have a basic grasp of it. Thus this question might seem fundamental to those more familiar with Silverlight. Even if it is trivial, please point me to some tutorial that deals with this problem; all that I could find simply solved it by adding the data binding on a parent page.xaml (which I must not use in my case). For the sake of this example, let us assume that we have 5 files:

    - starter.cs
    - button1.xaml + code-behind
    - button2.xaml + code-behind

    The two buttons are generated in code in the starter(.cs) file and then added to some MapLayer:

        button1 my_button1 = new button1();
        button2 my_button2 = new button2();
        someLayer.Children.Add(my_button1);
        someLayer.Children.Add(my_button2);

    My aim is to connect the two buttons so that they always display the same "text" (i.e. my_button1.Content == my_button2.Content is always true). Thus, when something changes my_button1.Content, this change should be propagated to the other button (two-way binding). At the moment my button1.xaml looks like this:

        <Grid x:Name="LayoutRoot">
            <Button x:Name="x_button1" Margin="0,0,0,0"
                    Content="{Binding ElementName=x_button2, Path=Content}"
                    ClickMode="Press" Click="button1_Click"/>
        </Grid>

    But all that I get out of that is a button with no content at all; it is just blank, as the binding silently fails. How could I create the databinding in the context I described? Preferably in code and not XAML ;) Thanks in advance
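
    A hedged sketch of one way through: ElementName lookups only see names in the same XAML namescope, and the two buttons live in different files, so binding them to each other by name is unlikely to resolve. Binding both buttons to one shared object created in starter.cs sidesteps that (class and property names are invented):

        // Shared model both buttons bind to.
        public class SharedLabel : System.ComponentModel.INotifyPropertyChanged
        {
            public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;
            private string _text = "";
            public string Text
            {
                get { return _text; }
                set
                {
                    _text = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this,
                            new System.ComponentModel.PropertyChangedEventArgs("Text"));
                }
            }
        }

        // In starter.cs, after constructing the buttons:
        void WireButtons()
        {
            var label = new SharedLabel();
            my_button1.DataContext = label;
            my_button2.DataContext = label;
            // Each button's XAML then binds with: Content="{Binding Path=Text}"
            // Setting label.Text afterwards updates both buttons at once.
        }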

    Read the article

  • Linked Measure Groups and Local Dimensions

    - by ekoner
    Mulling over something I've been reading up on. According to Chris Webb, "a linked measure group can only be used with dimensions from the same database as the source measure group." I took this to mean that as long as two cubes share a database, a linked measure group can be used with a dimension. So I created a new cube and added a local measure group, a local dimension and a linked measure group. However, I can't create a relationship between the linked measure group and the local dimension, even though they are within the same database. I get the message below:

        Regular relationships in the current database between non-linked (local) dimensions and linked measure groups cannot be edited. These relationships can only be created through the wizard. This dialog can be used to delete these relationships.

    I see that I can go to the original cube and add the dimension there, but does the message above mean I have an alternative? I just know it's going to be something simple and trivial! Thanks for reading.

    Read the article

  • Ignore case in Python strings

    - by Paul Oyster
    What is the easiest way to compare strings in Python, ignoring case? Of course one can do (str1.lower() <= str2.lower()), etc., but this creates two additional temporary strings (with the obvious alloc/GC overheads). I guess I'm looking for an equivalent to C's stricmp(). [Some more context was requested, so I'll demonstrate with a trivial example:] Suppose you want to sort a looong list of strings. You simply do theList.sort(). This is O(n * log(n)) string comparisons and no memory management (since all strings and list elements are some sort of smart pointers). You are happy. Now, you want to do the same, but ignore the case (let's simplify and say all strings are ASCII, so locale issues can be ignored). You can do theList.sort(key=lambda s: s.lower()), but then you cause two new allocations per comparison, plus burden the garbage collector with the duplicated (lowered) strings. Each such bit of memory-management noise is orders of magnitude slower than a simple string comparison. Now, with an in-place stricmp()-like function, you do theList.sort(cmp=stricmp), and it is as fast and as memory-friendly as theList.sort(). You are happy again. The problem is that any Python-based case-insensitive comparison involves implicit string duplications, so I was expecting to find a C-based comparison (maybe in module string). I could not find anything like that, hence the question here. (Hope this clarifies the question.)
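
    One detail worth noting about the sort example, sketched below, which softens the concern: key= is evaluated once per element, not once per comparison, so sorting allocates n lowered strings in total rather than two per comparison.

        words = ["Banana", "apple", "Cherry"]
        words.sort(key=str.lower)   # n calls to str.lower, then plain comparisons
        print(words)                # ['apple', 'Banana', 'Cherry']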

    Read the article

  • How to distinguish composition and self-typing use-cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known trivial composition. I'm curious which situations I should use each in. There are obvious differences in their applicability: self-types require you to use traits, while object composition allows you to change extensions at run-time with a var declaration. Leaving technical details behind, I can figure two indicators to help with classifying use cases. If some object is used as a combinator for a complex structure such as a tree, or simply has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. There is an extreme opposite use case: let's assume one trait becomes too big to observe clearly, and it gets split. It is quite natural that you should use self-types in this case. These rules are not absolute. You may do extra work to convert code between the two techniques; e.g. you may replace the 4-wheels composition with self-typing over Product4, or use Cake[T <: MyType] { part : MyType } instead of Cake { this : MyType => } for cake-pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of borderline use cases, though; one-to-one relations are very hard to decide. Is there any simple rule to decide which technique is preferable? Self-types make your classes abstract; composition makes your code verbose. Self-types give you problems with blending namespaces, and also give you extra typing for free (you get not just a cocktail of two elements but a gasoline-and-motor-oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and the composition approaches respectively?
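
    For readers skimming, a minimal side-by-side of the two instruments being compared (Engine and Car are illustrative, not from the question):

        trait Engine { def start(): Unit }

        // Self-type: Car can only ever be instantiated mixed with an Engine,
        // and the choice is fixed at creation time.
        trait Car { this: Engine =>
          def drive(): Unit = start()
        }

        // Composition: the engine is an ordinary value and can be swapped later.
        class CarWithEngine(var engine: Engine) {
          def drive(): Unit = engine.start()
        }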

    Read the article

  • Brackets matching using BIT

    - by amit.codename13
    Edit: I was trying to solve a SPOJ problem. Here is the link to the problem: http://spoj.pl/problems/BRCKTS I can think of two possible data structures for solving the problem: one using a segment tree and the other using a BIT. I have already implemented the solution using a segment tree. I have read about BITs, but I can't figure out how to do one particular thing with them (mentioned below). I am trying to check whether brackets are balanced in a given string containing only ('s and )'s, and I am using a BIT (binary indexed tree) to do it. The procedure I am following is as follows: I take an array of size equal to the number of characters in the string, assigning -1 for ) and 1 for ( to the corresponding array elements. Brackets are balanced in the string only if the following two conditions are true:

    1. The cumulative sum of the whole array is zero.
    2. The minimum cumulative sum is non-negative, i.e. the minimum of the cumulative sums over all the prefixes of the array is non-negative.

    Checking condition 1 using a BIT is trivial. I am facing problems in checking condition 2.
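
    For reference, a sketch of why the segment-tree version handles condition 2 so naturally: each node stores the pair (sum, minimum prefix sum) for its interval, and the parent's values come from a merge that a plain prefix-sum BIT cannot express.

        #include <algorithm>

        struct Node {
            int sum;      // total of the +1/-1 values in this interval
            int minpre;   // minimum prefix sum within this interval
        };

        Node merge(const Node& L, const Node& R) {
            Node parent;
            parent.sum    = L.sum + R.sum;
            // a prefix of the parent either ends inside L, or covers all of L
            // and extends into R (hence the L.sum offset):
            parent.minpre = std::min(L.minpre, L.sum + R.minpre);
            return parent;
        }

        // The whole string is balanced iff root.sum == 0 && root.minpre >= 0.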

    Read the article
