Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot hold that much data in memory, and my requirements are very tight: the data has to live on an embedded system, so space is very limited. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment; by integers I mean 32-bit values. Basically the idea is to use some compression method to reduce the amount of memory without losing much precision. This needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to read one value from the table, do some operations with it, and then update the value. In the end I need a function that takes an index and returns a value, plus another function that writes a value back into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know any other methods? Thanks.
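    To make the trade-off concrete, here is a minimal sketch (not from the question; the block size and quantization step are arbitrary assumptions) of one classic approach: split the table into blocks, keep one full-precision base value per block, and store each entry as a small quantized offset from its block's base. For 64-entry blocks this stores about 68 bytes per 64 entries instead of 256, roughly a 4x reduction, at the cost of bounded precision loss.

        # Sketch of a lossy block-compressed lookup table (illustration only).
        # One full-precision base per block, one signed byte per entry.
        import array

        BLOCK = 64   # entries per block (assumption)
        STEP  = 16   # quantization step for per-entry offsets (assumption)

        class CompressedLUT:
            def __init__(self, values):
                self.bases = array.array('i')   # one base per block
                self.offs  = array.array('b')   # one signed byte per entry
                for start in range(0, len(values), BLOCK):
                    block = values[start:start + BLOCK]
                    base = min(block)
                    self.bases.append(base)
                    for v in block:
                        # clamping the offset is where precision can be lost
                        self.offs.append(max(-128, min(127, (v - base) // STEP)))

            def read(self, i):
                return self.bases[i // BLOCK] + self.offs[i] * STEP

            def write(self, i, v):
                q = (v - self.bases[i // BLOCK]) // STEP
                self.offs[i] = max(-128, min(127, q))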

    Read the article

  • What is the best way to detect white color?

    - by dnul
    I'm trying to detect white objects in a video. The first step is to filter the image so that only white pixels remain. My first approach was to convert to the HSV color space and then check for a high level in the V channel. Here is the code:

        // convert image to HSV and split the planes
        cvCvtColor( src, hsv, CV_BGR2HSV );
        cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
        for( int x = 0; x < srcSize.width; x++ ) {
            for( int y = 0; y < srcSize.height; y++ ) {
                uchar *hue = &((uchar*)(h_plane->imageData + h_plane->widthStep * y))[x];
                uchar *sat = &((uchar*)(s_plane->imageData + s_plane->widthStep * y))[x];
                uchar *val = &((uchar*)(v_plane->imageData + v_plane->widthStep * y))[x];
                if( *val > 170 )
                    *hue = 255;
                else
                    *hue = 0;
            }
        }

    This leaves the result in the hue channel. Unfortunately, the approach is very sensitive to lighting. I'm sure there is a better way. Any suggestions?
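    For comparison, here is the same thresholding idea written against the modern Python OpenCV bindings (a sketch, not from the question; the saturation bound is an added assumption, since white pixels typically combine low saturation with high value):

        # Sketch: mask "white" pixels as low saturation + high value (assumed thresholds).
        import cv2
        import numpy as np

        frame = cv2.imread("frame.png")                    # hypothetical input frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 0, 170], dtype=np.uint8)      # any hue, any low S, V >= 170
        upper = np.array([179, 40, 255], dtype=np.uint8)   # S <= 40 is an assumed bound
        mask = cv2.inRange(hsv, lower, upper)              # 255 where "white", else 0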

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site, as well as database changes. Requirements:

    * should be stored in the database
    * should be easily searchable by initiator (user name / session id), event (activity type), and event parameters

    I can think of a database design, but it either involves a lot of tables (one per event), so that each parameter of an event gets its own field, OR it involves one table with generic fields (7 numeric and 7 text, say) where everything is logged in a single table and an event-type field determines which parameter was written to which column (hoping I never need more than 7 fields of a given type, or 8 or 9 or whatever number I choose)... Example entries (the usual things):

        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries via automated process @datetime
        etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned in database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-table-per-event solution more than the generic one. Any thoughts? Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thanks. Stack Overflow says the question appears subjective and is likely to be closed; my answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. I also tried narrowing the ideas down to two, so hopefully one of them will prevail, unless there is already an established solution for this kind of thing.
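    For concreteness, a minimal sketch of the generic-table variant described above (the schema, column counts, and event names are assumptions for illustration, not from the question; SQLite is used only for brevity):

        # Sketch of a generic event-log table (assumed schema, illustration only).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE event_log (
                id         INTEGER PRIMARY KEY,
                event_type TEXT NOT NULL,     -- e.g. 'login_failed', 'profile_renamed'
                initiator  TEXT NOT NULL,     -- user name or session id
                created_at TEXT DEFAULT CURRENT_TIMESTAMP,
                int1 INTEGER, int2 INTEGER,   -- generic numeric slots (the question allows 7)
                txt1 TEXT,    txt2 TEXT       -- generic text slots (likewise)
            )""")
        # The meaning of int1/txt1/... depends entirely on event_type:
        con.execute(
            "INSERT INTO event_log (event_type, initiator, txt1, txt2) VALUES (?, ?, ?, ?)",
            ("profile_renamed", "alice", "old name", "new name"),
        )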

    Read the article

  • Colorizing a SAS Map

    - by user601828
    I'm trying to generate a map in SAS with gradual color changes that correspond to my results, so the higher the counts, the more intense the color. I would also like to add state labels to the map. Here is my code; so far it produces a white map with varying degrees of blue blocks. I'd like the states colored in intense colors, like red, bright pink, and brilliant blues and greens. Can anyone please help me modify the code to add state labels and colorize the map, and, below the map, add a table summarizing the statistics, like counts and percentages? Thanks in advance.

        goptions gunit=pct cback=white htitle=4 htext=3
                 colors=(PAGY LIY STY DEGY dark_yellow very_dark_yellow);
        title "My Map Results";
        proc gmap map=maps.us data=My_data all;
            id state;
            block person_per_event / levels=6;
            choro person_per_event / levels=6;
        run;
        quit;

    I looked at this page before, for example if I wanted to make a map like this one: http://robslink.com/SAS/democd61/election_2012.htm with my data. I tried modifying the code given at that link but wasn't very successful. I would like to use that map along with the state labels, keep the colors, and represent my data with blocks in the corresponding city and state locations with high-level counts. The rest of the summary statistics I would like to present in a colorful table next to the map, like a dashboard of sorts. Appreciate any help in advance. Thanks, -rachel

    Read the article

  • R : remove columns from dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve this question myself. The data frame has some properties as columns, and each row represents one data set. I've done some sanitizing of this data frame (e.g. getting rid of data sets which are not to be included in the evaluation). (For whoever might be interested: beforehand I aggregate around 5000 single text files and put them into a TSV. Some of the properties have a sequence number, like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all remaining sets have much smaller numbers for n, but the property "button.pressed.50" is still there, and every remaining set has an NA in that column. It's actually a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns.)
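    As an aside for readers more at home in Python, the operation being asked for ("drop only the all-missing columns") looks like this in pandas; this is an analogy to clarify the intent, not an answer to the R question:

        # Pandas analogue of "remove columns where ALL values are NA" (illustration only).
        import numpy as np
        import pandas as pd

        df = pd.DataFrame({
            "a": [1, 2, 3],
            "b": [np.nan, np.nan, np.nan],  # all-NA column: should be dropped
            "c": [np.nan, 5, 6],            # partially-NA column: should be kept
        })
        cleaned = df.dropna(axis=1, how="all")  # drops 'b', keeps 'a' and 'c'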

    Read the article

  • Using LINQ to SQL

    - by mazhar
    Well, I am new to this ORM stuff. We have to create a large project, and I have read about LINQ to SQL. Will it be appropriate to use in a high-risk project? I found no problem with it personally, but the thing is that there will be no going back once we have started, so I need some feedback from the ORM gurus here at MSDN. Will Entity Framework be better? (I am in doubt about LINQ to SQL because I have read and heard negative feedback here and there.) I will be using MVC2 as the framework, so please give feedback about LINQ to SQL in this regard. q2) Also, I am a fan of stored procedures, as they are precompiled and speed things up, and I have never worked without them. I know that LINQ to SQL supports stored procedures, but would it be feasible to give up stored procedures in exchange for the beautiful data access layer generated with so little effort, given that we also need rapid development? q3) If changes to some fields are required in the database, how will those changes be accommodated in the LINQ to SQL data access layer?

    Read the article

  • Python, Ruby, and C#: Use cases?

    - by thaorius
    Hi everyone. For as long as I can remember, I've always had a "favorite" language, which I use for most projects until, for some particular reason, there is no way/point in using it for project XYZ. At that point I find myself rusty (and sometimes outdated) in the other languages + libraries + toolchains. So I decided I would just use some languages/libs/tools for some things and some for others, effectively keeping them all fresh (there would obviously be exceptions; I'm not looking for an arbitrary rule set, but some guidelines). I wanted an opinion on what your standard use cases (new projects) would be for Python, Ruby, and C# (Mono). At the moment, I have it like this:

    C#:
    - mid- to large-sized projects (mainly server-side daemons)
    - high performance (I hardly ever need C's performance, but Python just doesn't cut it)
    - relatively low footprint (vs. the JVM, for example)

    Ruby:
    - web applications

    Python:
    - general-use scripts (automation, system config, etc.)
    - small- to mid-sized projects
    - prototyping
    - web applications

    About Ruby, I have no idea what to use it for that I can't use Python for (especially considering Python is more easily found installed by default), and I like both languages (though I'm really new to Ruby), which makes things even worse. As for C#, I haven't used a Windows-powered computer in a few years, I don't make things for Windows computers, and I don't mind waiting for Mono to implement some new features. That being said, I haven't found many people on the internet using it for server-side *nix programming (not web related). I would appreciate some insight on this too. Thanks for your time.

    Read the article

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect gets to changeset 7 on the "good" right-side branch first?

        @  12:8ae1fff407c8:bad6
        |
        o  11:27edd4ba0a78:bad5
        |
        o    10:312ba3d6eb29:bad4
        |\
        | o  9:68ae20ea0c02:good33
        | |
        | o  8:916e977fa594:good32
        | |
        | o  7:b9d00094223f:good31
        | |
        o |  6:a7cab1800465:bad3
        | |
        o |  5:a84e45045a29:bad2
        | |
        o |  4:d0a381a67072:bad1
        | |
        o |  3:54349a6276cc:good4
        |/
        o  2:4588e394e325:good3
        |
        o  1:de79725cb39a:good2
        |
        o  0:2641cc78ce7a:good1

    Will it then look only between 7 and 12 (thus using "dumb" numerical order), missing the real first-bad changeset we care about? Or is it smart enough to use the full topology and know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch? The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about which branch I mark. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.
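    For intuition only (this is a toy model, not Mercurial's actual implementation): a DAG-aware bisect can keep as candidates every revision that is an ancestor of the known-bad changeset but not an ancestor of any known-good one, so both sides of a merge stay in play until a test rules them out. Using the revision numbers from the diagram above:

        # Toy model of DAG-aware bisection (illustration only).
        parents = {0: [], 1: [0], 2: [1], 3: [2], 4: [3], 5: [4], 6: [5],
                   7: [2], 8: [7], 9: [8], 10: [6, 9], 11: [10], 12: [11]}

        def ancestors(rev):
            seen, stack = set(), [rev]
            while stack:
                r = stack.pop()
                if r not in seen:
                    seen.add(r)
                    stack.extend(parents[r])
            return seen  # includes rev itself

        # good = 3 and 9, bad = 12, as in the diagram:
        candidates = ancestors(12) - ancestors(3) - ancestors(9)
        print(sorted(candidates))  # [4, 5, 6, 10, 11, 12]: both branches still in play

        def record(result, probe):
            global candidates
            if result == "bad":   # the first-bad must be an ancestor of a bad probe
                candidates &= ancestors(probe)
            else:                 # all ancestors of a good probe are good
                candidates -= ancestors(probe)

        record("bad", 6)           # rev 6 is bad in the diagram
        print(sorted(candidates))  # [4, 5, 6]: the left branch, where the first bad lives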

    Read the article

  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and the existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error:

        PostgresError - ERROR: operator does not exist: text = integer
        LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,...
                                                           ^
        HINT: No operator matches the given name and argument type(s).
        You might need to add explicit type casts.

    It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, it seems to throw up on a call to p.comments from my Haml template, where p represents a given post. The DataMapper models are related as follows:

        class Post
          property :id, Serial
          ...
          has n, :comments
        end

        class Comment
          property :id, Serial
          ...
          belongs_to :post
        end

    This works fine in my local and current hosted environments using MySQL, but Postgres is clearly more strict. There must be hundreds of DataMapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and figured out how to fix) this. Thanks!

    Read the article

  • Delphi: Error when starting MCI

    - by marco92w
    I use the TMediaPlayer component for playing music. It works fine with most of my tracks, but with some tracks it doesn't, and when I try to play those the following error message is shown (in German; roughly translated):

        In the project pMusicPlayer.exe an exception of the class EMCIDeviceError occurred.
        Message: "Error when starting MCI." Process was stopped.
        Continue with "Single Command/Statement" or "Start".

    The program quits directly after calling TMediaPlayer's "Play" procedure. The error occurred, for example, with this file:

        file size: 7.40 MB
        duration:  4:02 minutes
        bitrate:   256 kBit/s

    I re-encoded the same file at a bitrate of 128 kBit/s, giving a file size of 3.70 MB: that one works fine! What's wrong with the first file? Windows Media Player and other programs can play it without any problems. Is it possible that Delphi's TMediaPlayer cannot handle big files (e.g. over 5 MB) or files with a high bitrate (e.g. 256 kBit/s)? What can I do to solve the problem?

    Read the article

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm building a video-transforming filter that has to transform video frames in real time. One of the key requirements for the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement, of lower priority but also nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable and has similar or better performance characteristics than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and Windows Mobile/CE platforms). Also, I've noticed that, for example, using HTC's custom camera API gives far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that multiplies each color by 2 and renders the result in real time from the camera to the screen, then do the same with HTC's API. There is almost a 4-5x performance boost with the vendor-specific API. So it would be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which runs at about ~500 MHz).

    Read the article

  • Reading Unicode files line by line C++

    - by Roger Nelson
    What is the correct way to read Unicode files line by line in C++? I am trying to read a file saved as Unicode (LE) by Windows Notepad. Suppose the file contains simply the characters A and B on separate lines. Reading the file byte by byte, I see the following byte sequence (hex):

        FF FE 41 00 0D 00 0A 00 42 00 0D 00 0A 00

    So: a 2-byte BOM, 2-byte 'A', 2-byte CR, 2-byte LF, 2-byte 'B', 2-byte CR, 2-byte LF. I tried reading the text file using the following code:

        std::wifstream file("test.txt");
        file.seekg(2);  // skip BOM
        std::wstring A_line;
        std::wstring B_line;
        getline(file, A_line);  // I get "A"
        getline(file, B_line);  // I get "\0B"

    I get the same results using operator>> instead of getline:

        file >> A_line;
        file >> B_line;

    It appears that the single-byte CR character is being consumed only as a single byte, or that CR NUL LF is being consumed but not the high NUL byte. I would expect wifstream in text mode to read the 2-byte CR and 2-byte LF. What am I doing wrong? It does not seem right that one should have to read a text file byte by byte in binary mode just to parse the newlines.

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), server roundtrips make the difference. Now, I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but that is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. The bold sentence above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add information? And where does MS SQL sit? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL, and Firebird? (I am not interested in Oracle, since it is not free, and if I have to change I would change to a free RDBMS. In any case MySQL, as mentioned several times on Stack Overflow, has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird.) Additional info: you could answer my question with a simple list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where roundtrips per RDBMS are compared, that would be great.

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort [] = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort, because it has to make two passes over the list when partitioning, and it does costly append operations to splice the results back together afterwards. Further, choosing the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as that of the standard quicksort, namely O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
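    For contrast with the copying version above, here is a sketch of the in-place partitioning that a conventional quicksort performs (Python used for illustration; Lomuto partitioning with the last element as pivot, which differs from the Haskell code's first-element pivot):

        # In-place quicksort sketch (illustration only): partitioning swaps
        # elements within one array, so no intermediate lists are built.
        def quicksort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            pivot = a[hi]                  # Lomuto partition: last element as pivot
            i = lo
            for j in range(lo, hi):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]      # pivot lands in its final position
            quicksort(a, lo, i - 1)
            quicksort(a, i + 1, hi)

        xs = [3, 1, 4, 1, 5, 9, 2, 6]
        quicksort(xs)
        print(xs)  # [1, 1, 2, 3, 4, 5, 6, 9]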

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the web servers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database grows even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is that we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • How should a process running outside Internet Explorer talk to JavaScript running inside it?

    - by Paul Crowley
    This may well be impossible; either way I'm sure people here will know. I'm trying to reduce three "OK" prompts to two. As part of my application, users will download and run an executable I supply; call it privileged.exe. privileged.exe will have a coda that asks for the highest privileges available. That's two OK prompts, one to run privileged.exe and one for UAC. I'd like privileged.exe to then install an ActiveX control, browser plugin or some such. The purpose of this ActiveX control is to allow the JavaScript running inside the browser to talk to the running privileged.exe process, to allow the user to perform certain operations that require high privileges by making choices in the browser. And I'd ideally like this to happen without the user having to restart their browser or explicitly OK the installation of the ActiveX control. Is that possible? Can you install an ActiveX control from outside the browser in such a way that it becomes immediately available to pages running inside the browser? Or should I give up, and allow the user to be explicitly prompted to install the ActiveX control?

    Read the article

  • how to set a fixed color bar for pcolor in python matplotlib?

    - by user248237
    I am using pcolor with a custom colormap to plot a matrix of values. I set up my colormap so that low values are white and high values are red, as shown below. All of my matrices have values between 0 and 20 (inclusive), and I'd like 20 to always be pure red and 0 to always be pure white, even if a given matrix has values that don't span the entire range. For example, if a matrix only has values between 2 and 7, I don't want it to plot 2 as white and 7 as red, but rather to color them as if the range were still 0 to 20. How can I do this? I tried using the "ticks=" option of colorbar, but it did not work. Here is my current code (assume "my_matrix" contains the values to be plotted):

        cdict = {'red':   ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)),
                 'green': ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)),
                 'blue':  ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0))}
        my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap', cdict, 256)
        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])

    Any idea how I can fix this to get the right result? Thanks very much.
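    One approach worth noting (a sketch of a possible fix, not necessarily what the asker ended up using): pcolor accepts vmin and vmax arguments that pin the colormap endpoints regardless of the data's actual range, so data spanning only [2, 7] is still colored on a fixed [0, 20] scale:

        # Sketch: pin the color scale with vmin/vmax so partial-range data
        # is still colored as if the full range were [0, 20].
        import matplotlib
        import matplotlib.pyplot as plt
        import numpy as np

        cdict = {'red':   ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)),
                 'green': ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)),
                 'blue':  ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0))}
        my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap', cdict, 256)

        my_matrix = np.random.uniform(2, 7, size=(10, 10))  # spans only part of [0, 20]
        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap, vmin=0, vmax=20)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])
        plt.show()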

    Read the article

  • TextField Covering UIAlertView's Button.

    - by XcodeDev
    Hi, I am using a UIAlertView with three buttons: "Dismiss", "Submit Score" and "View Leaderboard". The UIAlertView also contains a UITextField called username. At the moment the UITextField "username" is covering one of the buttons in the UIAlertView. I just want to know how I can stop the UITextField from covering one of the buttons, i.e. move the buttons down. Here is my code:

        username = [[UITextField alloc] initWithFrame:CGRectMake(20.0, 45.0, 245.0, 25.0)];
        [username setBackgroundColor:[UIColor whiteColor]];
        username.borderStyle = UITextBorderStyleRoundedRect;
        username.returnKeyType = UIReturnKeyDone;
        username.keyboardAppearance = UIKeyboardAppearanceAlert;
        username.placeholder = @"Enter your name here";
        [username resignFirstResponder];

        UIAlertView *alert = [[UIAlertView alloc]
                initWithTitle:@"Congratulations"
                      message:[NSString stringWithFormat:@"You tapped %i times in %i seconds!\n",
                               tapAmount, originalCountdownTime]
                     delegate:self
            cancelButtonTitle:@"Dismiss"
            otherButtonTitles:@"Submit To High Score Leaderboard", @"View Leaderboard", nil];
        alert.tag = 1;
        [alert addSubview:username];
        [alert show];
        [alert release];

    Read the article

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer, and I need to fit that address into a 32-bit value. Obviously, where possible these structures will use natural-sized void * or char * pointers, but for some parts an API specifies the size of these pointers as 32 bits. On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range; data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32( long l ) {
            int i = l & 0xffffffff;
            assert( i == l );
            return i;
        }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( (long)cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( (long)buff );

    but:

        char buff[ 100 ];
        int a = _to_32( (long)buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It appears to be placed by this section in the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!

    Read the article

  • Disposing ActiveX resources owned by another thread

    - by Stefan Teitge
    I've got a problem with threading and disposing resources. I have a C# Windows Forms application which runs an expensive operation in a thread. This thread instantiates an ActiveX control (AxControl). The control must be disposed, as it uses a large amount of memory, so I implemented a Dispose() method and even a destructor. After the thread ends, the destructor is called, but sadly it is called by the UI thread. So invoking activexControl.Dispose() fails with the message "COM object that has been separated from its underlying RCW", as the object belongs to another thread. How do I do this correctly, or is it just a bad design I'm using? (I stripped the code down to the minimum, including removing any safety concerns.)

        class Program
        {
            [STAThread]
            static void Main()
            {
                // do stuff here, e.g. open a form
                new Thread(new ThreadStart(RunStuff)).Start();
                // do more stuff
            }

            private static void RunStuff()
            {
                DoStuff stuff = new DoStuff();
                stuff.PerformStuff();
            }
        }

        class DoStuff : IDisposable
        {
            private AxControl activexControl;

            public DoStuff()
            {
                activexControl = new AxControl();
                activexControl.CreateControl(); // force instance creation
            }

            ~DoStuff()
            {
                Dispose();
            }

            public void Dispose()
            {
                activexControl.Dispose();
            }

            public void PerformStuff()
            {
                // invent perpetuum mobile here, takes time
            }
        }

    Read the article

  • Have you considered doing revenue sharing to fund development of a mobile app? How would you do it?

    - by Brennan
    I am looking to build multiple mobile apps which leverage existing content and resources by enabling these mobile apps with web services. I will duplicate many of the features that are already in place, and add features that are possible on a mobile device, like address book, maps, and calendar integration, to make the service much more useful. To fund these projects, I see that I have two options. First, I could simply quote the project based on my hourly rate and the estimated hours to complete the job. That may be a high number. The second option would be to do revenue sharing with ads placed in the app; I could then take a percentage of any revenue the app generates. There is also a hybrid where I might charge a percentage of the estimated quote and then take a percentage of the revenue sharing. So my question is: how much should I propose for the revenue sharing? Should it be 30%? Or maybe I should make it 70% until a certain dollar amount is reached? And should the revenue-sharing agreement run for 12 months, 24 months, or more? Should I include in the proposal an agreement that they will help promote the app with their content and resources? Ultimately this system will benefit both sides, because it extends their reach into the mobile space from where they currently are with just print and web. I have tried to find some examples with a few Google searches, but I keep hitting content about the Google and Apple revenue-sharing models. I would like to get some solid examples of arrangements that are working, to compare against, so that my proposal to build these apps is not completely off base.

    Read the article

  • WPF data-bound ComboBox only shows first item of ItemsSource list

    - by Mark
    Hi all, I'm sure I'm doing something stupid, but for the life of me I can't think of what right now. I have a ComboBox that is data-bound to a list of Layout objects. The list is initially empty, but things are added over time. When the list is updated by the model the first time, the update is reflected properly in the ComboBox. However, subsequent updates never show up in the ComboBox, even though I can see that the list itself contains the new items. Since the first update works, I know the data binding is OK, so what am I doing wrong here? Here's the XAML (abridged):

        <Grid HorizontalAlignment="Stretch">
            <ComboBox ItemsSource="{Binding Path=SavedLayouts, diagnostics:PresentationTraceSources.TraceLevel=High}"
                      DisplayMemberPath="Name" SelectedValuePath="Name" SelectedItem="{Binding LoadLayout}"
                      Height="25" Grid.Row="1" Grid.Column="0"></ComboBox>
        </Grid>

    And the related part of the model:

        public IList<Layout> SavedLayouts
        {
            get { return _layouts; }
        }

        public Layout SaveLayout( String data_ )
        {
            Layout theLayout = new Layout( SaveLayoutName );
            _layouts.Add( theLayout );
            try
            {
                return theLayout;
            }
            finally
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if( handler != null )
                {
                    handler( this, new PropertyChangedEventArgs( "SavedLayouts" ) );
                }
            }
        }

    And finally, the Layout class (abridged):

        public class Layout
        {
            public String Name { get; private set; }
        }

    In the output window, I can see the update occurring:

        System.Windows.Data Warning: 91 : BindingExpression (hash=64564967): Got PropertyChanged event from TickerzModel (hash=43624632)
        System.Windows.Data Warning: 97 : BindingExpression (hash=64564967): GetValue at level 0 from TickerzModel (hash=43624632) using RuntimePropertyInfo(SavedLayouts): List`1 (hash=16951421 Count=11)
        System.Windows.Data Warning: 76 : BindingExpression (hash=64564967): TransferValue - got raw value List`1 (hash=16951421 Count=11)
        System.Windows.Data Warning: 85 : BindingExpression (hash=64564967): TransferValue - using final value List`1 (hash=16951421 Count=11)

    But I do not get this 11th item in the ComboBox. Any ideas?

    Read the article

  • problem creating jpg thumb with php

    - by Ross
    Hi, I am having problems creating a thumbnail from an uploaded image: (i) the quality is poor, and (ii) the crop is wrong.

        http://welovethedesign.com.cluster.cwcs.co.uk/phpimages/large.jpg
        http://welovethedesign.com.cluster.cwcs.co.uk/phpimages/thumb.jpg

    If you look, the quality is very poor, and the thumbnail is taken from the top of the image rather than being a resize of the original, although the dimensions are in proportion. The original is 1600px wide by 1100px high. Any help would be appreciated.

        $thumb = $targetPath . "Thumbs/" . $fileName;
        $imgsize = getimagesize($targetFile);
        $image = imagecreatefromjpeg($targetFile);

        $width = 200;   // new width of image
        $height = 138;  // this maintains proportions
        $src_w = $imgsize[0];
        $src_h = $imgsize[1];

        $thumbWidth = 200;   // intended dimensions of thumb
        $thumbHeight = 138;

        // Beyond this point is simply code.
        $sourceImage = imagecreatefromjpeg($targetFile);
        $sourceWidth = imagesx($sourceImage);
        $sourceHeight = imagesy($sourceImage);
        $targetImage = imagecreate($thumbWidth, $thumbHeight);
        imagecopyresized($targetImage, $sourceImage, 0, 0, 0, 0, $thumbWidth, $thumbWidth, imagesx($sourceImage), imagesy($sourceImage));
        //imagejpeg($targetImage, "$thumbPath/$thumbName");
        imagejpeg($targetImage, $thumb);
        chmod($thumb, 0755);

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP MVC order application in the Amazon (AWS) cloud, with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume. My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events:

        MVC (@ Amazon) -> Event/POCO -> SQS -> QueueReader (@ my datacenter) -> DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN the way I need. How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?

    Read the article
