Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • Apache on Linux: spawning processes or threads?

    - by Jerome WAGNER
    Hello, I would like to understand better what exactly is going on when Apache on Linux receives an HTTP request in the pre-fork process model. Let's say we have 20 Apache child processes waiting. When an HTTP request comes in, is it true to say that one child process will be chosen to handle the request, and that this process won't handle another request from another user until the first one is finished? I am asking because of a PHP limitation that states: "The locale information is maintained per process, not per thread. If you are running PHP on a multithreaded server API like IIS or Apache on Windows, you may experience sudden changes in locale settings while a script is running, though the script itself never called setlocale(). This happens due to other scripts running in different threads of the same process at the same time, changing the process-wide locale using setlocale()." Thanks, Jerome Wagner

    Read the article

  • How do I rearrange vector data using ARM Neon intrinsics?

    - by goldenmean
    Hello, this is specifically related to ARM Neon SIMD coding. I am using ARM Neon intrinsics for a certain module in a video decoder. I have vectorized data as follows: there are four 32-bit elements in a Neon register, say Q0, which is 128 bits in size: 3B 3A 1B 1A. There are another four 32-bit elements in another Neon register, say Q1, which is 128 bits in size: 3D 3C 1D 1C. I want the final data to be in the order shown below: 1D 1C 1B 1A and 3D 3C 3B 3A. Which Neon intrinsics can achieve the desired data order? thanks, -AD
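
    One possible arrangement (a minimal sketch, not from the original post): since the low 64 bits of each register already hold the "1" group and the high 64 bits hold the "3" group, combining the low halves and the high halves produces the two target vectors.

      #include <arm_neon.h>

      /* q0 = {1A, 1B, 3A, 3B}, q1 = {1C, 1D, 3C, 3D} (element 0 first) */
      void split_groups(uint32x4_t q0, uint32x4_t q1,
                        uint32x4_t *ones, uint32x4_t *threes)
      {
          /* {1A, 1B} ++ {1C, 1D}  ->  reads as 1D 1C 1B 1A high-to-low */
          *ones   = vcombine_u32(vget_low_u32(q0),  vget_low_u32(q1));
          /* {3A, 3B} ++ {3C, 3D}  ->  reads as 3D 3C 3B 3A high-to-low */
          *threes = vcombine_u32(vget_high_u32(q0), vget_high_u32(q1));
      }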

    Read the article

  • Change in Job Title and Responsibilities

    - by John Conwell
    I've spent the past 7 years focused primarily on code and database performance.  It's an area that I have a passion for, as well as a propensity.  But what I've found is that it's very hard to change the culture of a development environment.  You can teach performance, you can encourage performance, you might see a slight shift in how devs think about performance.  But without full management backing and support you won't get long-lasting changes in the development culture.  And in the end, you are back to being the "Perf Guy", fixing performance design flaws, after the fact, one by one by one. Which is why last year I asked my boss to change my title and responsibilities to more naturally align with the team I was working for.  So now I'm a Computing Research Engineer (vague, I know), researching in the field of Big Data analytics and visualization. I've found this change revitalizing and a lot of fun.  And given the nature of Big Data (it's, um…big), the performance aspects are always ever present.

    Read the article

  • Timer or Thread?

    - by Pradeep Singh
    I have a method (say method1) that writes to a database (SQL Server) and another method (say method2) that tries to access the same database after some time and updates the data row that was created by method1. The problem arises when method1 fails to access the db due to the LAN being disconnected (this is not an exception; it is a scenario that will definitely arise in my software, and getting into the details would make the question too complex). If method1 fails to access the db, method2 cannot work. What I want to do is make method1 store values in a local db instead of the server if the LAN is disconnected, and as soon as it enters a value in the local db the application should start trying to reach the server every 10-15 seconds. What should I use: a timer, or a new thread?
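
    A minimal sketch of the timer approach (the retry delegate is hypothetical): a System.Threading.Timer fires every 10 seconds and disposes itself once the pending row has finally been pushed to the server.

      using System;
      using System.Threading;

      class PendingWriteRetrier
      {
          private readonly Func<bool> tryPush;   // returns true once the server write succeeds
          private Timer timer;

          public PendingWriteRetrier(Func<bool> tryPush) { this.tryPush = tryPush; }

          public void Start()
          {
              // first attempt after 10 s, then every 10 s until the push succeeds
              timer = new Timer(_ =>
              {
                  if (tryPush())
                      timer.Dispose();
              }, null, TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(10));
          }
      }

      // usage (hypothetical helper): new PendingWriteRetrier(() => TryWriteRowToServer(row)).Start();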

    Read the article

  • create a random sequence, skip to any part of the sequence

    - by Michael Xu
    Hi everyone, in Linux there is an srand() function where you supply a seed, and it guarantees the same sequence of pseudorandom numbers in subsequent calls to the random() function afterwards. Let's say I want to store this pseudorandom sequence by remembering this seed value. Furthermore, let's say I want the 100 thousandth number in this pseudorandom sequence later. One way would be to supply the seed number using srand(), call random() 100 thousand times, and remember the last number. Is there a better way of skipping all 99,999 other numbers in the pseudorandom list and directly getting the 100 thousandth number in the list? thanks, m
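
    A minimal sketch of one alternative (an assumption, not specific to glibc: random() itself exposes no jump-ahead API, so this presumes you control the generator): a plain linear congruential generator can be advanced k steps in O(log k) by composing the affine step function with itself, square-and-multiply style.

      #include <stdint.h>
      #include <stdio.h>

      /* one LCG step: x -> a*x + c (mod 2^64); Knuth's MMIX constants */
      #define LCG_A 6364136223846793005ULL
      #define LCG_C 1442695040888963407ULL

      /* advance the state k steps without generating the intermediate values */
      static uint64_t lcg_skip(uint64_t state, uint64_t k)
      {
          uint64_t cur_mult = LCG_A, cur_plus = LCG_C;   /* current power-of-two step  */
          uint64_t acc_mult = 1,     acc_plus = 0;       /* accumulated composite step */

          while (k > 0) {
              if (k & 1) {
                  acc_plus = acc_plus * cur_mult + cur_plus;
                  acc_mult *= cur_mult;
              }
              cur_plus  = (cur_mult + 1) * cur_plus;     /* compose the step with itself */
              cur_mult *= cur_mult;
              k >>= 1;
          }
          return acc_mult * state + acc_plus;            /* mod 2^64 via wraparound */
      }

      int main(void)
      {
          uint64_t seed = 12345;
          /* state after 100000 steps, computed in ~17 loop iterations instead of 100000 */
          printf("%llu\n", (unsigned long long)lcg_skip(seed, 100000));
          return 0;
      }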

    Read the article

  • Automatic type conversion in Java?

    - by davr
    Is there a way to do automatic implicit type conversion in Java? For example, say I have two types, 'FooSet' and 'BarSet', which are both representations of a Set. It is easy to convert between the types, so I have written two utility methods:

      /** Given a BarSet, returns a FooSet */
      public FooSet barTOfoo(BarSet input) { /* ... */ }

      /** Given a FooSet, returns a BarSet */
      public BarSet fooTObar(FooSet input) { /* ... */ }

    Now say there's a method like this that I want to call:

      public void doSomething(FooSet data) { /* .. */ }

    But all I have is a BarSet myBarSet... it means extra typing, like:

      doSomething(barTOfoo(myBarSet));

    Is there a way to tell the compiler that certain types can automatically be cast to other types? I know this is possible in C++ with overloading, but I can't find a way in Java. I want to just be able to type:

      doSomething(myBarSet);

    and have the compiler know to automatically call barTOfoo().
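
    Java has no user-defined implicit conversions, but a minimal sketch of the usual workaround (reusing the asker's own types and helper, assumed to exist as described): add an overload that hides the conversion at every call site.

      // assumes FooSet, BarSet and barTOfoo() exist as shown above
      public void doSomething(BarSet data) {
          doSomething(barTOfoo(data));   // delegate to the FooSet overload
      }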

    Read the article

  • general database modeling and django specific modeling

    - by Shreko
    I'm wondering what the best way is to model something like the following. Let's say my company sells metal bars (parameters/fields are: length, profile_type, quantity, etc.) of different profiles, where a profile may be a pipe (pipe_diameter, wall_thickness) or a hollow_rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2-5 parameters. Should everything be in a single table, like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be (pipe, rectangle, etc.)? Or should every shape have its own table with its own parameters, and table_bars keep only id, length, quantity, profile_type and profile_id? And are there any Django-specific issues if multiple tables are the best answer? Thanks
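
    A minimal sketch of the multi-table option using Django's multi-table inheritance (field names are hypothetical): each profile shape gets its own table, and each bar row points at the common base.

      from django.db import models

      class Profile(models.Model):                 # one row per concrete profile, any shape
          name = models.CharField(max_length=50)

      class PipeProfile(Profile):                  # extra table joined to Profile by Django
          pipe_diameter  = models.DecimalField(max_digits=8, decimal_places=2)
          wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

      class HollowRectangleProfile(Profile):
          base           = models.DecimalField(max_digits=8, decimal_places=2)
          height         = models.DecimalField(max_digits=8, decimal_places=2)
          wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

      class Bar(models.Model):
          length   = models.DecimalField(max_digits=8, decimal_places=2)
          quantity = models.IntegerField()
          profile  = models.ForeignKey(Profile, on_delete=models.CASCADE)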

    Read the article

  • UiElement based on another class

    - by plotnick
    I placed a control into a grid. Let's say the control is derived from a public class 'ButBase', which in turn is derived from System.Windows.Controls.Button. The code compiles normally and the app works just fine. But there's something really annoying. When you try to switch to the XAML design tab it says 'The document root element is not supported by the visual designer', which is normal and I'm totally okay with that, but the thing is that all the XAML code is underlined and VS2010 says 'Cannot create an instance of ButBase', although it still compiles normally and is able to run. I've tried the same code in VS2008; it said that it needs to see a public parameterless constructor in ButBase, and even after I put one in it showed the same error. What am I missing here?

    Read the article

  • FLEX: Make LineChart DATATIP constrain to vertical axis.

    - by user60310
    When making a line chart, let's say it's for business sales for different depts, where the horizontal axis is days and the vertical axis is dollars. When you hover over a line, a dataTip tells you the sales for that dept on that day. I want it to show all the depts at the same time, so say you hover over day 3, I want the dataTips for all depts on day 3 to display so you can compare the values for all the sales on the same day. I set the mouseSensitivity for the dataTips to display all the lines at once, but I end up getting day 2 for one dept and day 3 for another, which is not what I want. This is actually posted as a bug and explained better here: http://bugs.adobe.com/jira/browse/FLEXDMV-1853 I am wondering if anyone can come up with a work-around for this? Thanks!

    Read the article

  • Cassandra/HBase or just MySQL: Potential problems doing the next thing

    - by alexeypro
    Say I have "user". It's the key. And I need to keep "user count". I am planning to have record with key "user" and value "0" to "9999+ ;-)" (as many as I'll have). What problems I will drive in if I use Cassandra, HBase or MySQL for that? Say, I have thousand of new updates to this "user" key, where I need to increment the value. Am I in trouble? Locked for writes? Any other way of doing that? Why this is done -- there will be a lot of "user"-like keys. Different other cases. But the idea is the same. Why keep it this way -- because I'll have more reads, so I can always get "counted value" very fast.

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it only ever mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

      1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

      2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

      3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum of real-time bandwidth plus link-share bandwidth. What is the truth?

      4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

      5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?

      6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

      7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

      8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

      9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
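
    Since the original image is not reproduced here, a minimal sketch (an assumption, not from the post) of how the described hierarchy (a 512 kbit/s root, classes A and B at 256 kbit/s, leaves A1/A2/B1/B2 at 128 kbit/s, with real-time curves on A1 and B1) might be written with the Linux tc HFSC qdisc:

      # hypothetical interface and class ids; ls = link-share, rt = real-time,
      # ul = upper limit; m1/d/m2 = initial slope / duration / steady-state slope
      tc qdisc add dev eth0 root handle 1: hfsc default 12

      tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit   # root
      tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit                 # A
      tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit                 # B

      tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # A1
      tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit                                  # A2
      tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # B1
      tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit                                  # B2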

    Read the article

  • What useful GDB scripts have you used/written?

    - by nik
    People use gdb on and off for debugging; of course there are lots of other debugging tools across the varied OSes, with and without GUIs, and maybe other fancy IDE features. I would like to know what useful gdb scripts you have written and liked. While I do not mean a dump of commands in a something.gdb file that you source to pull out a bunch of data, if that made your day, go ahead and talk about it. Let's think conditional processing, control loops and functions written for more elegant and refined programming to debug, and maybe even for whitebox testing. Things get interesting when you start debugging remote systems (say, over a serial/ethernet interface), and what if the target is a multi-processor (and multithreaded) system? Let me put a simple case as an example... say, a script that traversed serially over entries to locate a bad entry in a large hash table implemented on an embedded platform. That helped me debug a broken hash table once.
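
    In that spirit, a minimal sketch of such a script (the fields nbuckets, buckets, next and key are hypothetical): a user-defined GDB command that walks a chained hash table through a pointer argument and reports entries whose key is NULL.

      # usage: find_bad_entry <pointer-to-hash-table>
      define find_bad_entry
        set $i = 0
        while $i < $arg0->nbuckets
          set $e = $arg0->buckets[$i]
          while $e != 0
            if $e->key == 0
              printf "bad entry in bucket %d: %p\n", $i, $e
            end
            set $e = $e->next
          end
          set $i = $i + 1
        end
      end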

    Read the article

  • Programmatically modifying an embedded resource before registering/referencing it on the page

    - by Jimock
    Hi all, firstly, "modifying" may be the wrong term; I see a few people have posted online just asking whether they can actually modify an embedded resource. What I want to do is use a resource in my assembly as a kind of template which I would do a find-and-replace on before registering it on the page; is this possible? For example, say I have a few lines of jQuery as an embedded resource in my assembly, and in this script I am referencing a CSS class name that can be set by the front-end programmer. Since I do not know what the CSS class will be until implementation, is there a way of going through the embedded resource and replacing, say, $myclass$ with ThisClassName? Any help would be appreciated; if it's not possible then at least tell me so I can stop chasing my tail. Cheers, James
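
    A minimal sketch of one way this could work (the resource name, placeholder and property are hypothetical): read the embedded resource as a stream, substitute the placeholder, and register the result as an inline script block instead of a WebResource URL.

      using System;
      using System.IO;
      using System.Reflection;
      using System.Web.UI;

      public class TemplatedScriptControl : Control
      {
          // set by the front-end programmer, e.g. CssClassName="ThisClassName"
          public string CssClassName { get; set; }

          protected override void OnPreRender(EventArgs e)
          {
              base.OnPreRender(e);

              const string resName = "MyAssembly.Scripts.template.js";   // hypothetical manifest name

              string script;
              using (Stream s = Assembly.GetExecutingAssembly().GetManifestResourceStream(resName))
              using (StreamReader r = new StreamReader(s))
                  script = r.ReadToEnd().Replace("$myclass$", CssClassName);

              Page.ClientScript.RegisterClientScriptBlock(GetType(), "templateScript", script, true);
          }
      }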

    Read the article

  • Lucene and .NET Part I

    - by javarg
    I've been playing around with Lucene.NET, trying to get a feel for what is required to develop and implement a full business application with it. As you would imagine, many things are required to implement a robust solution for indexing content and searching it afterwards. Lucene is a great and robust solution for indexing content. It offers a fast, performance-enhanced search engine library available in Java and .NET. You will want to use this library in several particular scenarios:

      - In Windows Azure, to support full-text search (a functionality not currently supported by SQL Azure)
      - When storing files outside of, or not managed by, your database (as in large document storage solutions that use the file system)
      - When full-text search is not really what you need

    Lucene is more than a full-text search solution. It has several analyzers that let you process and search content in different ways (decomposing sentences, deriving words, removing articles, etc.). When deciding to implement indexing using Lucene, you will need to take into account the following:

      - How content is to be indexed by Lucene, and when: using a service that runs at a specific interval, or immediately when content changes
      - When content is to be available for searching / availability of indexed content (as in real-time content search): immediately when content changes (near-real-time searching), or after a few minutes
      - Ease of maintainability and development

    Some technical concerns: when indexing content, indexes are locked for writing operations by the IndexWriter. This means that Lucene is best suited to indexing content with a single-writer approach. When searching, IndexReaders take a snapshot of the indexes. This has the following implications: setting up an index reader is a costly task, and you are not supposed to create one for each query or search. A good practice is to create readers and reuse them for several searches. The latter means that even when the content gets updated, you won't be able to see the changes; you will need to recycle the reader. In the second part of this post we will review some alternatives and design considerations.
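
    A minimal sketch of that "recycle the reader" point, assuming the Lucene.NET 2.9/3.0-era API (the index path is hypothetical): keep one shared searcher, and periodically ask the reader whether the index has changed underneath it.

      using Lucene.Net.Index;
      using Lucene.Net.Search;
      using Lucene.Net.Store;

      class SearcherHolder
      {
          private IndexReader reader;
          private IndexSearcher searcher;

          public SearcherHolder(string indexPath)
          {
              Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo(indexPath));
              reader = IndexReader.Open(dir, true);        // read-only reader, shared by all queries
              searcher = new IndexSearcher(reader);
          }

          public IndexSearcher Current
          {
              get
              {
                  IndexReader newReader = reader.Reopen(); // cheap no-op if the index is unchanged
                  if (newReader != reader)
                  {
                      searcher.Close();
                      reader.Close();
                      reader = newReader;
                      searcher = new IndexSearcher(reader);
                  }
                  return searcher;
              }
          }
      }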

    Read the article

  • Can the mediatomb VLC profile transcode audio as MP3 rather than mpga?

    - by djangofan
    In /etc/mediatomb/config.xml, can the mediatomb VLC profile transcode audio as MP3 rather than mpga? My Sony GoogleTV won't render streamed .avi files with mpga audio in them. The original files are DivX encoded with 128 kbps MP3 audio, but mediatomb is transcoding them. How can I change this? Any ideas? Can I turn off the audio and video transcoding somehow? I need some ideas to try.

      <profile name="vlcprof" enabled="yes" type="external">
        <mimetype>video/mpeg</mimetype>
        <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,aenc=ffmpeg,acodec=mpga,ab=192,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>
        <buffer size="10485760" chunk-size="131072" fill-size="2621440"/>
        <accept-url>yes</accept-url>
        <first-resource>yes</first-resource>
      </profile>

    I know that MP3 encoding support is external to FFmpeg and must be configured appropriately, but I have no idea how to handle that. I would guess I can work around that by somehow telling ffmpeg not to transcode the audio stream? Also, should I create a separate vlcprof entry for video/avi? Can you create more than one profile for VLC in the config.xml for mediatomb?
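
    One thing to try (a sketch of an assumption, not a tested config): VLC's transcode module also accepts acodec=mp3, so the agent line could request MP3 audio directly and let VLC pick an available MP3 encoder, e.g.:

      <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,acodec=mp3,ab=128,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>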

    Read the article

  • Generic HTTP Handler in ASP.Net

    - by Bruno Brant
    Hello all, I want to write a custom HTTP handler in ASP.NET (I'm using C# currently) that filters all requests to, say, .aspx files and then, depending on the page name that comes with the request, redirects the user to a page. So far, I've written a handler that filters "*", that is, everything. Let's say I receive a request for "Page.aspx" and want to send the user to "AnotherPage.aspx". So I call Redirect on that response and pass "AnotherPage.aspx" as the new page. The problem is that this will once more trigger my handler, which will do nothing. This will leave the user without any response. So, is there a way to send the request on to the other handlers (cascade the message) once I've dealt with it? Thanks, Bruno
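
    A minimal sketch of one common way around this (the redirect rule is hypothetical): inspect requests in an IHttpModule instead of a catch-all IHttpHandler, redirect only the pages of interest, and let the normal .aspx handler serve everything else.

      using System;
      using System.Web;

      // registered in web.config under <httpModules> / <modules>
      public class PageRedirectModule : IHttpModule
      {
          public void Init(HttpApplication app)
          {
              app.BeginRequest += (sender, e) =>
              {
                  HttpContext ctx = ((HttpApplication)sender).Context;
                  // hypothetical rule: requests for Page.aspx go to AnotherPage.aspx
                  if (ctx.Request.Path.EndsWith("/Page.aspx", StringComparison.OrdinalIgnoreCase))
                      ctx.Response.Redirect("AnotherPage.aspx");
              };
          }

          public void Dispose() { }
      }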

    Read the article

  • How to transpose rows and columns in Access 2003?

    - by Lisa Schwaiger
    How do I transpose rows and columns in Access 2003? I have multiple tables that I need to do this on. (I've reworded my question because feedback tells me it was confusing how I originally stated it.) Each table has 30 fields and 20 records. Let's say my fields are Name, Weight, Zip Code, Quality4, Quality5, Quality6 through Quality30, which is favorite movie. Let's say the records each describe a person. The people are Alice, Betty, Chuck, Dave, Edward, etc. through Tommy. I can easily make a report like this:

      Alice   120   35055   ...   Jaws
      Betty   125   35212   ...   StarWars
      ...
      Tommy   200   35213   ...   Adaptation

    But what I would like is a report where those rows and columns are transposed, so it displays like this:

      Alice   Betty      ...   Tommy
      120     125        ...   200
      35055   35212      ...   35213
      ...
      Jaws    StarWars   ...   Adaptation

    Thanks for any help.
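
    One way this is often done in Access (a sketch with hypothetical table and field names): first unpivot the wide table with a saved union query, then let a crosstab query pivot the person names across the columns.

      -- Step 1: union query saved as QualityRows (one row per person per field)
      SELECT [Name] AS PersonName, 'Weight' AS QualityName, CStr([Weight]) AS QualityValue FROM People
      UNION ALL SELECT [Name], 'Zip Code', CStr([Zip Code]) FROM People
      UNION ALL SELECT [Name], 'Quality30', [Quality30] FROM People;

      -- Step 2: crosstab query over the unpivoted rows
      TRANSFORM First(q.QualityValue)
      SELECT q.QualityName
      FROM QualityRows AS q
      GROUP BY q.QualityName
      PIVOT q.PersonName;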

    Read the article

  • HttpWebRequest.GetResponse() - what specific status codes cause an exception to be thrown?

    - by H. Morrow
    I've hunted around for some definitive documentation on this but haven't had much luck finding any. Basically, the question is: for which HTTP status codes coming back from the server will HttpWebRequest.GetResponse() generate a WebException after doing something like, say, a POST? Specifically, will it generate a WebException for anything other than 200 OK? Or will it only generate a WebException for, say, 400, 404, and 500 (for the sake of argument)? I want to know since the server I'm communicating with defines anything other than HTTP 200 OK coming back as an error condition, and the key is: can I rely on a WebException being generated for anything other than 200? (I've currently written my code so that it actually checks the return status code every time to ensure it's 200 OK and, if it's not, takes appropriate action; but there's a lot of duplication between that and the catch block for a WebException, and I'm hoping to clean it up...) Any relevant links to documentation would be most appreciated. Thanks!
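
    A minimal sketch of the usual pattern (the URL is hypothetical), which sidesteps the duplication: when GetResponse() throws for an error status code, the WebException has Status == ProtocolError and the failed response object attached, so both paths can funnel into a single status check.

      using System;
      using System.Net;

      class StatusCheckExample
      {
          static void Main()
          {
              var request = (HttpWebRequest)WebRequest.Create("http://example.com/api");  // hypothetical URL
              request.Method = "POST";
              request.ContentLength = 0;

              HttpWebResponse response;
              try
              {
                  response = (HttpWebResponse)request.GetResponse();
              }
              catch (WebException ex)
              {
                  // ProtocolError means the server did answer, just not with a success code
                  if (ex.Status != WebExceptionStatus.ProtocolError || ex.Response == null)
                      throw;
                  response = (HttpWebResponse)ex.Response;
              }

              using (response)
              {
                  if (response.StatusCode != HttpStatusCode.OK)
                  {
                      // single place to treat anything other than 200 OK as an error
                  }
              }
          }
      }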

    Read the article

  • How to dynamically compose a bitmask?

    - by mystify
    Let's say I have to provide a value as a bitmask:

      NSUInteger options = kFoo | kBar | kFooBar;

    And let's say that bitmask is really huge and may have 100 options, but which options I have depends on a lot of situations. How could I dynamically compose such a bitmask? Is this valid?

      NSUInteger options;
      if (foo) { options = options | kFoo; }
      if (bar) { options = options | kBar; }
      if (fooBar) { options = options | kFooBar; }

    (Despite the fact that this would probably crash when doing that | bitmask operator thing to "nothing".)
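
    A minimal sketch of the fix: the pattern is fine as long as the mask starts at 0 (no options set), so the only real problem above is OR-ing into an uninitialized value.

      NSUInteger options = 0;           // start with no options set
      if (foo)    options |= kFoo;
      if (bar)    options |= kBar;
      if (fooBar) options |= kFooBar;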

    Read the article

  • Filtering a MySQL query result according to a timestamp interval

    - by celalo
    Let's say I have a very large MySQL table with a timestamp field, and I want to filter out some of the results so as not to have too many rows, because I am going to print them. Let's say the timestamps increase as the number of rows increases, with roughly one row per minute on average. (It does not have to be exactly once every minute, e.g.: 2010-06-07 03:55:14, 2010-06-07 03:56:23, 2010-06-07 03:57:01, 2010-06-07 03:57:51, 2010-06-07 03:59:21 ...) As I mentioned earlier, I want to filter out some of the records. I do not have a specific rule for doing that, but I was thinking of filtering the rows according to the timestamp interval. After filtering, I want a result set that has a certain number of minutes between timestamps on average (e.g.: 2010-06-07 03:20:14, 2010-06-07 03:29:23, 2010-06-07 03:38:01, 2010-06-07 03:49:51, 2010-06-07 03:59:21 ...) Last but not least, the operation should not take an incredible amount of time; I need this functionality to be almost as fast as a normal select operation. Do you have any suggestions?
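
    One possible approach (a sketch with hypothetical table and column names): group the rows into fixed time buckets and keep one row per bucket, so a day of one-minute samples collapses to roughly one row every ten minutes.

      SELECT MIN(id) AS id,
             MIN(ts) AS ts
      FROM   readings
      WHERE  ts >= '2010-06-07 00:00:00'
        AND  ts <  '2010-06-08 00:00:00'
      GROUP  BY FLOOR(UNIX_TIMESTAMP(ts) / 600)   -- 600 s = one bucket per 10 minutes
      ORDER  BY ts;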

    Read the article

  • Only show content when certain criteria are met?

    - by Elliot
    I'm wondering if there's a best practice for what I'm trying to accomplish... First, the models: Category has_many :posts. Now let's say users add posts, and I have a page where I want to display only the current user's posts, grouped by category. Let's say we have the following categories: A, B, and C, and user 1 has posted in categories A and B. In my view I have something like:

      @categories.each do |category|
        category.name
        @posts.each do |post|
          if post.category_id == category.id
            # post content here
          end
        end
      end

    The problem with this is that I'm going to show the empty category as well as the categories that do have content. Is there a more efficient way of going about this? I don't want to show the empty categories. Best, Elliot
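
    A minimal sketch of one alternative (assuming Post belongs_to :category): group the already-loaded @posts by category once, so categories without any of the user's posts simply never appear.

      @posts.group_by(&:category).each do |category, posts|
        # category.name here, then render only this user's posts:
        posts.each do |post|
          # post content here
        end
      end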

    Read the article

  • Ruby on Rails - Belongs_to and Active Admin not creating foreign ID

    - by Milo
    I have the following setup:

      class Category < ActiveRecord::Base
        has_many :products
      end

      class Product < ActiveRecord::Base
        belongs_to :category
        has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "200x200>" }
        validates_attachment_content_type :photo, :content_type => /\Aimage\/.*\Z/
      end

      ActiveAdmin.register Product do
        permit_params :title, :price, :slideshow, :photo, :category

        form do |f|
          f.inputs "Product Details" do
            f.input :title
            f.input :category
            f.input :price
            f.input :slideshow
            f.input :photo, :required => false, :as => :file
          end
          f.actions
        end

        show do |ad|
          attributes_table do
            row :title
            row :category
            row :photo do
              image_tag(ad.photo.url(:medium))
            end
          end
        end

        index do
          column :id
          column :title
          column :category
          column :price do |product|
            number_to_currency product.price
          end
          actions
        end
      end

      class ProductController < ApplicationController
        def create
          @product = Product.create(params[:id])
        end
      end

    Every time I make a product item in ActiveAdmin the category comes up empty. It won't populate the category_id column for the product; it just leaves it empty. What am I doing wrong?
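
    A likely culprit (a sketch of an assumption, not a confirmed diagnosis): the category select in the form submits category_id, and strong parameters silently drop it because only :category is whitelisted; permitting the foreign key is usually the fix.

      ActiveAdmin.register Product do
        # permit the foreign key that the form actually submits
        permit_params :title, :price, :slideshow, :photo, :category_id
        # ... form / show / index blocks as above ...
      end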

    Read the article

  • text indexes vs integer indexes in mysql

    - by imanc
    Hey, I have always tried to have an integer primary key on a table, no matter what. But now I am questioning whether this is always necessary. Let's say I have a product table and each product has a globally unique SKU number; that would be a string of, say, 8-16 characters. Why not make this the PK? Typically I would make this field a unique index but then have an auto-incrementing int field as the PK, as I assumed it would be faster and easier to maintain, and would allow me to do things like get the last 5 records added with ease. But in terms of optimisation, assuming I'd only ever be matching the full text field and not doing partial text-matching queries (e.g. LIKE '%%'), can you guys think of any reasons not to use a text-based primary key, most likely of type varchar()? Cheers, imanc
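
    For reference, a minimal sketch of the two designs being weighed (hypothetical names): the natural SKU key versus a surrogate integer key with a unique constraint on the SKU.

      -- natural key: the SKU itself is the primary key
      CREATE TABLE products_natural (
          sku  VARCHAR(16) PRIMARY KEY,
          name VARCHAR(255) NOT NULL
      ) ENGINE=InnoDB;

      -- surrogate key: auto-increment PK plus a unique index on the SKU
      CREATE TABLE products_surrogate (
          id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          sku  VARCHAR(16) NOT NULL,
          name VARCHAR(255) NOT NULL,
          UNIQUE KEY uq_sku (sku)
      ) ENGINE=InnoDB;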

    Read the article

  • XML TableLayout? Two EQUAL-width rows filled with equal-width buttons??

    - by Oliver Goossens
    Hi, here's a part of my XML for landscape format:

      <TableLayout
          android:layout_height="wrap_content"
          android:layout_width="wrap_content"
          android:layout_gravity="center"
          android:stretchColumns="*">
        <TableRow>
          <Button android:id="@+id/countbutton" android:text="@string/plus1"/>
          <Button android:id="@+id/resetbutton" android:text="@string/reset" />
        </TableRow>
      </TableLayout>

    And now what I don't get: the WIDTH of the row, and also of each button, depends on the TEXT inside the button. If both texts are equally long, let's say "TEXT", it's OK; the table's halfway point is in the middle of the screen. But if they have different lengths, let's say "A" and "THIS IS THE LONG BUTTON", the CENTER of the table isn't in the middle of the screen anymore, and so the buttons are not equally wide... I can't find any solution... Please advise... Thank you, Oliver Goossens
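
    A minimal sketch of one common fix (an assumption, not from the post): give each button a zero base width and an equal layout_weight, so the row is split 50/50 regardless of the text length.

      <TableRow>
        <Button
            android:id="@+id/countbutton"
            android:text="@string/plus1"
            android:layout_width="0dip"
            android:layout_weight="1" />
        <Button
            android:id="@+id/resetbutton"
            android:text="@string/reset"
            android:layout_width="0dip"
            android:layout_weight="1" />
      </TableRow>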

    Read the article

  • Ordered many-to-many relationship in NHibernate

    - by Kristoffer
    Let's say I have two classes: Item and ItemCollection, where ItemCollection contains an ordered list of Item objects with an index, i.e. the list is ordered in a way specified by the user. Let's also say that they have a many-to-many relationship: an ItemCollection can contain many Items, and an Item can belong to several ItemCollections. That would, in my head, require three tables in the database: one for Item, one for ItemCollection, and one for the ordered mapping. The mapping table would contain three columns:

      int ItemID
      int ItemCollectionID
      int ListIndex

    QUESTION: How would you design the ItemCollection class? Should the list of Item objects be a list, a dictionary, or something else? What would the NHibernate mapping look like to get the ListIndex into the picture?
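
    A minimal sketch of how this is typically mapped (hypothetical table and column names): NHibernate's <list> collection persists the position in a list-index column, so ItemCollection can expose a plain IList<Item> that comes back in the user's order.

      <!-- ItemCollection.hbm.xml -->
      <class name="ItemCollection" table="ItemCollection">
        <id name="Id" column="ItemCollectionID">
          <generator class="native" />
        </id>
        <!-- C# side: public virtual IList<Item> Items { get; set; } -->
        <list name="Items" table="ItemCollectionItem">
          <key column="ItemCollectionID" />
          <list-index column="ListIndex" />
          <many-to-many class="Item" column="ItemID" />
        </list>
      </class>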

    Read the article
