Search Results

Search found 7897 results on 316 pages for 'generate'.

Page 75 of 316

  • Absolute URL Generation with Subdirectory in Rails

    - by Hulihan Applications
    Hey guys - I'm trying to find an easy way (without having to write any plugins or overrides to Rails) to generate an absolute URL to a Rails application running in a subdirectory. This URL is going to be generated for theme images, not links, so I can't use link_to or url_for. Here's an example of what I want to do: <%= theme_image("delete_icon.png", :theme => "my_theme") %> If my app is running at http://localhost, this would return: <img src="/themes/my_theme/images/delete_icon.png">, but if my app is running at http://localhost/myapp, this would return: <img src="/myapp/themes/my_theme/images/delete_icon.png">. Can anyone point me in the right direction to generate a dynamic, absolute URL to a non-routed resource?

    Read the article

  • Hibernate won't autogenerate sequence table

    - by Jason
    I'm trying to use a sequence table to generate keys for my entities. I notice that if I just use the @GeneratedValue(strategy=GenerationType.TABLE) with no explicit generator, Hibernate will automatically create the hibernate_sequences table in my DB if it doesn't exist. This is great. However, I wanted to make some changes to the sequence table, so I created a @TableGenerator like the following: @GeneratedValue(strategy=GenerationType.TABLE, generator="vdat_seq") @TableGenerator(name="vdat_seq", table="VDAT_SEQ", pkColumnName="seq_name", valueColumnName="seq_next_val", allocationSize=1) This works fine if I manually create the VDAT_SEQ table in my schema; Hibernate won't auto-create it anymore. This causes an issue in my unit tests, since I'd rather not have to manually create a table and maintain it on our testing DB. Is there a configuration variable or some other way to get Hibernate to generate the sequence table?

    Read the article

  • What's some simple F# code that generates the .tail IL instruction?

    - by kld2010
    I'd like to see the .tail IL instruction, but the simple recursive functions using tail calls that I've been writing are apparently optimized into loops. I'm actually guessing on this, as I'm not entirely sure what a loop looks like in Reflector. I definitely don't see any .tail opcodes though. I have "Generate tail calls" checked in my project's properties. I've also tried both Debug and Release builds in Reflector. The code I used is from Programming F# by Chris Smith, page 190:

        let factorial x =
            // Keep track of both x and an accumulator value (acc)
            let rec tailRecursiveFactorial x acc =
                if x <= 1 then acc
                else tailRecursiveFactorial (x - 1) (acc * x)
            tailRecursiveFactorial x 1

    Can anyone suggest some simple F# code which will indeed generate .tail?

    Read the article

  • Randomly choose value between 1 and 10 with equal number of instances

    - by user1723765
    I would like to generate 2000 random numbers between 1 and 10 such that each number appears the same number of times - in this case, 200 of each. Only the order in which they are generated should be random. I also have the following problem: I have an array with 2000 entries whose values are not unique; for example, it starts like this: 11112233333333344445667777777777. I would like to generate random numbers and assign each UNIQUE value its own random number, while keeping an entry for each position. So my intended result would look like this: original array: 11112233333333344445667777777777, random numbers: 33334466666666699991778888888888.
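
    A minimal sketch of both parts in Python (the question doesn't name a language, so Python is used only for illustration; the array literal below is a shortened stand-in for the real 2000-entry input):

        import random

        # Part 1: 2000 values, exactly 200 of each number 1..10, in random order.
        values = [n for n in range(1, 11) for _ in range(200)]
        random.shuffle(values)

        # Part 2: assign each UNIQUE symbol in an existing array its own random
        # code (distinct codes, as in the example), keeping one entry per position.
        original = list("11112233333333344445")                  # shortened example input
        unique_syms = sorted(set(original))
        codes = random.sample(range(1, 11), len(unique_syms))    # distinct random codes
        mapping = dict(zip(unique_syms, codes))
        randomized = [mapping[sym] for sym in original]

        print(values[:20])
        print(randomized)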

    Read the article

  • Create set of random JPGs

    - by Kylar
    Here's the scenario: I want to create a set of random, small JPEGs - anywhere between 50 bytes and 8k in size - the actual visual content of the JPEG is irrelevant as long as they're valid. I need to generate a thousand or so, and they all have to be unique - even if they're only different by a single pixel. Can I just write a JPEG header/footer and some random bytes in there? I'm not able to use existing photos or sets of photos from the web. The second issue is that the set of images has to be different for each run of the program. I'd prefer to do this in Python, as the wrapping scripts are in Python. I've looked for Python code to generate JPEGs from scratch and didn't find anything, so pointers to libraries are just as good.
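
    One possible sketch using Pillow (this assumes the Pillow package is an acceptable dependency, and the exact file sizes depend on the dimensions and quality chosen, so the byte bounds are only approximate): each image gets random dimensions and random pixel data, which produces valid JPEGs and makes duplicates effectively impossible.

        import os
        import random
        from PIL import Image

        def make_random_jpegs(out_dir, count=1000):
            os.makedirs(out_dir, exist_ok=True)
            for i in range(count):
                # Small random dimensions keep the files roughly in the
                # hundreds-of-bytes to few-KB range; random pixels make each unique.
                w, h = random.randint(1, 40), random.randint(1, 40)
                pixels = bytes(random.getrandbits(8) for _ in range(w * h * 3))
                img = Image.frombytes("RGB", (w, h), pixels)
                img.save(os.path.join(out_dir, f"rand_{i:04d}.jpg"),
                         quality=random.randint(30, 95))

        make_random_jpegs("random_jpegs", count=10)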

    Read the article

  • How do you pass objects between View Controllers in Objective-C?

    - by editor
    I've been trudging through some code for two days trying to figure out why I couldn't fetch a global NSMutableArray variable I declared in the .h, implemented in the .m, and set in the viewDidLoad method. It finally dawned on me: there's no such thing as a global variable in Objective-C, at least not in the PHP sense I've come to know. I didn't ever really read the Xcode error warnings, but there it was, even if not quite in plain English: "Instance variable 'blah' accessed in class method." My question: What am I supposed to do now? I've got two View Controllers that need to access a central NSMutableDictionary I generate from a JSON file via URL. It's basically an extended menu for all my Table View drill downs, and I'd like to have a couple of other "global" (non-static) variables. Do I have to grab the JSON each time I want to generate this NSMutableDictionary, or is there some way to set it once and access it from various classes via #import? Do I have to write data to a file, or is there another way people usually do this?

    Read the article

  • Is it a good idea to include a large text variable in compiled code?

    - by gladman
    I am writing a program that produces a formatted file for the user, but it's not only producing the formatted file; it does more. I want to distribute a single binary to the end user, and when the user runs the program, it will generate the XML file for the user with the appropriate data. In order to achieve this, I want to store the file contents in a char array variable that is compiled into the code. When the user runs the program, I will write out the char array to generate an XML file for the user.

        const char* buffers = "an XML format file's contents, \
            this represents many blocks of text \
            from a file, ...";

    I have two questions. Q1. Do you have any other ideas for how to compile my file contents into the binary, i.e., distribute it as one binary file? Q2. Is this even a good idea, as I described above?
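
    For Q1, one common approach is to generate the char array from the real file at build time rather than maintaining it by hand. A rough sketch of such a generator script in Python (the file names and the variable name here - template.xml, embedded_xml.h, g_xml_template - are made up for illustration, not taken from the question):

        # embed_file.py - turn a data file into a C header containing a byte array,
        # similar in spirit to "xxd -i".
        def write_c_header(src_path, header_path, var_name):
            data = open(src_path, "rb").read()
            with open(header_path, "w") as out:
                out.write(f"static const unsigned char {var_name}[] = {{\n")
                for i in range(0, len(data), 12):
                    chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
                    out.write(f"    {chunk},\n")
                out.write("};\n")
                out.write(f"static const unsigned int {var_name}_len = {len(data)};\n")

        if __name__ == "__main__":
            write_c_header("template.xml", "embedded_xml.h", "g_xml_template")

    The generated header can then be included and written out at runtime, and the data stays in sync with the source file on every build.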

    Read the article

  • Macro C++ Issues __VA_ARGS__

    - by CodeLizard
    What (if any) are some potential problems with a C++ macro usage like this? Would an inline function be a more appropriate solution?

        #define EVENT_INFO(_format_, ...) \
            CMyEvent::Generate(__FILE__, __LINE__, CMyEvent::EVT_HIGH, _format_, __VA_ARGS__)

        void CMyEvent::Generate(
            const char* file,        // filename
            int line,                // line number
            CMyEvent::LEVEL level,   // severity level
            const char *format,      // format of the msg / data
            ...)                     // variable arguments
        {
            // Get a message from the pool
            CMyEvent* p_msg = GetMessageFromPool();

            if(p_msg != NULL)
            {
                va_list arguments;   // points to each unnamed argument
                va_start(arguments, format);

                // Fill the object with strings and data.
                p_msg->Fill(file, line, level, 0, format, arguments);

                va_end(arguments);
            }
        }

    Read the article

  • How to apply Seq map function?

    - by netmatrix01
    Hello all, I've recently been playing with F#. Instead of using a for loop to generate a sequence in which every element is multiplied with every other element in the list, how can I use a Seq map function or something similar? For example, given the list [1..10], I would like to apply a function that generates a result like [(1*1); (1*2); (1*3); (1*4); (1*5) ... (2*1); (2*2); (2*3) ... (3*1); (3*2) ...]. How can I achieve this? Many thanks for all your help.
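
    The question is about F#, but as a language-neutral illustration of the shape of the computation (a flat map: for each outer element, produce the whole row of products, then flatten the rows into one sequence), here is a minimal Python sketch:

        xs = list(range(1, 11))

        # Flat-map over xs: every x is paired with every y, rows flattened in order.
        products = [x * y for x in xs for y in xs]

        print(products[:12])   # 1, 2, 3, ..., 10, 2, 4, ...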

    Read the article

  • What code commenting style should I use for both C++ and C# ?

    - by ereOn
    Hi, I was recently hired by a company. They have a lot of C++ and C# source files and almost a different commenting style in every file. (Well, this is exaggerated, but you get the picture.) This morning, I showed my new colleagues how my personal code was commented and how easy it was to generate documentation from it (I use Doxygen). I somehow convinced them to use a coherent commenting style. I would suggest using Doxygen comments for C++ source files and the native C# XML commenting style for C# files. However, I'm not sure that's the best solution here. Are you aware of a commenting style that would work for both C++ and C#, allowing us to generate documentation from it? If so, what software/tool should we use? Having a solution that would also work on Linux would be even better. Thank you very much.

    Read the article

  • Good Form for Random Number Generator?

    - by JackCJR
    I wanted to get a few opinions on doing a random number generator. I read through a couple different ways to do it and this is the way that I created after doing a little reading. Is it acceptable form? Any foreseeable issues?

        function master(){
            function generate(){
                var pickFrom= "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
                var j = pickFrom.length;
                var i = Math.floor(Math.random()*j+1); //Generates a random number
                var code = pickFrom.charAt(i) //Picks a number based on what number was picked.
                document.write(code)
            }

            x = 1;
            while(x){
                generate();
                x--;
            }
        }

        master();

    Read the article

  • How much does precomputation (matching a series of strings and their permutations with a set number

    - by nipun
    Consider a typical slot machine with n reels (say reel1: a, b, c, d, w1, d, b, etc.). On each play we generate a concatenated string of n objects (characters, as above). We have a paytable which lists winning strings with payout amounts. The problem is the wild characters (list of wilds: w1, w2), each of which can substitute for certain symbols, e.g. {w1: a, b, c}, {w2: a}, etc. Is it really worthwhile to precompute all possible permutations of the winning strings with the wilds and use those, or should I simply generate all combinations from the pattern in hand at the time it occurs? I didn't really see much difference initially, but now that I need to scale the machine to handle 11+ reels with a much higher concentration of wilds than before, I need to figure out the exact approach for this particular bit. Any ideas will be really appreciated :)
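
    For the "generate at the time of occurrence" option, a minimal Python sketch (the wild map and paytable below are made-up placeholders, not from the question): it expands a single spin result into every concrete string it can represent and looks each one up in the paytable, so no wild permutations need to be precomputed.

        from itertools import product

        # Made-up example data; the real reels/paytable come from the game config.
        WILD_SUBS = {"w1": ["a", "b", "c"], "w2": ["a"]}
        PAYTABLE = {"aaa": 50, "abc": 10, "bbb": 25}

        def expansions(spin):
            """All concrete strings a spin result can stand for, expanding wilds."""
            options = [WILD_SUBS.get(sym, [sym]) for sym in spin]
            return ("".join(combo) for combo in product(*options))

        def best_payout(spin):
            # Evaluate at occurrence time instead of using precomputed permutations.
            return max((PAYTABLE.get(s, 0) for s in expansions(spin)), default=0)

        print(best_payout(["a", "w1", "a"]))   # w1 can be a/b/c, so "aaa" pays 50

    The number of expansions grows with the count of wilds in a single spin, which is the trade-off the question is weighing against precomputing the wild variants of each paytable entry.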

    Read the article

  • Retrieve data like rework %, schedule and effort variance from Microsoft Project

    - by Ram
    Hi, I need to generate various metrics from my MS Project file for a period of one month. I need to generate the following reports: Schedule Variance, Effort Variance, Rework Percentage, and Wasted Effort. For the rework percentage, I am using a condition like: task.Start should be greater than or equal to the period start date and task.Finish should be less than or equal to the period finish date. But I am concerned about the tasks that start before the period start date and end before the period end date. In that situation I only need the rework % for the number of hours spent between the period start and end, and not for the hours spent before the start date. The same applies to tasks that start before the period end date but finish after it. Any pointers would be a great help. Thanks.
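
    One way to handle tasks that straddle the reporting period is to clip each task's dates to the window and count only the overlapping share of its hours. A rough Python sketch of that idea (it assumes hours are spread evenly across the task's duration, which is a simplification; the dates and hours are made-up examples):

        from datetime import date

        def hours_in_window(task_start, task_finish, task_hours, win_start, win_end):
            """Portion of a task's hours that falls inside the reporting window,
            assuming the hours are spread evenly over the task's duration."""
            overlap_start = max(task_start, win_start)
            overlap_end = min(task_finish, win_end)
            overlap_days = (overlap_end - overlap_start).days + 1
            if overlap_days <= 0:
                return 0.0
            total_days = (task_finish - task_start).days + 1
            return task_hours * overlap_days / total_days

        # Task starting before the window but finishing inside it:
        print(hours_in_window(date(2010, 3, 20), date(2010, 4, 10), 80,
                              date(2010, 4, 1), date(2010, 4, 30)))   # ~36.4 of 80 hrs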

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction

    In this article, I'll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows.

    Analysis

    I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks, and I wanted to create a tag cloud. Tags that are the most popular should have a better chance of being picked and should be displayed larger than less popular ones.

    Implementation

    In this code snippet, there is everything you need.

        ' Min size, in pixels, for the tag
        Private Const MIN_FONT_SIZE As Integer = 9
        ' Max size, in pixels, for the tag
        Private Const MAX_FONT_SIZE As Integer = 14

        ' Basic function that retrieves Tags from a database
        Public Shared Function GetTags() As MediasTagsDataTable
            ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks
            Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide

            ' If the query returned no result, return an empty DataTable
            If dt Is Nothing OrElse dt.Rows.Count < 1 Then
                Return New MediasTagsDataTable
            End If

            ' Set the font-size of the group of data
            ' We are dividing our results into subsets, according to their number of clicks
            ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] will get 11, ...

            ' This is the number of elements in one group
            Dim groupLenth As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer)
            ' Counter of elements in the same group
            Dim counter As Integer = 0
            ' Counter of groups
            Dim groupCounter As Integer = 0

            ' Loop through the list
            For Each row As MediasTagsRow In dt
                ' Set the font-size in a custom column
                row.c_FontSize = MIN_FONT_SIZE + groupCounter
                ' Increment the counter
                counter += 1
                ' If the counter has reached the group length
                If groupLenth <= counter Then
                    ' Start a new group
                    counter = 0
                    groupCounter += 1
                End If
            Next

            ' Return the new DataTable with font-size
            Return dt
        End Function

        ' Function that generates the random subset
        Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable
            ' Get the data
            Dim dt As MediasTagsDataTable = GetTags()
            ' Create a new DataTable that will contain the random set
            Dim rep As MediasTagsDataTable = New MediasTagsDataTable
            ' Count the number of rows in the new DataTable
            Dim count As Integer = 0
            ' Random number generator
            Dim rand As New Random()

            While count < KeyCount
                Randomize()
                ' Pick a random row
                Dim r As Integer = rand.Next(0, dt.Rows.Count - 1)
                Dim tmpRow As MediasTagsRow = dt(r)
                ' Import it into the new DataTable
                rep.ImportRow(tmpRow)
                ' Remove it from the old one, to be sure not to pick it again
                dt.Rows.RemoveAt(r)
                ' Increment the counter
                count += 1
            End While

            ' Return the new subset
            Return rep
        End Function

    Pros

    This method is good because it doesn't require much work to get it working fast. It is a good approach when you are working with small tables, let's say fewer than 100 records.

    Cons

    If you have more than 100 records, an out-of-memory exception may occur since we are copying and duplicating rows. I would consider using a stored procedure instead.

    Read the article

  • XAML2CPP 1.0.2.0

    - by Valter Minute
    A new updated release of everybody's favourite XAML to CPP conversion tool (at least because it's the only one available!).

    New features:
    - support for resource dictionaries (app.xaml if you use Blend to generate your XAML)

    Bugfixes:
    - the parameters for the mouseleftbuttondown and up events were incorrect

    As usual you can download the new release here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/XAML2CPP.zip

    Technorati Tags: XAML, Silverlight for Windows Embedded

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it was that simple I wouldn't be writing this post.

    I'll take "boundary" for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

    Note: This test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 - 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.

    If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)

                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions
                    WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH

            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) start_memory_range_kb,
               MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers,
               regular_buffer_size,
               total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals.

    - Mike

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping, but it becomes very messy and won't perform. What other options do you have?

    The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of BizTalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this: we create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement.

        XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

        foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus miss elements:

        foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    These look very similar, and they are. The key element is the method we are calling, StreamReader. This method is what gives us streaming: what it does is return an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
        static IEnumerable<XElement> StreamReader(String filename, string elementName)
        {
            using (XmlReader xr = XmlReader.Create(filename))
            {
                xr.MoveToContent();
                while (xr.Read()) // Reads the first element
                {
                    while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                    {
                        XElement node = (XElement)XElement.ReadFrom(xr);
                        yield return node;
                    }
                }
                xr.Close();
            }
        }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read, we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again, we would advance the XmlReader beyond the start of the element and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • Visio Forward Engineer Addin for Office 2010

    - by AlbertoFerrari
    Most of my database models are written with Visio. I don't want to start a digression on whether Visio is good or not for building a simple data model: Visio is enough for my modeling needs, and customers love its colours and the ability to open the model with Office when I need to discuss it with them. When I have finished modeling, I generate the database and everything works fine. Nevertheless, Microsoft seems not to like the forward engineering capabilities of Visio. The last release that supports forward...(read more)

    Read the article

  • SQL Server Management Data Warehouse - quick tour on setting health monitoring policies

    - by ssqa.net
    Profiler, Perfmon, DMVs & scripts are legendary tools for a DBA to monitor the SQL arena. In line with these tools, SQL Server 2008 delivers a powerful combination: the policy-based management (PBM) framework and the management data warehouse (MDW), a relational database that contains the data collected from a server that is a data collection target. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports. .....(read more)

    Read the article

  • EF 4’s PluralizationService Class: A Singularly Impossible Plurality

    - by Ken Cox [MVP]
    Entity Framework’s new 4.0 designer does its best to generate correct plural and singular forms of object names. This magic is done through the PluralizationService Class found in the System.Data.Entity.Design.PluralizationServices namespace and in the System.Data.Entity.Design.dll assembly. [Before you ask… Yes, I’ll post my example page, the service, and the project source code as soon as my ISP makes ASP.NET 4 RTM available. Stay tuned.] Anyone who speaks English is brutally aware of the ridiculous...(read more)

    Read the article
