Search Results

Search found 6329 results on 254 pages for 'linq to csv'.

Page 159 of 254

  • Python script runs well, but not perfectly; debugging help

    - by S1syphus
    What it does (or is meant to do): the script reads from a CSV file that contains information on sound files and creates a playlist exactly 60 minutes long. Each row holds a title, a duration (in seconds), and a minimum total time to be played (in minutes). An example:

        Soundfoo,120,10
        Soundbar,30,6
        Sounddev,60,20
        Soundrandom,15,8

    The script works out the minimum number of plays for each sound. Take 'Soundfoo' for example: the length of each sample is 120 seconds and the minimum time to be played is 10 minutes, so basic maths (10*60/120) gives the number of times the song is to be played, in this case 5. It is meant to take those minimum plays and spread them out equally from each other, so there will never be a period where, for example, Soundbar is played twice in a row. Then, if the minimum plays of each song have been used and there is still time within the 60 minutes, how is it possible to tell it to go back and fill the remaining time by selecting each sound in turn until the 60 minutes are filled, while remaining sparsely distributed?

    Here's the issue: the script fails to calculate the actual time required to play all the sounds in a file and the total time of the playlist. The thing is, though, it doesn't get it wrong all the time, maybe 3/5 times; even if I run it on the same CSV file it will give me different answers. Here is the file I shall run the script on, for the sake of ease in seeing the issue:

        Sound1,60,10
        Sound2,60,10
        Sound3,60,10
        Sound4,60,10
        Sound5,60,10
        Sound6,60,10

    I'll do it three times and post the results:

        1) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 62
           Total playtime in minutes: 62.0
        2) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 71
           Total playtime in minutes: 71.0
        3) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 60
           Total playtime in minutes: 60.0

    Relevant code, in context: http://pastebin.com/demkBXk6

    If you made it down to here, thanks for staying and reading, kudos.

    Read the article

  • Java: Combine 2 List <String[]>

    - by battousai622
    I have two Lists of string arrays. I want to be able to create a new List (newList) by combining the two lists, subject to these three conditions:

    1) Copy the contents of store_inventory into newList.
    2) If the item names in store_inventory and new_acquisitions match, just add the two quantities together and change the entry in newList.
    3) If new_acquisitions has a new item that does not exist in store_inventory, add it to newList.

    The titles for the CSV list are: Item Name, Quantity, Cost, Price. Each List contains a String[] of item name, quantity, cost and price for each row.

        CSVReader from = new CSVReader(new FileReader("/test/new_acquisitions.csv"));
        List<String[]> acquisitions = from.readAll();
        CSVReader to = new CSVReader(new FileReader("/test/store_inventory.csv"));
        List<String[]> inventory = to.readAll();
        List<String[]> newList;

    Any code to get me started would be great! =] This is what I have so far...

        for (int i = 0; i < acquisitions.size(); i++) {
            temp1 = acquisitions.get(i);
            for (int j = 1; j < inventory.size(); j++) {
                temp2 = inventory.get(j);
                if (temp1[0].equals(temp2[0])) {
                    // if match found... do something?
                    // break out of loop
                }
            }
            // if new item found... do something?
        }

    Read the article

  • Take Current Snapshot of DB and send it to FTP in same PHP Scripts: Advice Needed

    - by Rachel
    Not sure if I can do it this way. I want to get a current snapshot of the database and send it via FTP, and both parts should be implemented in PHP scripts. Here are the steps I am thinking of right now. In my PHP scripts (basically I am extending PDO in my DAO class and then preparing the query):

        $qry = "SELECT * FROM MyTablename";
        $stmt = $this->prepare($qry);
        $stmt = $this->execute();

    Now I will store $stmt in a CSV file using fputcsv, or I will execute the SQL command from the script itself and then try to store the result in the $file (CSV file). Note that I do not have any CSV file with me at this point, so basically I will have to create one; let's say it's $file, so then:

        $file = fputcsv($stmt);

    or

        $file = exec("SELECT * FROM MyTablename");

    Will this put all records in the file? If yes, then I will use FTP functionality to transfer the file to the FTP folder. I am not sure if this approach would work, and I also have concerns regarding the need for preparing the $qry. Any suggestions or a different approach advised would be highly appreciated. Thanks!!!

    Read the article

  • Am I using handlers in the wrong way?

    - by superexsl
    Hey, I've never used HTTP handlers before, and I've got one working, but I'm not sure if I'm actually using it properly. I have generated a string which will be saved as a CSV file. When the user clicks a button, I want the download dialog box to open so that the user can save the file. What I have works, but I keep reading about modifying the web.config file and I haven't had to do that.

    My handler:

        private string _data;
        private string _title = "temp";

        public void AddData(string data)
        {
            _data = data;
        }

        public bool IsReusable
        {
            get { return false; }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/csv";
            context.Response.AddHeader("content-disposition", "filename=" + _title + ".csv");
            context.Response.Write(_data);
            context.Response.Flush();
            context.Response.Close();
        }

    And this is from the page that allows the user to download (on button click):

        string dataToConvert = "MYCSVDATA....";
        csvHandler handler = new csvHandler();
        handler.AddData(dataToConvert);
        handler.ProcessRequest(this.Context);

    This works fine, but no examples I've seen ever instantiate the handler, and they always seem to modify the web.config. Am I doing something wrong? Thanks
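
    For what it's worth, a hedged note: the web.config registration you keep reading about (or an .ashx file) is what lets the ASP.NET pipeline construct the handler and route a URL to it. Constructing it yourself, as above, just makes it an ordinary class, which works but isn't really using the handler infrastructure. Since you already have the page's HttpContext, a minimal sketch that skips the handler class entirely (BuildCsvString is a hypothetical helper, not part of the question's code):

        // A minimal sketch, assuming this runs in the button's click event on the page.
        protected void DownloadButton_Click(object sender, EventArgs e)
        {
            string csv = BuildCsvString();   // hypothetical helper producing the CSV text
            Response.Clear();
            Response.ContentType = "text/csv";
            // "attachment" asks the browser to show the save dialog instead of rendering inline
            Response.AddHeader("content-disposition", "attachment; filename=temp.csv");
            Response.Write(csv);
            Response.End();
        }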

    Read the article

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.

    I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.

    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there are a lot of changes being made to the views all the time, and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require.

    Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?

    Read the article

  • Object reference not set to an instance of an object.

    - by Lambo
    I have the following code:

        private static void convert()
        {
            string csv = File.ReadAllText("test.csv");
            string year = "2008";
            XDocument doc = ConvertCsvToXML(csv, new[] { "," });
            doc.Save("update.xml");
            XmlTextReader reader = new XmlTextReader("update.xml");
            XmlDocument testDoc = new XmlDocument();
            testDoc.Load(@"update.xml");
            XDocument turnip = XDocument.Load("update.xml");
            webservice.singleSummary[] test = new webservice.singleSummary[1];
            webservice.FinanceFeed CallWebService = new webservice.FinanceFeed();
            foreach (XElement el in turnip.Descendants("row"))
            {
                test[0].account = el.Descendants("var").Where(x => (string)x.Attribute("name") == "account").SingleOrDefault().Attribute("value").Value;
                test[0].actual = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "actual").SingleOrDefault().Attribute("value").Value);
                test[0].commitment = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "commitment").SingleOrDefault().Attribute("value").Value);
                test[0].costCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "costCentre").SingleOrDefault().Attribute("value").Value;
                test[0].internalCostCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "internalCostCentre").SingleOrDefault().Attribute("value").Value;
                MessageBox.Show(test[0].account, "Account");
                MessageBox.Show(System.Convert.ToString(test[0].actual), "Actual");
                MessageBox.Show(System.Convert.ToString(test[0].commitment), "Commitment");
                MessageBox.Show(test[0].costCentre, "Cost Centre");
                MessageBox.Show(test[0].internalCostCentre, "Internal Cost Centre");
                CallWebService.updateFeedStatus(test, year);
            }
        }

    It is coming up with "NullReferenceException was unhandled", saying that the object reference was not set to an instance of an object. The error occurs on the first test[0].account line. How can I get past this?
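
    A minimal sketch of the likely cause: new webservice.singleSummary[1] allocates a one-slot array of references, but the slot itself is still null, so the first assignment to test[0].account dereferences null. Constructing the element first (assuming singleSummary has a parameterless constructor) should get past it:

        // Hypothetical fix: allocate the element, not just the array slot.
        webservice.singleSummary[] test = new webservice.singleSummary[1];
        test[0] = new webservice.singleSummary();   // without this line, test[0] is null

    Note that the same exception can also come from SingleOrDefault() returning null when no matching var element exists, so the chained .Attribute("value").Value calls are worth guarding too.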

    Read the article

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries: one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!

    Read the article

  • Using a function found in a different file in a loop

    - by Anders
    This question is related to BuddyPress, and is a follow-up question from this question. I have a CSV file with 790 rows and 3 columns, where the first column is the group name, the second is the group description and the last (third) is the slug. As far as I've been told I can use this code:

        <?php
        $groups = array();
        if (($handle = fopen("groupData.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                $group = array(
                    'group_id'     => 'SOME ID',
                    'name'         => $data[0],
                    'description'  => $data[1],
                    'slug'         => groups_check_slug(sanitize_title(esc_attr($data[2]))),
                    'date_created' => gmdate("Y-m-d H:i:s"),
                    'status'       => 'public'
                );
                $groups[] = $group;
            }
            fclose($handle);
        }
        foreach ($groups as $group) {
            groups_create_group($group);
        }

    With http://www.nomorepasting.com/getpaste.php?pasteid=35217 which is called bp-groups.php. The thing is that I can't make it work. I've created a new file with the code written above called groupgenerator.php, uploaded the CSV file to the same folder and opened groupgenerator.php in my browser. But, I get this error:

        Fatal error: Call to undefined function groups_check_slug() in

    What am I doing wrong?

    Read the article

  • convert jruby 1.8 string to windows encoding?

    - by Arne
    Hey, I want to export some data from my JRuby on Rails webapp to Excel, so I create a CSV string and send it as a download to the client using:

        send_data(text, :filename => "file.csv", :type => "text/csv; charset=CP1252", :encoding => "CP1252")

    The file seems to be in UTF-8, which Excel cannot read correctly. I googled the problem and found that iconv can convert encodings. I try to do that with:

        ic = Iconv.new('CP1252', 'UTF-8')
        text = ic.iconv(text)

    but when I send the converted text it does not make any difference. It is still UTF-8 and Excel cannot read the special characters. There are several solutions using iconv, so this seems to work for others. When I convert the file on the Linux shell manually with iconv it works. What am I doing wrong? Is there a better way?

    I'm using:
    - jruby 1.3.1 (ruby 1.8.6p287) (2009-06-15 2fd6c3d) (Java HotSpot(TM) Client VM 1.6.0_19) [i386-java]
    - Debian Lenny
    - Glassfish app server
    - Iceweasel 3.0.6

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it.

    The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a well-known location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual.

    Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", without worrying about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.

    Read the article

  • PostgreSQL storing paths for reference in scripts

    - by Brian D.
    I'm trying to find the appropriate place to store a system path in PostgreSQL. What I'm trying to do is load values into a table using the COPY command. However, since I will be referring to the same file path regularly, I want to store that path in one place. I've tried creating a function to return the appropriate path, but I get a syntax error when I call the function in the COPY command. I'm not sure if this is the right way to go about it, but I'll post my code anyway.

    COPY command:

        COPY employee_scheduler.countries (code, name)
        FROM get_csv_path('countries.csv')
        WITH CSV;

    Function definition:

        CREATE OR REPLACE FUNCTION employee_scheduler.get_csv_path(IN file_name VARCHAR(50))
        RETURNS VARCHAR(250) AS $$
        DECLARE
            path VARCHAR(200) := E'C:\\Brian\\Work\\employee_scheduler\\database\\csv\\';
            file_path VARCHAR(250) := '';
        BEGIN
            file_path := path || file_name;
            RETURN file_path;
        END;
        $$ LANGUAGE plpgsql;

    If anyone has a different idea on how to accomplish this, I'm open to suggestions. Thanks for any help!

    Read the article

  • Reflection and generics: get value of property

    - by GigaPr
    Hi, I am trying to implement a generic method which allows me to output a CSV file:

        public static void WriteToCsv<T>(List<T> list) where T : new()
        {
            const string attachment = "attachment; filename=PersonList.csv";
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.ClearHeaders();
            HttpContext.Current.Response.ClearContent();
            HttpContext.Current.Response.AddHeader("content-disposition", attachment);
            HttpContext.Current.Response.ContentType = "text/csv";
            HttpContext.Current.Response.AddHeader("Pragma", "public");

            bool isFirstRow = true;
            foreach (T item in list)
            {
                // Get public properties
                PropertyInfo[] propertyInfo = item.GetType().GetProperties();
                while (isFirstRow)
                {
                    WriteColumnName(propertyInfo);
                    isFirstRow = false;
                }
                Type type = typeof(T);
                StringBuilder stringBuilder = new StringBuilder();
                foreach (PropertyInfo info in propertyInfo)
                {
                    // string value ???? I am trying to get the value of the property info for the item object
                }
                HttpContext.Current.Response.Write(stringBuilder.ToString());
                HttpContext.Current.Response.Write(Environment.NewLine);
            }
            HttpContext.Current.Response.End();
        }

    but I am not able to get the value of the object's property. Any suggestion? Thanks
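
    A minimal sketch of the missing step: PropertyInfo.GetValue reads the property from a specific instance; the second argument is only for indexed properties, so null is fine here:

        // Inside the inner foreach: read each property's value from 'item'.
        foreach (PropertyInfo info in propertyInfo)
        {
            object raw = info.GetValue(item, null);           // null = no indexer arguments
            string value = raw == null ? "" : raw.ToString();
            stringBuilder.Append(value).Append(",");
        }

    (Escaping values that themselves contain commas or quotes is left out of the sketch.)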

    Read the article

  • perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a field called "sl. no." (serial number) in the CSV file for each row entered, but as I'm using threads, the serial numbers overlap in most rows. Below is an idea of my code snippet:

        for ( my $count = 1 ; $count <= 10 ; $count++ ) {
            my $t = threads->new( \&sub1, $count );
            push( @threads, $t );
        }
        foreach (@threads) {
            my $num = $_->join;
        }

        sub sub1 {
            my $num = shift;
            my $start = '...';    # distributing data based on an internal logic
            my $end   = '...';    # distributing data based on an internal logic
            my $next;
            for ( my $x = $start ; $x <= $end ; $x++ ) {
                my $count = $x + 1;
                # part of code from which I get @data, which has name and age
                my $j = 0;
                if ( $x != 0 ) {
                    $count = $next;
                }
                foreach (@data) {
                    # j is required here for some extra code
                    flock( OUTPUT, LOCK_EX );
                    print OUTPUT $count . "," . $name . "," . $age . "\n";
                    flock( OUTPUT, LOCK_UN );
                    $j++;
                    $count++;
                }
                $next = $count;
            }
            return $num;
        }

    I need the count to be incremented as the serial number for the rows that are inserted in the CSV file. Any help would be appreciated.

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query that is so complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one out of these, while also taking advantage of any opportunities to learn something new:

    1. Export each table into CSVs and import into SQLite, then make a query with it to do the same as what Access fails to do (merge 64 tables).
    2. Export each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.

    QUESTION: What do you recommend?

    Read the article

  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215

    and so on for other interfaces. I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...

    Simple as of now, but the problem is that I have to get the data in CSV format online, i.e. as soon as the log file is updated the CSV should also be updated. What I have tried, to read the output and make the header, is:

        #!/usr/bin/perl -w
        use strict;
        use File::Tail;

        my $head = ["Time"];
        my $pos = {};
        my $last_pos = 0;
        my $current_event = [];
        my $events = [];

        my $file = shift;
        $file = File::Tail->new($file);
        while (defined($_ = $file->read)) {
            next if $_ =~ some filters;
            my ($time, $interface, $count, $eve, $value) = split /[,\n]/, $_;
            my $eve_key = $interface . "." . $eve;
            if (not defined $pos->{$eve_key}) {
                $last_pos += 1;
                $pos->{$eve_key} = $last_pos;
                push @$head, $eve;
            }
            print join(",", @$head) . "\n";
        }

    Is there any way to do this using Perl?

    Read the article

  • Pick up relevant information from a string using regular expressions (C# 3.0)

    - by Newbie
    Hi, I have a situation. I have been given some file names which follow the pattern <filename>YYYYMMDD<fileextension>. Some valid file names that satisfy the pattern are:

        xxx20100326.xls, xxx2v20100326.csv, x_20100326.xls, xy2z_abc_20100326_xyz.csv,
        abc.xyz.20100326.doc, ab2.v.20100326.doc, abc.v.20100326_xyz.xls

    Whatever the case, I need to pick up the dates only, so for all the cases above the output will be 20100326. I am trying to achieve the same but no luck. Here is what I have done so far:

        string testdata = "x2v20100326.csv";
        string strYYYY = @"\d{4}";
        string strMM = @"(1[0-2]|0[1-9])";
        string strDD = @"(3[0-1]|[1-2][0-9]|0[1-9])";
        string regExPattern = @"\A" + strYYYY + strMM + strDD + @"\Z";
        Regex regex = new Regex(regExPattern);
        Match match = regex.Match(testdata);
        if (match.Success)
        {
            string result = match.Groups[0].Value;
        }

    I am using C# 3.0 and .NET Framework 3.5. Please help, it is very urgent. Thanks in advance.
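
    A minimal sketch of the likely fix: \A and \Z anchor the pattern to the entire string, but the date is embedded inside the file name, so the anchored pattern can never match. Dropping the anchors lets the date be found anywhere (assuming the names contain no other 8-digit runs):

        // Same building blocks as above, just without the \A...\Z anchors.
        string pattern = strYYYY + strMM + strDD;
        Match match = Regex.Match("x2v20100326.csv", pattern);
        if (match.Success)
        {
            string result = match.Value;   // "20100326"
        }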

    Read the article

  • Can someone answer this for me?

    - by Dcurvez
    Okay, I am totally stuck. I have been getting some help off and on throughout this project and am anxious to get this problem solved so I can continue on with the rest of it. I have a grid view that is set to save to a file, and has the option to import into Excel. I keep getting an error of this:

        InvalidCastException was unhandled. At least one element in the source array could not be cast down to the destination array type.

    Can anyone tell me, in layman's terms, what this error is speaking of? This is the code I am trying to use:

        Dim fileName As String = ""
        Dim dlgSave As New SaveFileDialog
        dlgSave.Filter = "Text files (*.txt)|*.txt|CSV Files (*.csv)|*.csv"
        dlgSave.AddExtension = True
        dlgSave.DefaultExt = "txt"
        If dlgSave.ShowDialog = Windows.Forms.DialogResult.OK Then
            fileName = dlgSave.FileName
            SaveToFile(fileName)
        End If
        End Sub

        Private Sub SaveToFile(ByVal fileName As String)
            If DataGridView1.RowCount > 0 AndAlso DataGridView1.Rows(0).Cells(0) IsNot Nothing Then
                Dim stream As New System.IO.FileStream(fileName, IO.FileMode.Append, IO.FileAccess.Write)
                Dim sw As New System.IO.StreamWriter(stream)
                For Each row As DataGridViewRow In DataGridView1.Rows
                    Dim arrLine(9) As String
                    Dim line As String
                    row.Cells.CopyTo(arrLine, 0)   ' this is the line the debugger flags
                    line = arrLine(0)
                    line &= ";" & arrLine(1)
                    line &= ";" & arrLine(2)
                    line &= ";" & arrLine(3)
                    line &= ";" & arrLine(4)
                    line &= ";" & arrLine(5)
                    line &= ";" & arrLine(6)
                    line &= ";" & arrLine(7)
                    line &= ";" & arrLine(8)
                    sw.WriteLine(line)
                Next
                sw.Flush()
                sw.Close()
            End If

    I marked the line where it shows in debug, and I really don't see what all the fuss is about LOL

    Read the article

  • Updating Cells in a DataTable

    - by Maxim Z.
    I'm writing a small app to do a little processing on some cells in a CSV file I have. I've figured out how to read and write CSV files with a library I found online, but I'm having trouble: the library parses CSV files into a DataTable, but when I try to change a cell of the table, it isn't saving the change in the table! Below is the code in question. I've separated the process into multiple variables and renamed some of the things to make it easier to debug for this question.

    Code inside the loop:

        string debug1 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString();
        string debug2 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString().Trim();
        string debug3 = readIn.Rows[i].ItemArray[numColumnToCopyFrom].ToString().Trim();
        string towrite = debug2 + ", " + debug3;
        readIn.Rows[i].ItemArray[numColumnToCopyTo] = (object)towrite;

    After the loop:

        readIn.AcceptChanges();

    When I debug my code, I see that towrite is being formed correctly and everything's OK, except that the row isn't updated. Why isn't it working? I have a feeling that I'm making a simple mistake here: the last time I worked with DataTables (quite a long time ago), I had similar problems. If you're wondering why I'm adding another comma in towrite, it's because I'm combining a street address field with a zip code field - I hope that's not messing anything up. My code is kind of messy, as I'm only trying to edit one file to make a small fix, so sorry.
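
    A minimal sketch of the likely cause: DataRow.ItemArray returns a copy of the row's values, so assigning into that copy never touches the table. Indexing the row directly does:

        // Write through the row's indexer instead of the ItemArray copy.
        readIn.Rows[i][numColumnToCopyTo] = towrite;

        // Or, equivalently, assign a whole replacement array back:
        object[] values = readIn.Rows[i].ItemArray;
        values[numColumnToCopyTo] = towrite;
        readIn.Rows[i].ItemArray = values;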

    Read the article

  • RIA Services FilterDescriptor

    - by Mohit
    I have a FilterDescriptor as shown below. The PropertyPath is of type char?. I get the following InvalidOperationException when I filter by entering a value Y:

        InnerException {System.InvalidOperationException: A FilterDescriptor with its PropertyPath equal to 'Valid' cannot be evaluated.
        ---> System.ArgumentException: Operator 'StartsWith' incompatible with operand types 'Char?' and 'Char?'
        ---> System.ArgumentNullException: Value cannot be null.
        Parameter name: method
           at System.Linq.Expressions.Expression.ValidateCallArgs(Expression instance, MethodInfo method, ReadOnlyCollection`1& arguments)
           at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, IEnumerable`1 arguments)
           at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, Expression[] arguments)
           at System.Windows.Controls.LinqHelper.GenerateMethodCall(String methodName, Expression left, Expression right)
           at System.Windows.Controls.LinqHelper.GenerateStartsWith(Expression left, Expression right)
           at System.Windows.Controls.LinqHelper.BuildFilterExpression(Expression propertyExpression, FilterOperator filterOperator, Expression valueExpression, Boolean isCaseSensitive, Expression& filterExpression)
           --- End of inner exception stack trace ---
           --- End of inner exception stack trace ---}
        System.Exception {System.InvalidOperationException}
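
    A hedged guess at the cause: StartsWith is a string operation, so the generated LINQ expression can't find a StartsWith method on Char?, which is why the expression builder throws. If equality is acceptable for a single-character flag, switching the descriptor's operator sidesteps it (a sketch assuming the RIA/Silverlight FilterDescriptor API, with the question's property name):

        // Compare the Char? property for equality instead of StartsWith.
        var filter = new FilterDescriptor("Valid", FilterOperator.IsEqualTo, 'Y');
        dataSource.FilterDescriptors.Add(filter);   // dataSource = your DomainDataSource

    Alternatively, exposing a string projection of the char property on the entity would make StartsWith meaningful.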

    Read the article

  • Nhibernate equivalent of LinqToEntitiesDomainService in RIA

    - by VexXtreme
    Hi. When using Entity Framework with RIA domain services, domain services inherit from LinqToEntitiesDomainService, which, I suppose, allows you to make LINQ queries at a low level (client-side) that propagate into the ORM, meaning that all queries are performed on the database and only relevant results are retrieved to the server and thus the client. Example:

        var query = context.GetCustomersQuery().Where(x => x.Age > 50);

    Right now we have a domain service which inherits from DomainService and retrieves data through an NHibernate session, as in:

        virtual public IQueryable<Customer> GetCustomers()
        {
            return sessionManager.Session.Linq<Customer>();
        }

    The problem with this approach is that it's impossible to make specific queries without retrieving entire tables to the server (or client) and filtering them there. Is there a way to make LINQ querying work with NHibernate over RIA like it works with EF? If not, we're willing to switch to EF because of this, because the performance impact would be just too severe. Thanks

    Read the article

  • Using a Generic Repository pattern with fluent nHibernate

    - by alex
    I'm currently developing a medium-sized application which will access 2 or more SQL databases, on different sites etc. I am considering using something similar to this: http://mikehadlow.blogspot.com/2008/03/using-irepository-pattern-with-linq-to.html

    However, I want to use Fluent NHibernate in place of LINQ to SQL (and of course NHibernate.Linq). Is this viable? How would I go about configuring this? Where would my mapping definitions go, etc.?

    This application will eventually have many facets: a web UI, a WCF library and Windows applications/services. Also, for example on a "product" table, would I create a "ProductManager" class that has methods like GetProduct, GetAllProducts etc.?

    Any pointers are gratefully received.
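
    For flavour, a minimal sketch of a generic repository over an NHibernate ISession, along the lines of the linked article but swapped to NHibernate (IRepository<T> is assumed to be your own interface; Session.Linq<T>() is the NHibernate.Linq contrib extension of that era):

        public class Repository<T> : IRepository<T>
        {
            private readonly ISession session;

            public Repository(ISession session)
            {
                this.session = session;
            }

            public IQueryable<T> GetAll()
            {
                return session.Linq<T>();      // deferred; filters compose into SQL
            }

            public T GetById(object id)
            {
                return session.Get<T>(id);
            }

            public void Save(T entity)
            {
                session.SaveOrUpdate(entity);
            }
        }

    Fluent NHibernate mapping classes (ClassMap<T> subclasses) would live in their own namespace or assembly and get picked up when the SessionFactory is built, so the repository itself never sees them.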

    Read the article

  • ASP.NET MVC2 Data Access Layer

    - by Paul
    For a small/medium-sized project I'm trying to figure out what is the 'ideal' way to have a domain layer and a data access layer. My opinions on coupling tend more towards the view that the domain models should not be tightly coupled with the database layer; in other words, the data access layer shouldn't actually know anything about the domain objects.

    I've been looking at LINQ to SQL, and it wants to use its own models that it creates, so it ends up VERY tightly coupled. Whilst I love the way you use LINQ to SQL in code, I really don't like the way it wants to make its own domain objects.

    What are some alternatives that I should consider? I tried NHibernate but I did not like the way I had to query and get different objects. I honestly love the syntax and the way you use LINQ, I just don't want it to be so tightly coupled to domain objects.
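
    One pattern that fits what's described, sketched with illustrative names: keep the generated LINQ to SQL entities private to the repository and map them to hand-written POCOs at the boundary, so nothing above the data layer ever sees the generated types:

        // OrdersDataContext and its Orders table are the generated LINQ to SQL
        // types (hypothetical names); Order is the hand-written domain POCO.
        public class OrderRepository
        {
            private readonly OrdersDataContext db;

            public OrderRepository(OrdersDataContext db)
            {
                this.db = db;
            }

            public IList<Order> GetAll()
            {
                return db.Orders
                         .Select(o => new Order { Id = o.Id, Total = o.Total })
                         .ToList();   // materialize before the context is disposed
            }
        }

    The mapping is boilerplate, but it's the price of keeping the domain model ignorant of the persistence layer.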

    Read the article

  • Ninject Given Path's format is not supported

    - by David Osborn
    The Ninject initialization works fine when I run my application directly from VS2010, but if I deploy the application to our custom "plugin" environment I get this error when I run the app and it tries to initialize Ninject:

        Error during initialization
        The given path's format is not supported.
        ERROR   : The given path's format is not supported.
        Type    : NotSupportedException
        Location: System.String CanonicalizePath(System.String, Boolean)
        Stack Trace:
           at System.Security.Util.StringExpressionSet.CanonicalizePath(String path, Boolean needFullPath)
           at System.Security.Util.StringExpressionSet.CreateListFromExpressions(String[] str, Boolean needFullPath)
           at System.Security.Permissions.FileIOPermission.AddPathList(FileIOPermissionAccess access, AccessControlActions control, String[] pathListOrig, Boolean checkForDuplicates, Boolean needFullPath, Boolean copyPathList)
           at System.Security.Permissions.FileIOPermission..ctor(FileIOPermissionAccess access, String[] pathList, Boolean checkForDuplicates, Boolean needFullPath)
           at System.IO.Path.GetFullPath(String path)
           at Ninject.Modules.ModuleLoader.NormalizePath(String path)
           at Ninject.Modules.ModuleLoader.GetFilesMatchingPattern(String pattern)
           at Ninject.Modules.ModuleLoader.b__0(String pattern)
           at System.Linq.Enumerable.d__142.MoveNext()
           at System.Linq.Lookup`2.Create[TSource](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer)
           at System.Linq.GroupedEnumerable`3.GetEnumerator()
           at Ninject.Modules.ModuleLoader.LoadModules(IEnumerable`1 patterns)
           at Ninject.KernelBase.Load(IEnumerable`1 filePatterns)
           at Ninject.KernelBase..ctor(IComponentContainer components, INinjectSettings settings, INinjectModule[] modules)
           at Ninject.KernelBase..ctor(INinjectModule[] modules)
           at MyApp.Ioc.ResolveType.Initialize()
           at MyApp.Program.Run()
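
    A hedged sketch of a workaround, assuming Ninject 2.x: the trace shows the failure inside ModuleLoader's file-pattern scanning, which the kernel runs at construction to load extension modules from disk. Disabling extension loading and handing the kernel its modules explicitly avoids the path canonicalization entirely (MyModule is illustrative):

        // Skip Ninject's directory scan for extension/plugin assemblies.
        var settings = new NinjectSettings { LoadExtensions = false };
        IKernel kernel = new StandardKernel(settings, new MyModule());

    Whether this is acceptable depends on whether the plugin environment relies on that scanning; if it does, the underlying problem is whatever unusual path (UNC, URI-style, etc.) the host reports as the app's base directory.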

    Read the article

  • In OpenRasta is it possible to Pattern match multiple key/value pairs?

    - by Scott Littlewood
    Is it possible in OpenRasta to have a URI pattern that allows an array of values of the same key to be submitted and mapped to a handler method accepting an array of the query parameters?

    Example: return all the contacts named Dave Smith from a collection:

        HTTP GET /contacts?filterBy=first&filterValue=Dave&filterBy=last&filterValue=Smith

    With a configuration along these lines (what syntax would be best for the URI string pattern matching? suggestions welcome):

        ResourceSpace.Has.ResourcesOfType<List<ContactResource>>()
            .AtUri("/contacts")
            .And.AtUri("/contacts?filterBy[]={filterBy}[]&filterValue[]={fv}[]")   // Option 1
            .And.AtUri("/contacts?filterBy={filterBy}[]&fv={fv}[]");               // Option 2

    This would map to a handler method of:

        public object Get(params Filter[] filters)
        {
            /* create a Linq Expression based on the filters using dynamic linq,
               query the repository using the Linq */
            return Query.All<Contact>().Where(c => c.First == "Dave" && c.Last == "Smith").ToResource();
        }

    where Filter is defined by:

        public class Filter
        {
            public string FilterBy { get; set; }
            public string FilterValue { get; set; }
        }

    Read the article

  • Is testability alone justification for dependency injection?

    - by fearofawhackplanet
    The advantages of DI, as far as I am aware, are:

    - Reduced dependencies
    - More reusable code
    - More testable code
    - More readable code

    Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a LINQ to SQL dbml. I can't make my orders repository generic, as it performs mapping between the LINQ Order entity and my own Order POCO domain class.

    Since the OrderRepository by necessity is dependent on a specific LINQ to SQL DataContext, parameter passing of the DataContext can't really be said to make the code reusable or reduce dependencies in any meaningful way. It also makes the code harder to read: to instantiate the repository I now need to write

        new OrdersRepository(new MyLinqDataContext())

    which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code.

    So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
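
    One common "third way", sketched with the question's own names plus a hypothetical default: give the repository a default constructor that news up the real DataContext, and keep the injecting constructor for tests. Consumers stay clean, and a test can still pass a stub context:

        public class OrdersRepository
        {
            private readonly MyLinqDataContext context;

            // Production code uses this: no DataContext visible to callers.
            public OrdersRepository() : this(new MyLinqDataContext()) { }

            // Tests (or an IoC container) use this to substitute a context.
            public OrdersRepository(MyLinqDataContext context)
            {
                this.context = context;
            }
        }

    This is sometimes called "poor man's DI"; it trades a hard-wired default for the readability the question worries about, while keeping the seam that makes unit testing possible.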

    Read the article
