Search Results

Search found 6329 results on 254 pages for 'linq to csv'.

Page 149 of 254

  • Can a class inherit from LambdaExpression in .NET? Or is this not recommended?

    - by d.
    Consider the following code (C# 4.0):

        public class Foo : LambdaExpression { }

    This throws the following design-time error:

        Foo does not implement inherited abstract member System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller)

    There's absolutely no problem with

        public class Foo : Expression { }

    but, out of curiosity and for the sake of learning, I've searched in Google System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) and guess what: zero results returned (when was the last time you saw that?). Needless to say, I haven't found any documentation on this method anywhere else. As I said, one can easily inherit from Expression; on the other hand LambdaExpression, while not marked as sealed (Expression<TDelegate> inherits from it), seems to be designed to prevent inheriting from it. Is this actually the case? Does anyone out there know what this method is about?

    EDIT (1): More info based on the first answers - If you try to implement Accept, the editor (C# 2010 Express) automatically gives you the following stub:

        protected override Expression Accept(System.Linq.Expressions.ExpressionVisitor visitor)
        {
            return base.Accept(visitor);
        }

    But you still get the same error. If you try to use a parameter of type StackSpiller directly, the compiler throws a different error: System.Linq.Expressions.Compiler.StackSpiller is inaccessible due to its protection level.

    EDIT (2): Based on other answers, inheriting from LambdaExpression is not possible, so the question as to whether or not it is recommended becomes irrelevant. I wonder if, in cases like this, the error message should be "Foo cannot implement inherited abstract member System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) because [reasons go here]"; the current error message (as some answers prove) seems to tell me that all I need to do is implement Accept (which I can't do).
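
    As an aside (not from the question or its answers): the pattern that is supported is deriving a custom node from Expression itself, typically as a reducible "extension" node. A minimal sketch for .NET 4, with illustrative names:

        using System;
        using System.Linq.Expressions;

        // A custom expression node. Deriving from Expression works because, unlike
        // LambdaExpression, it exposes no abstract member tied to internal types;
        // a custom node reports ExpressionType.Extension and overrides Type.
        public class FooExpression : Expression
        {
            private readonly Type type;

            public FooExpression(Type type)
            {
                this.type = type;
            }

            public override ExpressionType NodeType
            {
                get { return ExpressionType.Extension; }
            }

            public override Type Type
            {
                get { return type; }
            }

            // Extension nodes are normally reducible to standard nodes so that
            // the compiler and expression visitors can process them.
            public override bool CanReduce
            {
                get { return true; }
            }

            public override Expression Reduce()
            {
                return Expression.Default(type);   // placeholder reduction for the sketch
            }
        }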

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XMLReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this: we create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement.

        XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

        foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus skip elements. Here is the streaming version:

        foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                               from order in customer.Elements("Orders").Elements("Order")
                               select new { Account = customer.Attribute("AccountNumber").Value
                                          , OrderDate = order.Attribute("OrderDate").Value }
                               ))
        {
            Output0Buffer.AddRow();
            Output0Buffer.AccountNumber = xdata.Account;
            Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
        }

    These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: what it does is return an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.

        static IEnumerable<XElement> StreamReader(String filename, string elementName)
        {
            using (XmlReader xr = XmlReader.Create(filename))
            {
                xr.MoveToContent();
                while (xr.Read()) // Reads the first element
                {
                    while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                    {
                        XElement node = (XElement)XElement.ReadFrom(xr);
                        yield return node;
                    }
                }
                xr.Close();
            }
        }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check whether we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • Java 1.4 singleton containing a mutable field

    - by Philippe
    Hi, I'm working on a legacy Java 1.4 project, and I have a factory that instantiates a CSV file parser as a singleton. In my CSV file parser, however, I have a HashSet that will store objects created from each line of my CSV file. All of that will be used by a web application, and users will be uploading CSV files, possibly concurrently. Now my question is: what is the best way to prevent my list of objects from being modified by two users at the same time? So far, I'm doing the following:

        final class MyParser {
            private File csvFile = null;
            private static Set myObjects = Collections.synchronizedSet(new HashSet());

            public synchronized void setFile(File file) {
                this.csvFile = file;
            }

            public void parse() {
                FileReader fr = null;
                try {
                    fr = new FileReader(csvFile);
                    synchronized (myObjects) {
                        myObjects.clear();
                        while (...) { // foreach line of my CSV, create a "MyObject"
                            myObjects.add(new MyObject(...));
                        }
                    }
                } catch (Exception e) {
                    // ...
                }
            }
        }

    Should I leave the lock only on the myObjects Set, or should I declare the whole parse() method as synchronized? Also, how should I synchronize both the setting of the csvFile and the parsing? I feel like my current design is broken because threads could modify the csv file several times while a possibly long parse process is running. I hope I'm being clear enough, because I myself am a bit confused on those multi-synchronization issues. Thanks ;-)

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python, though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using Python 2.6 on Ubuntu Karmic. Assume the script is invoked as: script_name.py directory_path

        import csv, sys, os, glob

        # Can I declare that the function accepts a dictionary as first arg?
        def getItemValue(item, key, defval)
            return !item.haskey(key) ? defval : item[key]

        dirname = sys.argv[1]

        # declare some default values here
        weight, is_male, default_city_id = 100, true, 1

        # fetch some data from a database table into a nested dictionary, indexed by a string
        curr_dict = load_dict_from_db('foo')

        # iterate through all the files matching *.csv in the specified folder
        for infile in glob.glob(os.path.join(dirname, '*.csv')):
            # get the file name (without the '.csv' extension)
            code = infile[0:-4]
            # open file, and iterate through the rows of the current file (a CSV file)
            f = open(infile, 'rt')
            try:
                reader = csv.reader(f)
                for row in reader:
                    # lookup the id for the code in the dictionary
                    id = curr_dict[code]['id']
                    name = row['name']
                    address1 = row['address1']
                    address2 = row['address2']
                    city_id = getItemValue(row, 'city_id', default_city_id)
                    # insert row to database table
            finally:
                f.close()

    I have the following questions: Is the code written in a Pythonic enough way (is there a better way of implementing it)? Given a table with a schema like the one shown below, how may I write a Python function that fetches data from the table and returns it in a dictionary indexed by string (name)? How can I insert the row data into the table (actually I would like to use a transaction if possible, and commit just before the file is closed)? Table schema:

        create table demo (id int, name varchar(32), weight float, city_id int);

    BTW, my backend database is PostgreSQL.

    Read the article

  • How can I extend a LINQ-to-SQL class without having to make changes every time the code is generated

    - by csharpnoob
    Hi, Update from comment: I need to extend the LINQ-to-SQL classes with my own properties and don't want to touch any generated classes. Better suggestions are welcome. But I also don't want to redo all the attribute assignments every time the LINQ-to-SQL classes change; if Visual Studio generates a new attribute on a class, I want my own extended attributes kept separate, with the new ones inherited from the class itself.

    Original question: I'm not sure if it's possible. I have a class Car and a class MyCar extending the class Car. Class MyCar also has a string list. Only difference. How can I cast any Car object to a MyCar object without assigning all attributes by hand? Like:

        Car car = new Car();
        MyCar mcar = (MyCar) car;

    or

        MyCar mcar = new MyCar(car);

    or however I can extend Car with my own variables without always having to do

        Car car = new Car();
        MyCar mcar = new MyCar();
        mcar.name = car.name;
        mcar.xyz = car.xyz;
        ...

    Thanks.
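
    One mechanism commonly suggested for this (not part of the original question) is partial classes: the LINQ-to-SQL designer emits its entity classes as partial, so hand-written members can live in a separate file that regeneration never touches. A minimal sketch, assuming a designer-generated entity named Car; the file name and the ExtraOptions property are illustrative:

        // Car.Extensions.cs -- hand-written, never regenerated by the designer.
        using System.Collections.Generic;

        public partial class Car
        {
            private readonly List<string> extraOptions = new List<string>();

            // Hypothetical extra state that survives regeneration because this
            // definition is merged into the generated Car class at compile time.
            public List<string> ExtraOptions
            {
                get { return extraOptions; }
            }
        }

    With this approach there is no MyCar and no copying of properties by hand; the extra members simply become part of the same Car type the designer maintains.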

    Read the article

  • Python program to search for specific strings in hash values (coding help)

    - by Diego
    Trying to write a program that searches hash values for specific strings (input by user) and returns the hash if the search query is present in that line. Doing this to kind of just learn Python a bit more, but it could be a real world application used by an HR department to search a .csv resume database for specific words in each resume. I'd like this program to look through a .csv file that has three entries per line (id#;applicant name;resume text). I set it up so that it creates a hash, then created a string for the resume text hash entry, and am trying to use the .find() function to return the entire hash for each instance. What I'd like is: if the word "gpa" is used as a search query and it is found in s['resumetext'] for three applicants (rows in the .csv file), it prints the id, name, and resume for every row that has it (all three applicants). As it is right now, my program prints the first row in the .csv file (print resume['id'], resume['name'], resume['resumetext']) no matter what the search query is, whether it's in the resumetext or not. Lastly, are there better ways of doing this, by searching Word documents, PDFs and .txt files in a folder for specific words using Python? (I've just started reading about the re module and am wondering if this may be the route, rather than putting everything in a .csv file.)

        def find_details(id2find):
            resumes_f = open("resume_data.csv")
            for each_line in resumes_f:
                s = {}
                (s['id'], s['name'], s['resumetext']) = each_line.split(";")
                resumetext = str(s['resumetext'])
                if resumetext.find(id2find):
                    return(s)
                else:
                    print "No data matches your search query. Please try again"

        searchquery = raw_input("please enter your search term")
        resume = find_details(searchquery)
        if resume:
            print resume['id'], resume['name'], resume['resumetext']

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into the database; the csv file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert data into the database, so right now I am hitting the database for each line of data present in the csv file. The database hit count is therefore 100K, which is not good from a performance point of view. Current code:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            // Parse the large CSV file to get data and initiate insertion into the schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);

                // Then, for each line: bind the parameters
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);

                // Execute the statement
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query and then hit it once to populate the database with the inserts. Any suggestions!

    Note: This is the exact sample code that I am using, but the CSV file has more fields, not only id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks!

    Read the article

  • How to lazy load a data structure (python)

    - by Anton Geraschenko
    I have some way of building a data structure (out of some file contents, say):

        def loadfile(FILE):
            return # some data structure created from the contents of FILE

    So I can do things like

        puppies = loadfile("puppies.csv") # wait for loadfile to work
        kitties = loadfile("kitties.csv") # wait some more
        print len(puppies)
        print puppies[32]

    In the above example, I wasted a bunch of time actually reading kitties.csv and creating a data structure that I never used. I'd like to avoid that waste without constantly checking if not kitties whenever I want to do something. I'd like to be able to do

        puppies = lazyload("puppies.csv") # instant
        kitties = lazyload("kitties.csv") # instant
        print len(puppies) # wait for loadfile
        print puppies[32]

    So if I don't ever try to do anything with kitties, loadfile("kitties.csv") never gets called. Is there some standard way to do this? After playing around with it for a bit, I produced the following solution, which appears to work correctly and is quite brief. Are there some alternatives? Are there drawbacks to using this approach that I should keep in mind?

        class lazyload:
            def __init__(self, FILE):
                self.FILE = FILE
                self.F = None
            def __getattr__(self, name):
                if not self.F:
                    print "loading %s" % self.FILE
                    self.F = loadfile(self.FILE)
                return object.__getattribute__(self.F, name)

    What might be even better is if something like this worked:

        class lazyload:
            def __init__(self, FILE):
                self.FILE = FILE
            def __getattr__(self, name):
                self = loadfile(self.FILE)  # this never gets called again
                                            # since self is no longer a
                                            # lazyload instance
                return object.__getattribute__(self, name)

    But this doesn't work because self is local. It actually ends up calling loadfile every time you do anything.

    Read the article

  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example:

        import csv

        hashes = []
        for row in csv.reader(open('old.csv','rb')):
            hashes.append(hash(str(row)))

        for row in csv.reader(open('new.csv','rb')):
            if hash(str(row)) not in hashes:
                print 'Not found'

    But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, and thus I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?

    Read the article

  • Java giving incorrect year values

    - by whistler
    Something very, very strange is occurring in my program, and I'm wondering if anyone out there has seen this occur before. And, if so, how to fix it. Basically, I am parsing a csv file... no problem there. One column contains a date and I am taking it in as a String and changing it to a Date object. Again, no problem there. The code is as follows:

        SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yy hh:mm");
        Date initialDate = new Date();
        try {
            initialDate = dateFormat.parse(rows.get(0)[8]);
            System.out.println(initialDate);
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    Of course, I'm parsing other columns as well (and those are working fine). So, when I run my program for a small csv file (2.8 MB), the dates come out (i.e. are parsed) perfectly. However, when I run the program for a large csv file (25 MB), the dates are a hot mess. For example, take a look at the year values I am getting (the following is just a tiny portion of the println output from the code above):

        1000264 at Sun Nov 05 15:30:00 EST 2186
        1000320 at Sat Mar 04 17:30:00 EST 2169
        1000347 at Sat Apr 01 09:45:00 EDT 2169
        1000413 at Tue Jul 09 13:00:00 EDT 2182
        1000638 at Fri Dec 11 13:45:00 EST 2167
        1000667 at Wed Dec 10 10:00:00 EST 2188
        1000690 at Mon Jan 02 13:00:00 EST 2169
        1000843 at Thu Feb 11 13:30:00 EST 2196

    In actuality, the years are in the realm of 1990-2006 or so. Again, this does not happen with the small csv file. Does anyone know what's going on here and how I can fix it? I need to process the large csv file (the small one was just for testing purposes). By request, here are the actual dates in the csv file and, after that, the values given by the code above:

        5/20/03 15:30
        5/20/03 15:30
        8/30/04 9:00
        8/30/04 9:00
        12/20/04 10:30
        12/20/04 10:30

        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Thu Dec 08 09:00:00 EST 2196
        Tue Dec 12 10:30:00 EST 2186
        Tue Dec 12 10:30:00 EST 2186

    Read the article

  • Why is the page still caching even after the no-cache headers have been sent?

    - by Matthew Grasinger
    I've done a ton of research on this and have asked many people for help, and still no success. Here are the details... I'm involved in developing a website that pulls data from various data files, combines them in a temp .csv file, and then graphs it using a popular graphing library: dygraphs. The bulk of the website is written in PHP. The parameters that determine the data that is graphed are stored in the user's session, the .csv is named after the user's session and available for download, and then the .csv file is written by a script that passes it to the dygraphs object. And we've found, even with the no-cache headers sent:

        header("Cache-Control: no-cache, must-revalidate");
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");

    many users experience, in the middle of a session (if enough different graphs are generated), the page displaying an older, static rendering of the page (data they had graphed earlier in the session) as if it were cached and loaded instead of getting a new request. It only gets weirder though: I've checked using developer tools in both Firefox and Chrome and both browsers are receiving the no-cache headers just fine. Even when the problem occurs, if you view the page source, the source is the correct content (a table/legend is also dynamically created using PHP; the source shows the correct table, but what is rendered is older content). The page begins to render correctly until the graph is about to be displayed, and then shows the older content. The older content displays as if it were a completely static overlay: the cached graph does not have the same dynamic features (roll-over data point display, zoom and pan, etc.), and it is as if the correct page were somewhere beneath it (the download button for the csv file moves depending on how large the table is; the older, static page does nothing if you click the download .csv button, but if you can manage to find the one in the page beneath it you can click it and still download the .csv, and the data in the .csv is correct). It is one of the strangest things I've seen in development thus far. Some other relevant facts are that all the problems I've personally experienced occurred while I was using Chrome. None of these symptoms have been reported by Firefox users. IE users have had the same problems (IE users are forced to use Chrome Frame). I'm at my wits' end at this point. We've sent the PHP headers; we've tried setting the cache profile for PHP on IIS to "DisableCache" (or whatever); we've tried sending a random query string to the results page; we've tried all the appropriate meta tags; all with no success.

    Read the article

  • How to write linq query in Entity FrameWork but my table includes Foreignkey?

    - by programmerist
    I have a table; it includes 3 foreign key fields, like this:

        My table: Kartlar
            ID (Pkey)
            RehberID (Fkey)
            KampanyaID (Fkey)
            BrimID (Fkey)
            Name
            Detail

    How can I write an entity query with LINQ for:

        select * from Kartlar where RehberID=123 and KampanyaID=345 and BrimID=567

    BUT please be careful: I cannot see RehberID, KampanyaID, BrimID in the entity; they are foreign keys. I should use the EntityKey, but how?
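
    Not part of the original question, just a sketch of the shape the answers usually take: with an independent association the foreign key columns are hidden, so you filter through the navigation properties instead. The context name MyEntities and the navigation/key property names (Rehber.ID and so on) are assumptions about how the model was generated:

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // MyEntities, the Kartlar set and the navigation properties below are
                // assumed names; adjust them to whatever the EF designer generated.
                using (var ctx = new MyEntities())
                {
                    var kartlar = from k in ctx.Kartlar
                                  where k.Rehber.ID == 123
                                     && k.Kampanya.ID == 345
                                     && k.Brim.ID == 567
                                  select k;

                    foreach (var kart in kartlar)
                    {
                        Console.WriteLine("{0}: {1}", kart.ID, kart.Name);
                    }
                }
            }
        }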

    Read the article

  • Why does Windows XP (during a rename operation) report file already exists when it doesn't?

    - by Hawk
    From the command-line:

        E:\menu\html\tom\val\.svn\tmp\text-base>ver
        Microsoft Windows [Version 5.2.3790]

        E:\menu\html\tom\val\.svn\tmp\text-base>dir
         Volume in drive E is DATA
         Volume Serial Number is F047-F44B

         Directory of E:\menu\html\tom\val\.svn\tmp\text-base

        12/23/2010  04:36 PM    <DIR>          .
        12/23/2010  04:36 PM    <DIR>          ..
        12/23/2010  04:01 PM                 0 wtf.com3.csv.svn-base
                       1 File(s)              0 bytes
                       2 Dir(s)  170,780,262,400 bytes free

        E:\menu\html\tom\val\.svn\tmp\text-base>rename wtf.com3.csv.svn-base com3.csv.svn-base
        A duplicate file name exists, or the file cannot be found.

        E:\menu\html\tom\val\.svn\tmp\text-base>dir
         Volume in drive E is DATA
         Volume Serial Number is F047-F44B

         Directory of E:\menu\html\tom\val\.svn\tmp\text-base

        12/23/2010  04:36 PM    <DIR>          .
        12/23/2010  04:36 PM    <DIR>          ..
        12/23/2010  04:01 PM                 0 wtf.com3.csv.svn-base
                       1 File(s)              0 bytes
                       2 Dir(s)  170,753,064,960 bytes free

        E:\menu\html\tom\val\.svn\tmp\text-base>

    I don't know what to do about this, as there is no other file in this directory. Why does Windows XP report that there is already a file here named com3.csv.svn-base when there is clearly no other file here?

    Read the article

  • Broken characters in filenames only in some directories

    - by Kaivosukeltaja
    We have a web server running CentOS 5.8 that uses SVN for version control. When trying to switch to the latest revision, we got an error about the filenames of files in an upload directory:

        svn: Error converting entry in directory 'adm/emails/upload' to UTF-8
        svn: Valid UTF-8 data (hex: 54 79) followed by invalid UTF-8 sequence (hex: f6 6b 69 72)

    Upon investigating, we noticed there were some files that had broken filenames:

        $ ls ~/public_html/adm/emails/upload/
        Ty?el?m?trendit.csv  Ty?kirja1.csv

    To get the update completed quickly, we simply mved the files into our home directory. Surprisingly, their filenames looked fine in their new location:

        $ ls ~/
        Työelämätrendit.csv  Työkirja1.csv

    After the update we moved them back to where they were and their filenames were broken again. What could cause this and how can we fix it? The system's locale is set to LANG=en_US.UTF-8.

    Read the article

  • Regarding Unix Move Command

    - by user38993
    I need to write a Unix shell script, tran.sh, that moves the csv input files from the /exp/files folder to the /exp/ready directory. The csv input files are written to the /exp/files folder by an FTP server whose behavior I cannot trivially change. In the tran.sh shell script I need to ensure, before moving a csv input file out of the /exp/files directory, that no other process is still writing to that file. How can I do it?

    Read the article

  • Powershell: Conditionally changing objects in the pipeline

    - by axk
    I'm converting a CSV to SQL inserts and there's a nullable text column which I need to quote in case it is not NULL. I would write something like the following for the conversion:

        Import-Csv data.csv | foreach {
            "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$($_.COL2));" >> inserts.sql
        }

    But I can't figure out how to add an additional tier into the pipeline to check whether COL2 is not equal to 'NULL' and to quote it in such cases. How do I achieve such behavior?

    Read the article

  • Getting users LastLogonTime on Live@edu using powershell

    - by Eagles
    I am trying to get a csv file of all users in a Live@edu environment with a LastLogonTime, but I am having some issues. Here is my script:

        foreach ($i in (Get-Mailbox -ResultSize unlimited))
        {
            Get-MailboxStatistics -LastLogonTime $i.DistinguishedName |
                where {$_.LastLogonTime} |
                select-object MailboxOwnerID,Name,LastLogonTime |
                export-csv -path "c:\filepath\UserLastLogon.csv"
        }

    I get the error:

        A positional parameter cannot be found that accepts argument
        '[email protected],OU=domain.edu,OU=Microsoft Exchange Hosted Organizations,DC=prod,DC=exchangelabs,DC=com'.
        + CategoryInfo          : InvalidArgument: (:) [Get-MailboxStatistics], ParameterBindingException
        + FullyQualifiedErrorId : PositionalParameterNotFound,Get-MailboxStatistics

    Any help would be great!

    Read the article

  • SQL SERVER – Grouping by Multiple Columns to Single Column as A String

    - by pinaldave
    One of the most common questions I receive in email is how to group multiple column data into comma separated values in a single row, grouping by another column. I have previously blogged about it in the following two blog posts. However, neither addresses the following exact problem.

    Comma Separated Values (CSV) from Table Column
    Comma Separated Values (CSV) from Table Column – Part 2

    The question comes in many different formats, but in the following image I am demonstrating the same question in simple words. This is the most popular question on my Facebook page as well. (Example) Here is the sample script to build the sample dataset.

        CREATE TABLE TestTable (ID INT, Col VARCHAR(4))
        GO
        INSERT INTO TestTable (ID, Col)
        SELECT 1, 'A'
        UNION ALL
        SELECT 1, 'B'
        UNION ALL
        SELECT 1, 'C'
        UNION ALL
        SELECT 2, 'A'
        UNION ALL
        SELECT 2, 'B'
        UNION ALL
        SELECT 2, 'C'
        UNION ALL
        SELECT 2, 'D'
        UNION ALL
        SELECT 2, 'E'
        GO
        SELECT * FROM TestTable
        GO

    Here is the solution which will build an answer to the above question.

        -- Get CSV values
        SELECT t.ID,
               STUFF((SELECT ',' + s.Col
                      FROM TestTable s
                      WHERE s.ID = t.ID
                      FOR XML PATH('')), 1, 1, '') AS CSV
        FROM TestTable AS t
        GROUP BY t.ID
        GO

    I hope this is an easy solution. I am going to point to this blog post in the future for all the similar questions. Final clean up act:

        -- Clean up
        DROP TABLE TestTable
        GO

    Here is the question back to you - Is there any better way to write the above script? Please leave a comment and I will write a separate blog post with due credit.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL XML
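
    Given the search topic of this page, it may be worth a side note (not from Pinal's post) that the same grouping is compact in LINQ to Objects once the rows are in memory. A hedged C# sketch using the same sample data as the table above:

        using System;
        using System.Linq;

        class CsvGroupingDemo
        {
            static void Main()
            {
                // Rows mirroring the TestTable sample data from the post.
                var rows = new[]
                {
                    new { ID = 1, Col = "A" }, new { ID = 1, Col = "B" }, new { ID = 1, Col = "C" },
                    new { ID = 2, Col = "A" }, new { ID = 2, Col = "B" }, new { ID = 2, Col = "C" },
                    new { ID = 2, Col = "D" }, new { ID = 2, Col = "E" }
                };

                // Group by ID and join each group's Col values into one CSV string.
                var csvPerId = rows
                    .GroupBy(r => r.ID)
                    .Select(g => new { ID = g.Key, CSV = string.Join(",", g.Select(r => r.Col).ToArray()) });

                foreach (var item in csvPerId)
                {
                    Console.WriteLine("{0}: {1}", item.ID, item.CSV);   // 1: A,B,C  then  2: A,B,C,D,E
                }
            }
        }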

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 skus and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines?

    The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different csv feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via ftp)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .Net stack is preferable but not mandatory. I've been looking at BizTalk but it may take too long to develop and deploy. Any suggestions?

    Read the article

  • Twitter Integration in Windows 8

    - by Joe Mayo
    Glenn Versweyveld, @Depechie, blogged about Twitter Integration in Windows 8. The post describes how to use WinRtAuthorizer to perform OAuth authentication with LINQ to Twitter. If you’re using LINQ to Twitter with Windows 8, the WinRtAuthorizer is definitely the way to go. It lets you perform the entire OAuth dance with a single method call, which is a huge time savings and simplification of your code. In addition to Glenn’s excellent post, I’ve posted a sample app named MetroWinRtAuthorizerDemo.zip on the LINQ to Twitter Samples Page. @JoeMayo

    Read the article

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity... Once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0... A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it was VS2005... Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008... I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors...

    - Do not use LINQ keywords (from, in, select, orderby...).
    - Do not use LINQ standard operators in the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties).
        o This means you cannot use XML Literals.
    - Do not use nullable types under the following declarative form: Dim myInt as Integer?   Using Dim myInt as Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the "ternary If operator"... not the IIf method but this one: If(condition, truepart, falsepart).

    As a side note, I talked about not using the LINQ keywords nor the extension methods, but this doesn't mean not to use LINQ in this scenario. LINQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class "Enumerable" as a helper class... This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed... That's pretty much all I see, but I could have missed a few... If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here...)! Happy coding all!
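
    To make the "helper class" trick above concrete, here is a small sketch (not from the original post) of calling Enumerable.Where as an ordinary static method, with no extension-method syntax, no lambda and no type inference. It is written in C# 2.0-compatible syntax for brevity; the VB8 equivalent would pass AddressOf a named function instead of the anonymous delegate, and the sample data is made up:

        using System;
        using System.Collections.Generic;
        using System.Linq;   // requires a reference to System.Core

        class StaticLinqDemo
        {
            static void Main()
            {
                List<int> numbers = new List<int>();
                numbers.Add(3);
                numbers.Add(42);
                numbers.Add(7);
                numbers.Add(100);

                // Enumerable.Where called as a plain static method: the first argument
                // is the sequence being queried, the second is the predicate delegate.
                IEnumerable<int> big = Enumerable.Where<int>(
                    numbers,
                    delegate(int n) { return n > 10; });

                foreach (int n in big)
                {
                    Console.WriteLine(n);   // prints 42 and 100
                }
            }
        }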

    Read the article
