Search Results

Search found 1639 results on 66 pages for 'csv'.

Page 26/66 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Using MS Standalone profiler in VS2008 Professional

    - by fishdump
    I am trying to profile my .NET dll while running it from VS unit testing tools but I am having problems. I am using the standalone command-line profiler as VS2008 Professional does not come with an inbuilt profiler. I have an open CMD window and have run the following commands (I instrumented it earlier which is why vsinstr gave the warning that it did): C:\...\BusinessRules\obj\Debug>vsperfclrenv /samplegclife /tracegclife /globalsamplegclife /globaltracegclife Enabling VSPerf Sampling Attach Profiling. Allows to 'attaching' to managed applications. Current Profiling Environment variables are: COR_ENABLE_PROFILING=1 COR_PROFILER={0a56a683-003a-41a1-a0ac-0f94c4913c48} COR_LINE_PROFILING=1 COR_GC_PROFILING=2 C:\...\BusinessRules\obj\Debug>vsinstr BusinessRules.dll Microsoft (R) VSInstr Post-Link Instrumentation 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Error VSP1018 : VSInstr does not support processing binaries that are already instrumented. C:\...\BusinessRules\obj\Debug>vsperfcmd /start:trace /output:foo.vsp Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. C:\...\BusinessRules\obj\Debug> I then ran the unit tests that exercised the instrumented code. When the unit tests were complete, I did... C:\...\BusinessRules\obj\Debug>vsperfcmd /shutdown Microsoft (R) VSPerf Command Version 9.0.30729 x86 Copyright (C) Microsoft Corp. All rights reserved. Waiting for process 4836 ( C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\vstesthost.exe) to shutdown... It was clearly waiting for VS2008 to close so I closed it... Shutting down the Profile Monitor ------------------------------------------------------------ C:\...\BusinessRules\obj\Debug> All looking good, there was a 3.2mb foo.vsp file in the directory. I next did... C:\...\BusinessRules\obj\Debug>vsperfreport foo.vsp /summary:all Microsoft (R) VSPerf Report Generator, Version 9.0.0.0 Copyright (C) Microsoft Corporation. All rights reserved. VSP2340: Environment variables were not properly set during profiling run and managed symbols may not resolve. Please use vsperfclrenv before profiling. File opened Successfully opened the file. A report file, foo_Header.csv, has been generated. A report file, foo_MarksSummary.csv, has been generated. A report file, foo_ProcessSummary.csv, has been generated. A report file, foo_ThreadSummary.csv, has been generated. Analysis completed A report file, foo_FunctionSummary.csv, has been generated. A report file, foo_CallerCalleeSummary.csv, has been generated. A report file, foo_CallTreeSummary.csv, has been generated. A report file, foo_ModuleSummary.csv, has been generated. C:\...\BusinessRules\obj\Debug> Notice the warning about environment variables and using vsperfclrenv? But I had run it! Maybe I used the wrong switches? I don't know. Anyway, loading the csv files into Excel or using the perfconsole tool gives loads of useful info with useless symbol names: *** Loading commands from: C:\temp\PerfConsole\bin\commands\timebytype.dll *** Adding command: timebytype *** Loading commands from: C:\temp\PerfConsole\bin\commands\partition.dll *** Adding command: partition Welcome to PerfConsole 1.0 (for bugs please email: [email protected]), for help type: ?, for a quickstart type: ?? 
> load foo.vsp *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled *** Couldn't match to either expected sampled or instrumented profile schema, defaulting to sampled *** Profile loaded from 'foo.vsp' into @foo > > functions @foo >>>>> Function Name Exclusive Inclusive Function Name Module Name -------------------- -------------------- -------------- --------------- 900,798,600,000.00 % 900,798,600,000.00 % 0x0600003F 20397910 14,968,500,000.00 % 44,691,540,000.00 % 0x06000040 14736385 8,101,253,000.00 % 14,836,330,000.00 % 0x06000041 5491345 3,216,315,000.00 % 6,876,929,000.00 % 0x06000042 3924533 <snip> 71,449,430.00 % 71,449,430.00 % 0x0A000074 42572 52,914,200.00 % 52,914,200.00 % 0x0A000073 0 14,791.00 % 13,006,010.00 % 0x0A00007B 0 199,177.00 % 6,082,932.00 % 0x2B000001 5350072 2,420,116.00 % 2,420,116.00 % 0x0A00008A 0 836.00 % 451,888.00 % 0x0A000045 0 9,616.00 % 399,436.00 % 0x0A000039 0 18,202.00 % 298,223.00 % 0x06000046 1479900 I am so close to being able to find the bottlenecks, if only it will give me the function and module names instead of hex numbers! What am I doing wrong? --- Alistair.
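
    A likely explanation (an assumption, not something confirmed in the question): the vsperfclrenv variables are set only in that CMD window, while the unit tests run inside vstesthost.exe, which is launched by Visual Studio and never inherits them - hence the VSP2340 warning and the unresolved hex names. A hedged sketch of one workaround is to set the variables machine-wide (or start devenv.exe from that same window) before profiling:

        REM sketch - set the profiling environment machine-wide so vstesthost.exe picks it up
        vsperfclrenv /globaltraceon
        REM restart Visual Studio (a reboot is the safest way to propagate the variables)
        vsperfcmd /start:trace /output:foo.vsp
        REM run the unit tests, then shut down and clean up
        vsperfcmd /shutdown
        vsperfclrenv /globaloff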

    Read the article

  • How to make this .htaccess rule case insensitive?

    - by alex
    This is a rule in my .htaccess # those CSV files are under the DOCROOT ... so let's hide 'em <FilesMatch "\.CSV$"> Order Allow,Deny Deny from all </FilesMatch> I've noticed however that if there is a file with a lowercase or mixed case extension of CSV, it will be ignored by the rule and displayed. How do I make this case insensitive? I hope it doesn't come down to "\.(?:CSV|csv)$" (which I'm not sure would even work, and doesn't cover all bases) Note: The files are under the docroot, and are uploaded automatically there by a 3rd party service, so I'd prefer to implement a rule my end instead of bothering them. Had I set this site up though, I'd go for above the docroot. Thanks
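
    Since FilesMatch patterns are PCRE, one way to avoid listing every case variant (a sketch, untested against this particular setup) is an inline case-insensitivity flag:

        <FilesMatch "(?i)\.csv$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>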

    Read the article

  • Java applet - access denied to file on same web server

    - by me_here
    I've written a simple Java applet to generate a technical image based upon some data in a CSV file. I'm passing in the CSV file as a parameter to the applet: <applet code = "assaymap.AssayMapApplet" archive = "http://localhost/applet_test/AssayMap.jar" height="600px" width="800px"> <param name="csvFile" value="http://localhost/applet_test/test.csv"> </applet> As far as I understood applet security restrictions, an applet should be able to read data from the host it's served from. These applets here http://www.jalview.org/examples/applets.html are using the same approach of passing in a text data file as a parameter. So I'm not sure why my own applet isn't working. The error message that gets thrown is: Error with processing the CSV data. java.security.AccessControlException: access denied (java.io.FilePermission http:\localhost\applet_test\test.csv read)
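
    The stack trace shows a java.io.FilePermission check, which suggests (an inference from the error, not stated in the question) that the CSV is being opened as a local File rather than fetched over HTTP. A minimal sketch of streaming it as a URL instead, assuming the csvFile param always holds a full URL on the applet's own host:

        // sketch - read the CSV over HTTP; same-origin reads need no extra permissions
        // (assumes java.net.URL and java.io.* are imported in the applet class)
        URL csvUrl = new URL(getParameter("csvFile"));
        BufferedReader in = new BufferedReader(new InputStreamReader(csvUrl.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            // parse each CSV line here
        }
        in.close();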

    Read the article

  • MySQL INTO OUTFILE override existing file?

    - by Derek Organ
    I've written a big sql script that creates a CSV file. I want to call a cronjob every night to create a fresh CSV file and have it available on the website. Say for example I'm storing my file in '/home/sites/example.com/www/files/backup.csv' and my SQL is SELECT * INTO OUTFILE '/home/sites/example.com/www/files/backup.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM ( .... MySQL gives me an error when the file already exists: File '/home/sites/example.com/www/files/backup.csv' already exists. Is there a way to make MySQL overwrite the file? I could have PHP detect if the file exists and delete it before creating it again but it would be more succinct if I can do it directly in MySQL.
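
    SELECT ... INTO OUTFILE refuses to overwrite by design (partly as a safety measure), so the usual workaround is to rotate or delete the old file before the export runs. A hedged sketch of a nightly wrapper script, with the path, credentials, and export.sql name as placeholders:

        #!/bin/sh
        # sketch - rotate last night's export, then regenerate it
        OUT=/home/sites/example.com/www/files/backup.csv
        [ -f "$OUT" ] && mv "$OUT" "$OUT.prev"
        mysql -u backup_user -p'secret' mydb < /home/sites/example.com/export.sql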

    Read the article

  • How can I get around MySQL Errcode 13 with SELECT INTO OUTFILE?

    - by Ryan Olson
    I am trying to dump the contents of a table to a csv file using a MySQL SELECT INTO OUTFILE statement. If I do: SELECT column1, column2 INTO OUTFILE 'outfile.csv' FIELDS TERMINATED BY ',' FROM table_name; outfile.csv will be created on the server in the same directory this database's files are stored in. However, when I change my query to: SELECT column1, column2 INTO OUTFILE '/data/outfile.csv' FIELDS TERMINATED BY ',' FROM table_name; I get: ERROR 1 (HY000): Can't create/write to file '/data/outfile.csv' (Errcode: 13) Errcode 13 is a permissions error, but I still get it even if I change ownership of /data to mysql:mysql and give it 777 permissions. MySQL is running as user "mysql". Strangely, I can create the file in /tmp, just not in any other directory I've tried, even with permissions set such that user mysql should be able to write to the directory. This is MySQL 5.0.75 running on Ubuntu.
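
    On Ubuntu a very common cause of Errcode 13 (an assumption here, since the question doesn't mention it) is AppArmor confining mysqld to its own data directories regardless of filesystem permissions. A sketch of the usual fix is to allow the target directory in the mysqld profile and reload it:

        # add inside the profile block in /etc/apparmor.d/usr.sbin.mysqld
        /data/ r,
        /data/** rw,

        # then reload the profile
        sudo /etc/init.d/apparmor reload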

    Read the article

  • Java 1.4 singleton containing a mutable field

    - by Philippe
    Hi, I'm working on a legacy Java 1.4 project, and I have a factory that instantiates a csv file parser as a singleton. In my csv file parser, however, I have a HashSet that will store objects created from each line of my CSV file. All that will be used by a web application, and users will be uploading CSV files, possibly concurrently. Now my question is: what is the best way to prevent my list of objects from being modified by 2 users? So far, I'm doing the following: final class MyParser { private File csvFile = null; private static Set myObjects = Collections.synchronizedSet(new HashSet()); public synchronized void setFile(File file) { this.csvFile = file; } public void parse() { FileReader fr = null; try { fr = new FileReader(csvFile); synchronized(myObjects) { myObjects.clear(); while(...) { // foreach line of my CSV, create a "MyObject" myObjects.add(new MyObject(...)); } } } catch (Exception e) { //... } } } Should I leave the lock only on the myObjects Set, or should I declare the whole parse() method as synchronized? Also, how should I synchronize - both - the setting of the csvFile and the parsing? I feel like my current design is broken because threads could modify the csv file several times while a possibly long parse process is running. I hope I'm being clear enough, because I'm a bit confused myself on those multi-synchronization issues. Thanks ;-)
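
    One way to sidestep the shared-state problem entirely (a design sketch, not the only option) is to drop the static field and have parse() build and return a fresh Set for the file it is given, so concurrent uploads never share mutable state:

        // sketch - Java 1.4 style (no generics); each call gets its own result set
        public Set parse(File csvFile) {
            Set results = new HashSet();
            FileReader fr = null;
            try {
                fr = new FileReader(csvFile);
                // for each line of the CSV, create a "MyObject":
                //     results.add(new MyObject(...));
            } catch (Exception e) {
                // log or rethrow as appropriate
            } finally {
                if (fr != null) {
                    try { fr.close(); } catch (IOException ignored) {}
                }
            }
            return results;
        }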

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using python 2.6 on Ubuntu Karmic. Assume the script is invoked as: script_name.py directory_path import csv, sys, os, glob # Can I declare that the function accepts a dictionary as first arg? def getItemValue(item, key, defval) return !item.haskey(key) ? defval : item[key] dirname = sys.argv[1] # declare some default values here weight, is_male, default_city_id = 100, true, 1 # fetch some data from a database table into a nested dictionary, indexed by a string curr_dict = load_dict_from_db('foo') #iterate through all the files matching *.csv in the specified folder for infile in glob.glob( os.path.join(dirname, '*.csv') ): #get the file name (without the '.csv' extension) code = infile[0:-4] # open file, and iterate through the rows of the current file (a CSV file) f = open(infile, 'rt') try: reader = csv.reader(f) for row in reader: #lookup the id for the code in the dictionary id = curr_dict[code]['id'] name = row['name'] address1 = row['address1'] address2 = row['address2'] city_id = getItemValue(row, 'city_id', default_city_id) # insert row to database table finally: f.close() I have the following questions: Is the code written in a Pythonic enough way (is there a better way of implementing it)? Given a table with a schema like shown below, how may I write a Python function that fetches data from the table and returns is in a dictionary indexed by string (name). How can I insert the row data into the table (actually I would like to use a transaction if possible, and commit just before the file is closed) Table schema: create table demo (id int, name varchar(32), weight float, city_id int); BTW, my backend database is postgreSQL
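
    A short sketch of the idiomatic pieces (assuming the CSV files have a header row, which the pseudocode's row['name'] lookups imply): dict.get covers the defval logic, and csv.DictReader gives rows keyed by column name, since csv.reader only yields plain lists:

        import csv, glob, os, sys

        dirname = sys.argv[1]
        default_city_id = 1

        for infile in glob.glob(os.path.join(dirname, '*.csv')):
            # file name without path or '.csv' extension
            code = os.path.splitext(os.path.basename(infile))[0]
            f = open(infile, 'rt')
            try:
                for row in csv.DictReader(f):                       # rows come back as dicts
                    name = row['name']
                    city_id = row.get('city_id', default_city_id)   # dict.get replaces getItemValue
                    # ... insert the row into the database here ...
            finally:
                f.close()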

    Read the article

  • Python program to search for specific strings in hash values (coding help)

    - by Diego
    Trying to write a code that searches hash values for specific string's (input by user) and returns the hash if searchquery is present in that line. Doing this to kind of just learn python a bit more, but it could be a real world application used by an HR department to search a .csv resume database for specific words in each resume. I'd like this program to look through a .csv file that has three entries per line (id#;applicant name;resume text) I set it up so that it creates a hash, then created a string for the resume text hash entry, and am trying to use the .find() function to return the entire hash for each instance. What i'd like is if the word "gpa" is used as a search query and it is found in s['resumetext'] for three applicants(rows in .csv file), it prints the id, name, and resume for every row that has it.(All three applicants) As it is right now, my program prints the first row in the .csv file(print resume['id'], resume['name'], resume['resumetext']) no matter what the searchquery is, whether it's in the resumetext or not. lastly, are there better ways to doing this, by searching word documents, pdf's and .txt files in a folder for specific words using python (i've just started reading about the re module and am wondering if this may be the route, rather than putting everything in a .csv file.) def find_details(id2find): resumes_f=open("resume_data.csv") for each_line in resumes_f: s={} (s['id'], s['name'], s['resumetext']) = each_line.split(";") resumetext = str(s['resumetext']) if resumetext.find(id2find): return(s) else: print "No data matches your search query. Please try again" searchquery = raw_input("please enter your search term") resume = find_details(searchquery) if resume: print resume['id'], resume['name'], resume['resumetext']
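
    One detail worth noting (inferred from the code, not stated in the question): str.find() returns a character index, -1 when the substring is absent, so "if resumetext.find(id2find):" is true for almost every line and the function returns the first row it reaches. A sketch that collects every matching row instead, using the in operator for the membership test:

        def find_details(id2find):
            matches = []
            resumes_f = open("resume_data.csv")
            for each_line in resumes_f:
                s = {}
                (s['id'], s['name'], s['resumetext']) = each_line.split(";")
                if id2find.lower() in s['resumetext'].lower():   # case-insensitive substring test
                    matches.append(s)
            resumes_f.close()
            return matches

        searchquery = raw_input("please enter your search term")
        results = find_details(searchquery)
        if not results:
            print "No data matches your search query. Please try again"
        for resume in results:
            print resume['id'], resume['name'], resume['resumetext']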

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario I want to parse a large CSV file and inserts data into the database, csv file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert data into Database and so right now I am hitting database for each line of data present in csv file so currently database hit count is 100K which is not good from performance point of view. Current Code: public function initiateInserts() { //Open Large CSV File(min 100K rows) for parsing. $this->fin = fopen($file,'r') or die('Cannot open file'); //Parsing Large CSV file to get data and initiate insertion into schema. while (($data=fgetcsv($this->fin,5000,";"))!==FALSE) { $query = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (:id, :code, :connectid, :connectcode)"; $stmt = $this->prepare($query); // Then, for each line : bind the parameters $stmt->bindValue(':id', $data[0], PDO::PARAM_INT); $stmt->bindValue(':code', $data[1], PDO::PARAM_INT); $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT); $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT); // Execute the statement $stmt->execute(); $this->checkForErrors($stmt); } } I am looking for a way wherein instead of hitting Database for every row of data, I can prepare the query and than hit it once and populate Database with the inserts. Any Suggestions !!! Note: This is the exact sample code that I am using but CSV file has more no. of field and not only id, code, connectid and connectcode but I wanted to make sure that I am able to explain the logic and so have used this sample code here. Thanks !!!
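
    A hedged sketch of one way to cut the round trips: accumulate rows and send them as multi-row INSERTs (the 500-row batch size, the four-column layout, and the $this->prepare()/$this->checkForErrors() wrappers are assumptions carried over from the sample code):

        public function initiateInserts()
        {
            $this->fin = fopen($file, 'r') or die('Cannot open file');
            $batch = array();
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $batch[] = array_slice($data, 0, 4);
                if (count($batch) >= 500) {          // flush every 500 rows
                    $this->flushBatch($batch);
                    $batch = array();
                }
            }
            if (count($batch) > 0) {                 // flush the remainder
                $this->flushBatch($batch);
            }
        }

        private function flushBatch(array $batch)
        {
            // one INSERT with many VALUES groups instead of one INSERT per row
            $groups = implode(',', array_fill(0, count($batch), '(?,?,?,?)'));
            $stmt = $this->prepare("INSERT INTO dt_table (id, code, connectid, connectcode) VALUES $groups");
            $params = array();
            foreach ($batch as $row) {
                foreach ($row as $value) {
                    $params[] = $value;
                }
            }
            $stmt->execute($params);
            $this->checkForErrors($stmt);
        }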

    Read the article

  • How to lazy load a data structure (python)

    - by Anton Geraschenko
    I have some way of building a data structure (out of some file contents, say): def loadfile(FILE): return # some data structure created from the contents of FILE So I can do things like puppies = loadfile("puppies.csv") # wait for loadfile to work kitties = loadfile("kitties.csv") # wait some more print len(puppies) print puppies[32] In the above example, I wasted a bunch of time actually reading kitties.csv and creating a data structure that I never used. I'd like to avoid that waste without constantly checking if not kitties whenever I want to do something. I'd like to be able to do puppies = lazyload("puppies.csv") # instant kitties = lazyload("kitties.csv") # instant print len(puppies) # wait for loadfile print puppies[32] So if I don't ever try to do anything with kitties, loadfile("kitties.csv") never gets called. Is there some standard way to do this? After playing around with it for a bit, I produced the following solution, which appears to work correctly and is quite brief. Are there some alternatives? Are there drawbacks to using this approach that I should keep in mind? class lazyload: def __init__(self,FILE): self.FILE = FILE self.F = None def __getattr__(self,name): if not self.F: print "loading %s" % self.FILE self.F = loadfile(self.FILE) return object.__getattribute__(self.F, name) What might be even better is if something like this worked: class lazyload: def __init__(self,FILE): self.FILE = FILE def __getattr__(self,name): self = loadfile(self.FILE) # this never gets called again # since self is no longer a # lazyload instance return object.__getattribute__(self, name) But this doesn't work because self is local. It actually ends up calling loadfile every time you do anything.
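
    A variant sketch that caches the load and also works as a new-style class (where len() and indexing bypass __getattr__ and need explicit forwarding); the is-None check avoids re-loading when loadfile() returns something falsy such as an empty list:

        class LazyLoad(object):
            """Defer loadfile(FILE) until the data is first touched."""
            def __init__(self, FILE):
                self.FILE = FILE
                self._data = None
            def _load(self):
                if self._data is None:
                    print "loading %s" % self.FILE
                    self._data = loadfile(self.FILE)
                return self._data
            def __getattr__(self, name):
                return getattr(self._load(), name)
            def __len__(self):
                return len(self._load())
            def __getitem__(self, key):
                return self._load()[key]

        puppies = LazyLoad("puppies.csv")   # instant; nothing is read yet
        print len(puppies)                  # triggers the real loadfile() call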

    Read the article

  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example: import csv hashes = [] for row in csv.reader(open('old.csv','rb')): hashes.append( hash(str(row)) ) for row in csv.reader(open('new.csv','rb')): if hash(str(row)) not in hashes: print 'Not found' But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, and thusly I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
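
    Two hedged guesses at the failure: a plain Python list makes every "not in hashes" check a full linear scan, and the built-in hash() is only machine-word sized, so across millions of rows collisions (rows wrongly treated as already seen) become plausible. A sketch using a set of fixed-size digests over the raw fields instead:

        import csv
        import hashlib

        def row_key(row):
            # digest of the fields themselves; assumes no field contains the '\x1f' separator
            return hashlib.md5('\x1f'.join(row)).digest()

        seen = set()
        for row in csv.reader(open('old.csv', 'rb')):
            seen.add(row_key(row))            # 16 bytes per row, constant-time lookups

        for row in csv.reader(open('new.csv', 'rb')):
            if row_key(row) not in seen:
                print 'Not found:', row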

    Read the article

  • Java giving incorrect year values

    - by whistler
    Something very, very strange is occurring in my program, and I'm wondering if anyone out there has seen this occur before. And, if so, how to fix it. Basically, I am parsing an csv file...no problem there. One column contains a date and I am taking it in as a String and changing to a Date object. Again, no problem there. The code is as follows: SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yy hh:mm"); Date initialDate = new Date(); try { initialDate = dateFormat.parse(rows.get(0)[8]); System.out.println(initialDate); } catch (ParseException e) { // TODO Auto-generated catch block e.printStackTrace(); } Of course, I'm parsing other columns as well (and those are working fine). So, when I run my program for a small csv file (2.8 MB), the dates come out (i.e. are parsed) perfectly. However, when I run the program for a large csv file (25 MB), the dates are a hot mess. For example, take a look at the year values I am getting (the following is just a tiny portion of the println output from the code above): 1000264 at Sun Nov 05 15:30:00 EST 2186 1000320 at Sat Mar 04 17:30:00 EST 2169 1000347 at Sat Apr 01 09:45:00 EDT 2169 1000413 at Tue Jul 09 13:00:00 EDT 2182 1000638 at Fri Dec 11 13:45:00 EST 2167 1000667 at Wed Dec 10 10:00:00 EST 2188 1000690 at Mon Jan 02 13:00:00 EST 2169 1000843 at Thu Feb 11 13:30:00 EST 2196 In actuality, the years are in the realm of 1990-2006 or so. Again, this does not happen with the small csv file. Does anyone know what's going on here and how I can fix it? I need to process the large csv file (the small one was just for testing purposes). By request, here are the actual dates in the csv file and after that the value given by the code above: 5/20/03 15:30 5/20/03 15:30 8/30/04 9:00 8/30/04 9:00 12/20/04 10:30 12/20/04 10:30 Sun Nov 05 15:30:00 EST 2186 Sun Nov 05 15:30:00 EST 2186 Sun Nov 05 15:30:00 EST 2186 Thu Dec 08 09:00:00 EST 2196 Tue Dec 12 10:30:00 EST 2186 Tue Dec 12 10:30:00 EST 2186

    Read the article

  • Why is the page still caching even after the no-cache headers have been sent?

    - by Matthew Grasinger
    I've done a ton of research on this and have asked many people with help and still no success. Here are the details... I'm involved in developing a website that pulls data from various data files, combines them in a temp .csv file, and then is graphed using a popular graphing library: dygraphs. The bulk of the website is written in PHP. The parameters that determine the data that is graphed are stored in the users session, the .csv is named after the users session and available for download, and then the .csv file is written in a script that passes it to the dygraphs object. And we've found, even with the no-cache headers sent: header("Cache-Control: no-cache, must-revalidate"); header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); Many users experience in the middle of a session, (if enough different graphs are generated) the page displaying an older, static rendering of the page (data they had graphed earlier in the session) as if it were cached and loaded instead of getting a new request. It only gets weirder though: I've checked using developer tools in both Firefox and Chrome and both browsers are receiving the no-cache headers just fine; Even when the problem occurs if you view the page source, the source is the correct content (a table/legend is also dynamically created using php, the source shows the correct table, but what is rendered is older content); the page begins to render correctly until the graph is about to be display, and then shows the older content; the older content displays as if it were a completely static overlay--the cached graph does not have the same dynamic features (roll over data point display, zoom and pan, etc.) And it is as if the correct page were somewhere beneath it (the download button for the csv file moves depending on how large the table is. The older, static page does nothing if you click the download .csv button, but if you can manage to find the one in the page beneath it you can click and still download the .csv. The data in the .csv is correct) It is one of the strangest things I've seen in development thus far. Some other relevant facts are that all the problems I've personally experience occurred while I was using Chrome. Non of these symptoms have been reported by Firefox users. IE users have had the same problems (IE users are forced to use chrome frame). I'm at my wits end at this point. We've sent the php headers; we've tried setting the cache profile for php on IIS as "DisableCache" (or whatever); we've tried sending a random query string to the results page; we've tried all the appropriate meta tags--all with no success.

    Read the article

  • Why does Windows XP (during a rename operation) report file already exists when it doesn't?

    - by Hawk
    From the command-line: E:\menu\html\tom\val\.svn\tmp\text-base>ver Microsoft Windows [Version 5.2.3790] E:\menu\html\tom\val\.svn\tmp\text-base>dir Volume in drive E is DATA Volume Serial Number is F047-F44B Directory of E:\menu\html\tom\val\.svn\tmp\text-base 12/23/2010 04:36 PM <DIR> . 12/23/2010 04:36 PM <DIR> .. 12/23/2010 04:01 PM 0 wtf.com3.csv.svn-base 1 File(s) 0 bytes 2 Dir(s) 170,780,262,400 bytes free E:\menu\html\tom\val\.svn\tmp\text-base>rename wtf.com3.csv.svn-base com3.csv.svn-base A duplicate file name exists, or the file cannot be found. E:\menu\html\tom\val\.svn\tmp\text-base>dir Volume in drive E is DATA Volume Serial Number is F047-F44B Directory of E:\menu\html\tom\val\.svn\tmp\text-base 12/23/2010 04:36 PM <DIR> . 12/23/2010 04:36 PM <DIR> .. 12/23/2010 04:01 PM 0 wtf.com3.csv.svn-base 1 File(s) 0 bytes 2 Dir(s) 170,753,064,960 bytes free E:\menu\html\tom\val\.svn\tmp\text-base>` I don't know what to do about this, as there is no other file in this directory. Why does Windows XP report that there is already a file here named com3.csv.svn-base when there is clearly no other file here?

    Read the article

  • Broken characters in filenames only in some directories

    - by Kaivosukeltaja
    We have a web server running CentOS 5.8 that uses SVN for version control. When trying to switch to the latest revision, we got an error about the filenames of files in an upload directory: svn: Error converting entry in directory 'adm/emails/upload' to UTF-8 svn: Valid UTF-8 data (hex: 54 79) followed by invalid UTF-8 sequence (hex: f6 6b 69 72) Upon investigating, we noticed there were some files that had broken filenames: $ ls ~/public_html/adm/emails/upload/ Ty?el?m?trendit.csv Ty?kirja1.csv To get the update completed quickly, we simply mved the files into our home directory. Surprisingly, their filenames looked fine in their new location: $ ls ~/ Työelämätrendit.csv Työkirja1.csv After the update we moved them back to where they were and their filenames were broken again. What could cause this and how can we fix it? The system's locale is set to LANG=en_US.UTF-8.
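
    The hex bytes in the error (f6 is ö in Latin-1) suggest those two files were created with Latin-1 names on an otherwise UTF-8 system; whether that is really the cause here is an assumption, but if it is, convmv can re-encode the names in place:

        # dry run first (convmv only renames when --notest is given)
        convmv -f latin1 -t utf-8 ~/public_html/adm/emails/upload/*.csv
        convmv -f latin1 -t utf-8 --notest ~/public_html/adm/emails/upload/*.csv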

    Read the article

  • Regarding Unix Move Command

    - by user38993
    I need to write a Unix shell script, tran.sh, that moves the csv input files from the /exp/files folder to the /exp/ready directory. The csv input files are written to /exp/files by an FTP server whose behavior I cannot trivially change. In tran.sh I need to ensure, before moving a csv input file out of /exp/files, that no other process is still writing to it. How can I do that?
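
    A sketch of one common approach (it assumes the fuser utility is available on this Unix; its flags differ between platforms): skip any file a process still has open and let the next cron run pick it up:

        #!/bin/sh
        # sketch - move only the csv files nothing is still writing to
        for f in /exp/files/*.csv; do
            [ -e "$f" ] || continue
            if ! fuser -s "$f" 2>/dev/null; then   # fuser exits 0 while the file is in use
                mv "$f" /exp/ready/
            fi
        done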

    Read the article

  • Powershell: Conditionally changing objects in the pipeline

    - by axk
    I'm converting a CSV to SQL inserts and there's a null-able text column which I need to quote in case it is not NULL. I would write something like the following for the conversion: Import-Csv data.csv | foreach { "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$($_.COL2));" >> inserts.sql } But I can't figure out how to add an additional tier into the pipeline to look if COL2 is not equal to 'NULL' and to quote it in such cases. How do I achieve such behavior?
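
    One hedged way to do it inline (the single-quote doubling is an assumption about the target SQL dialect): compute the COL2 text with a conditional before building the INSERT string:

        # sketch - quote COL2 unless it is the literal string NULL
        Import-Csv data.csv | ForEach-Object {
            $col2 = if ($_.COL2 -eq 'NULL') { 'NULL' } else { "'" + ($_.COL2 -replace "'", "''") + "'" }
            "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$col2);"
        } | Out-File -Append inserts.sql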

    Read the article

  • Getting users LastLogonTime on Live@edu using powershell

    - by Eagles
    I am trying to get a csv file of all users in a Live@edu environment with a LastLogonTime, but I am having some issues. Here is my script: foreach ($i in (Get-Mailbox -ResultSize unlimited)) { Get-MailboxStatistics -LastLogonTime $i.DistinguishedName | where {$_.LastLogonTime} | select-object MailboxOwnerID,Name,LastLogonTime | export-csv -path "c:\filepath\UserLastLogon.csv" } I get the error: A positional parameter cannot be found that accepts argument '[email protected],OU=domain.edu,OU=Microsoft Exchange Hosted Organizations,DC=prod,DC=exchangelabs,DC=com'. +Category Info: InvalidArgument: (:) [Get-MailboxStatistics], ParameterBindingException +FullyQualifiedErrorId : PositionalParameterNotFound,Get-MailboxStatistics Any help would be great!
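
    Part of the problem (an observation from the error, not from any official doc) is that Get-MailboxStatistics has no -LastLogonTime parameter, and Export-Csv inside the loop rewrites the file on every pass. A hedged sketch of the usual pipeline form instead:

        Get-Mailbox -ResultSize Unlimited |
            Get-MailboxStatistics |
            Where-Object { $_.LastLogonTime } |
            Select-Object MailboxOwnerID, DisplayName, LastLogonTime |
            Export-Csv -Path "c:\filepath\UserLastLogon.csv" -NoTypeInformation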

    Read the article

  • SQL SERVER – Grouping by Multiple Columns to Single Column as A String

    - by pinaldave
    One of the most common questions I receive in email is how to group multiple column data in comma separate values in a single row grouping by another column. I have previously blogged about it in following two blog posts. However, both aren’t addressing the following exact problem. Comma Separated Values (CSV) from Table Column Comma Separated Values (CSV) from Table Column – Part 2 The question comes in many different formats but in following image I am demonstrating the same question in simple words. This is the most popular question on my Facebook page as well. (Example) Here is the sample script to build the sample dataset. CREATE TABLE TestTable (ID INT, Col VARCHAR(4)) GO INSERT INTO TestTable (ID, Col) SELECT 1, 'A' UNION ALL SELECT 1, 'B' UNION ALL SELECT 1, 'C' UNION ALL SELECT 2, 'A' UNION ALL SELECT 2, 'B' UNION ALL SELECT 2, 'C' UNION ALL SELECT 2, 'D' UNION ALL SELECT 2, 'E' GO SELECT * FROM TestTable GO Here is the solution which will build an answer to the above question. -- Get CSV values SELECT t.ID, STUFF( (SELECT ',' + s.Col FROM TestTable s WHERE s.ID = t.ID FOR XML PATH('')),1,1,'') AS CSV FROM TestTable AS t GROUP BY t.ID GO I hope this is an easy solution. I am going to point to this blog post in the future for all the similar questions. Final Clean Up Act -- Clean up DROP TABLE TestTable GO Here is the question back to you - Is there any better way to write above script? Please leave a comment and I will write a separate blog post with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL XML

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 skus and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines? The current solution looks like this: 1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees 2) New products are added to the database via import 3) Twice a week data is extracted into a giant Excel spreadsheet 4) The Excel file adjusts pricing based on some simple algorithms 5) The Excel file exports about six different csv feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage a. Each feed is a variant of the product which different field names and formatting b. Pricing levels differ between feeds c. Some products are not sent to all feeds 6) Orders are manually parsed and the inventory is adjusted as needed once product is sold The new solution should: 1) Import data from ODBC, CSV and NetSuite (CSV via ftp) 2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25) 3) Ensure margins are being met 4) Format and generate a bunch of CSV and XML feeds 5) Perhaps upload feeds to shopping engines automatically What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .Net stack is preferable but not mandatory. I’ve been looking at BizTalk but it may take too long to develop and deploy. Any suggestions?

    Read the article

  • Regex gurus! here's a teaser: mixed thousands separators and csv's

    - by chichilatte
    I've got a string like... "labour 18909, liberals 12,365,conservatives 14,720" ...and I'd like a regex which can get rid of any thousands separators so I can pull out the numbers easily. Or even a regex which could give me a tidy array like: (labour => 18909, liberals => 12365, conservatives => 14720) Oh I wish I had the time to figure out regexes! Maybe I'll buy one as a toilet book, mmm.
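
    A sketch in Python (the array format in the question looks like PHP, but the regexes themselves carry over): strip commas that sit in thousands position, then pull out the name/number pairs:

        import re

        s = "labour 18909, liberals 12,365,conservatives 14,720"
        # drop a comma only when it is wedged between a digit and a 3-digit group
        cleaned = re.sub(r'(?<=\d),(?=\d{3}\b)', '', s)
        # grab (name, number) pairs from what is left
        pairs = dict(re.findall(r'([A-Za-z]+)\s+(\d+)', cleaned))
        # pairs == {'labour': '18909', 'liberals': '12365', 'conservatives': '14720'}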

    Read the article

  • Why am I getting this error : "ExecuteReader: Connection property has not been initialized." [migrated]

    - by Olga
    I'm trying to read .csv file to import its contents to SQL table I'm getting error: ExecuteReader: Connection property has not been initialized. at the last line of this code: Function ImportData(ByVal FU As FileUpload, ByVal filename As String, ByVal tablename As String) As Boolean Try Dim xConnStr As String = "Driver={Microsoft Text Driver (*.txt; *.csv)};dbq=" & Path.GetDirectoryName(Server.MapPath(filename)) & ";extensions=asc,csv,tab,txt;" ' create your excel connection object using the connection string Dim objXConn As New System.Data.Odbc.OdbcConnection(xConnStr.Trim()) objXConn.Open() Dim objCommand As New OdbcCommand(String.Format("SELECT * FROM " & Path.GetFileName(Server.MapPath(filename)), objXConn)) If objXConn.State = ConnectionState.Closed Then objXConn.Open() Else objXConn.Close() objXConn.Open() End If ' create a DataReader Dim dr As OdbcDataReader dr = objCommand.ExecuteReader()

    Read the article

  • DNSCrypt-Proxy specific usage

    - by trekkiejonny
    I have some specific usage questions about DNSCrypt-Proxy. I followed a guide and ended with the command dnscrypt-proxy --daemonize –user=dnscrypt Without any further switches does it default to using OpenDNS's resolver server? I want to use specific servers, I found reference to pointing it to the full path of a CSV file. What is the CSV format for use with DNSCrypt? Would it be "address:port,public key"? Does it go in order of working addresses? For example, first resolver doesn't connect it moves on to the second line of the CSV, then third etc. Lastly, would this question be more appropriate in another StackExchange section?

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >