Search Results

Search found 3923 results on 157 pages for 'ldn tech exec'.

Page 93/157 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • Is there a software like AutoCAD, but only 2D, not 3D?

    - by Saeed Neamati
    I love AutoCAD. It's powerful, easy to use, and fits the engineering mind. It rocks when you want to draw 2D diagrams and shapes, and one of the cool features I like about it is its snapping power. You draw a simple rectangle, then you can very easily create a circumscribed circle around it. However, new versions of it are becoming really heavy (in size) and need high-end hardware, because of the powerful 3D capabilities and rendering engines that come with it. I only want the 2D part of AutoCAD. Is there any software out there that resembles AutoCAD (like having a console where you can type L and get the line tool, or having snapping capability, etc.) but is small in size? In other words, what software could be called an AutoCAD 2D alternative?

    Read the article

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:
      start on runlevel [2345]
      stop on runlevel [!2345]
      expect daemon
      exec /usr/bin/twistd -y /path/to/my/tac/file
    When I start the job with sudo service collector start, it hangs. If I Ctrl-C and run initctl list, I see this: collector start/killed, process 616. I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon' and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?
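
    One way to see what is actually going wrong, sketched below: drop the expect stanza, keep twistd in the foreground with -n so upstart tracks the real process, and send its output to a log file. The log path is only an example; the twistd and .tac paths are the ones from the question.
      # /etc/init/collector (debug variant, sketch)
      start on runlevel [2345]
      stop on runlevel [!2345]
      # no 'expect daemon': twistd -n stays in the foreground, so upstart
      # follows the right PID instead of reporting start/killed
      script
          exec /usr/bin/twistd -n -y /path/to/my/tac/file >> /var/log/collector-debug.log 2>&1
      end script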

    Read the article

  • How to use a proxy with VirtualBox (Windows XP)

    - by John
    I'm not exactly the most tech savvy, so please bear with me. I am trying to connect through a proxy with my VirtualBox machines. Initially I wanted to just set the browsers in each VirtualBox guest to use a proxy, but my boss said that it can probably be done right off the bat in VirtualBox itself. Could anyone guide me on how to do this? All the instructions online state that there should be a proxy setting in Preferences, however all I see is a Network Adapter tab. Thanks!

    Read the article

  • VMware Fusion on MacBook Pro running Vista, not working after installing a wireless card

    - by Shelley
    A tech friend installed VMware on my Mac so I could use my Windows programs as well. It worked great until I inserted my Sierra 881 USB wireless card while in VMware, trying to get to the internet while on the road. It worked briefly, but then the AT&T Communication Manager wouldn't respond when I clicked its icon, I couldn't open Network and Sharing Center, and I can't sync my Palm (it shows it can't connect). It looks like it messed up several things, along with the ability to connect to the internet while in Windows. The wireless card works fine directly from the Mac, but I need internet while in Windows for some work I am doing. How do I uninstall this? When I try, it just gives me an endless loading circle and never does anything. I really need to sync my Palm and get back to the way it was. I don't know much about VMware at all, and I don't have access to my friend right now to get help.

    Read the article

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the individually gzipped files. (The rationale is that data loss is smaller if a single gzipped file is corrupt than if the whole tarball is corrupted by gzip or copy errors.) Does anyone have experience with this? Are there drawbacks? Are there more solid/tested solutions for this than
      find folder -exec gzip '{}' \;
      tar cf folder.tar folder
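
    One minimal way to do it with standard tools, sketched below: compress each regular file in place, then tar the tree. GNU find and gzip are assumed; the -9 level and the integrity check are optional extras.
      # gzip every regular file that isn't already compressed, then tar the tree
      find folder -type f ! -name '*.gz' -exec gzip -9 {} +
      tar -cf folder.tar folder

      # later, spot-check individual members without touching the tarball
      find folder -name '*.gz' -exec gzip -t {} +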

    Read the article

  • Mac OS X: How to change the color label of files from the Terminal

    - by Svish
    Is there a way I can set the color label of a file from the Terminal? I know that the following command lists some info about what the color currently is, but I can't figure out how to change it:
      mdls -name kMDItemFSLabel somefile.ext
    The reason I would like to know is that I want to recursively mark all files of a certain type in a folder with a certain color label (in my case gray). I know how to do the finding:
      find . -name "*.ext"
    And I know how I can run a command afterwards for each file using -exec, but I need to know how to do the actual labeling... I would like a solution that only involves commands built into Mac OS X. So preferably no 3rd-party stuff, unless there is no other way.
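
    A sketch using only tools that ship with Mac OS X: find feeds each path to osascript, which asks the Finder to set the label index. The index-to-color mapping is an assumption here (7 showed up as gray where this was checked), and paths containing double quotes would need extra escaping.
      # bash; uses absolute paths so the Finder can resolve them
      find "$PWD" -name "*.ext" -print0 | while IFS= read -r -d '' f; do
          osascript -e "tell application \"Finder\" to set label index of (POSIX file \"$f\" as alias) to 7"
      done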

    Read the article

  • How can I access CPU temperature readings using C#?

    - by Paul
    For a programming project I would like to access the temperature readings from my CPU and GPUs. I will be using C#. From various forums I get the impression that there are specific pieces of information and developer resources you need in order to access that data for various boards. I have an MSI NF750-G55 board. MSI's website does not have any of the information I am looking for. I tried their tech support, and the rep I spoke with stated they do not have any such information. There must be a way to obtain that info. Any thoughts?

    Read the article

  • Our company has 100,000s+ photos, how to store and browse/find these efficiently?

    - by tobefound
    We currently store our photos in a structure like this:
      folder\1\10000 - 19999.JPG|ORF|TIF (10 000 files)
      folder\2\20000 - 29999.JPG|ORF|TIF (10 000 files)
      etc...
    They are stored on 4 different 2TB D-Link NASes attached and shared on our office network (\\nas1, \\nas2, and so on). Problems: 1) When a client (Windows only, Vista and 7) wishes to browse, say, the \\nas1\folder\1\ folder, performance is quite poor: the list takes a long time to generate in the Explorer window, even with icons turned off. 2) Initial access to the NAS itself is sometimes slow. SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100 000+ files in one single folder shouldn't be a problem, but we don't dare go there now that we experience problems at the 10K level. All input greatly appreciated, /T

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid-state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume an average mix of file sizes: program files, OS files, data, MP3s, text files, etc.)

    Read the article

  • RUNIT - created first service directory, "sv start testrun" does not work

    - by Veseliq
    I'm pretty new to runit. I installed it on an Ubuntu host. What I did:
    1) Created a dir testrun in /etc/sv.
    2) Created a script run in /etc/sv/testrun/run with this content:
      #! /bin/bash
      exec /root/FP/annotate-output python /root/FP/test.py | logger -t svtest
    3) If I call /etc/sv/testrun/run directly, it executes successfully.
    4) I run sv start testrun (or sv run testrun, sv restart testrun); all of them end up with the same error message: fail: sv: unable to change to service directory: file does not exist. Any ideas what I am doing wrong? I'm new to runit and base all my actions on the information found here: http://smarden.org/runit/
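
    A sketch of the usual fix: sv looks for service directories under /etc/service (or wherever $SVDIR points), not /etc/sv, so the new directory has to be linked into the scan directory and the run script must be executable. The paths are the ones from the question; the scan directory location is an assumption that can differ between runit packages.
      chmod +x /etc/sv/testrun/run
      ln -s /etc/sv/testrun /etc/service/testrun
      # runsvdir should pick the service up within a few seconds
      sv status testrun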

    Read the article

  • Automate hashing for each file in a folder?

    - by Kennie R.
    I have quite a few FTP folders, I add a few more each month, and I prefer to leave some method of verifying their integrity, for example the files MD5SUMS, SHA256SUMS, ..., which I could create using a script. Take for example:
      find ./ -type f -exec md5sum $1 {} \;
    This works fine, but when I then run it again for each SHA sum, it also hashes the MD5SUMS file itself, which is really not wanted. Is there a simpler script or a common way of hashing all the files into their sums files without causing problems like that? I could really use a better option.
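
    A sketch of one way to avoid hashing the checksum files themselves: exclude them by name in the find expression and write each listing in a single pass. GNU findutils and coreutils are assumed.
      find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec md5sum {} +    > MD5SUMS
      find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS

      # later, verify the folder against the stored sums
      md5sum -c MD5SUMS
      sha256sum -c SHA256SUMS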

    Read the article

  • peer to peer disk image transmission

    - by JackWu
    Installing Linux/Windows through PXE works smoothly for me, but downloading images (especially Windows) is a headache. Leaving aside the time, the bandwidth usage is horrible, so p2p tech comes to mind. But I have no clue how it works or where to start. Does anyone know how to set up a p2p local network and apply that to image transmission? Any advice, tutorials or experiences would be great. Thanks in advance.

    Read the article

  • Puppet, Windows, and UAC

    - by usedTobeaMember
    Is it possible to use Puppet on Windows where UAC is enabled? Does Puppet have any method for automatically saying yes to UAC prompts for a software install? My module does the following workflow:
    1) Downloads an MSI file locally from the puppetmaster.
    2) Creates a local batch file using the template function, which does cmd.exe /c msifile.msi /i /quiet ....
    3) Runs the bat file in an Exec.
    Unfortunately, it fails due to UAC, and I am wondering how people are working around UAC in their Windows Puppet environments. The Puppet documentation only seems to talk about Puppet's own executable in regards to UAC.

    Read the article

  • Run a script as root from apache

    - by Lord Loh.
    I would like to update my hosts file and restart dnsmasq from a web interface (PHP/Apache 2). I tried playing around with setuid bits (the demonstration). I have both Apache and dnsmasq running on an EC2 instance. I understand that Linux ignores the setuid bit on text scripts but honors it on binary files (have I got something wrong?). I added exec("whoami"); to the example C program on Wikipedia. Although the effective UID of the C program is 0, whoami does not return root :-( I would thoroughly like to avoid echo password | sudo service dnsmasq restart, or adding apache to the sudoers file with no password at all! Is there a way out? How does webmin do such things?
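
    A sketch of the usual compromise: instead of giving the web user blanket passwordless sudo, allow exactly one command through a drop-in sudoers rule. The www-data user name and the service path are assumptions; adjust them to the actual Apache user and distribution.
      # /etc/sudoers.d/dnsmasq-web  (always edit with: visudo -f /etc/sudoers.d/dnsmasq-web)
      www-data ALL=(root) NOPASSWD: /usr/sbin/service dnsmasq restart

      # the web app then only ever needs to run this exact command line:
      sudo /usr/sbin/service dnsmasq restart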

    Read the article

  • What's an alternative to using public folders (in Outlook)?

    - by Ivo Flipse
    My colleagues abuse our mail server's public folders to store (old) emails so that everyone can read them using IMAP. I'm looking into good alternatives after reading the TechRepublic article "10 reasons why you should begin phasing out Exchange public folders". The most important thing they need is access to emails from multiple computers without overloading our network. Do you have any suggestions for alternatives? If there's a nice combination with some CRM system, that would be interesting too. Note: this doesn't have to be freeware; usability and efficiency are more important. The solution has to be Windows 32-bit only.

    Read the article

  • Problems using "at" with Apache

    - by Alex Padgett
    I'm trying to use a PHP script to create at jobs, but when it comes time to execute the jobs, nothing seems to be happening. I've tried to output any errors to log files, but have had no luck. It seems obvious that it's a permissions issue, because when I set Apache to run as my personal user, everything works fine. However, when I exec wget directly from PHP, everything also works fine, so it seems that Apache has the correct permissions to use it. The problem only appears when using at in conjunction with Apache, so I need to find a way to make this work with Apache running as its own user. Here is the command I'm using:
      echo "wget -qO- http://example.com/" | at now + 1 minute 2>&1
    Any ideas? EDIT: Apache can create the at jobs; it just seems that when they execute, nothing happens.
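
    A few quick checks, sketched below: make sure the web server's user is not blocked by at's allow/deny files, and capture the job's own output in a file you can read, since at mails it instead of logging it. The www-data user name and the /tmp log path are assumptions.
      # the submitting user must not be in at.deny (or must be in at.allow if that file exists)
      grep -n www-data /etc/at.allow /etc/at.deny 2>/dev/null

      # redirect inside the job itself so stdout/stderr land in a readable file
      echo "wget -qO- http://example.com/ >> /tmp/at-job.log 2>&1" | at now + 1 minute

      atq                      # confirm the job was queued
      cat /tmp/at-job.log      # inspect the result after it has run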

    Read the article

  • correct format for datetime appended to filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:
      osql.exe -S <server> -E -Q "EXEC <stored procedure> " -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"
    The error I get is: Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt No such file or directory. I can't find the documentation, and I've played around with it but can't find the correct format.

    Read the article

  • Possible to run system-config-network during %post kickstart?

    - by tore-
    Hello, I'm currently trying to figure out a smart way to set the IP, hostname, gateway and DNS settings during a kickstart (with user input during the kickstart). Doing this with firstboot after the install is not acceptable, so it must be done during %post. I've tried to run the system-config-network tool during %post on tty6:
      #!/bin/sh
      chvt 6
      exec < /dev/tty6 > /dev/tty6
      /usr/bin/system-config-network-tui
    This doesn't work, since for some reason I'm unable to catch user input. I would prefer to use built-in tools to change this during %post, rather than write my own bash script, since using the provided tools is less likely to break anything. Has someone done anything like this or similar, and made it work? Thanks
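
    A variant worth trying, sketched below: attach stderr to the same terminal, switch consoles only after the redirection, and switch back when the tool exits. This is only a sketch; whether the installer lets a %post script take over a console varies between anaconda versions.
      #!/bin/sh
      exec < /dev/tty6 > /dev/tty6 2>&1
      chvt 6
      /usr/bin/system-config-network-tui
      chvt 1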

    Read the article

  • (date) script for freenas

    - by malgaboy
    My web server is running FreeNAS 8.0.4. Currently /etc/crontab has this daily task:
      find /mnt/vol1/1 -type f -mtime +16 -exec rm -R {} \;
    This lets me delete files older than 16 days; in fact, every night a new file is added to the /mnt/vol1/1 folder. But now I want to make a script that checks the folder and keeps only the 2 youngest files, without taking the days into consideration. Thanks in advance. (Sorry for my bad English.)
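
    A minimal sketch of that idea: sort the folder's files by modification time, skip the two newest, and remove the rest. It assumes plain filenames (no newlines) in /mnt/vol1/1.
      #!/bin/sh
      cd /mnt/vol1/1 || exit 1
      # ls -t lists newest first; tail -n +3 drops the two newest entries
      ls -t | tail -n +3 | while IFS= read -r f; do
          rm -f -- "$f"
      done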

    Read the article

  • Accessing Oracle DB through SQL Server using OPENROWSET

    - by Ken Paul
    I'm trying to access a large Oracle database through SQL Server using OPENROWSET in client-side Javascript, and not having much luck. Here are the particulars: A SQL Server view that accesses the Oracle database using OPENROWSET works perfectly, so I know I have valid connection string parameters. However, the new requirement is for extremely dynamic Oracle queries that depend on client-side selections, and I haven't been able to get dynamic (or even parameterized) Oracle queries to work from SQL Server views or stored procedures. Client-side access to the SQL Server database works perfectly with dynamic and parameterized queries. I cannot count on clients having any Oracle client software. Therefore, access to the Oracle database has to be through the SQL Server database, using views, stored procedures, or dynamic queries using OPENROWSET. Because the SQL Server database is on a shared server, I'm not allowed to use globally-linked databases. My idea was to define a function that would take my own version of a parameterized Oracle query, make the parameter substitutions, wrap the query in an OPENROWSET, and execute it in SQL Server, returning the resulting recordset. Here's sample code: // db is a global variable containing an ADODB.Connection opened to the SQL Server DB // rs is a global variable containing an ADODB.Recordset . . . ss = "SELECT myfield FROM mytable WHERE {param0} ORDER BY myfield;"; OracleQuery(ss,["somefield='" + somevalue + "'"]); . . . function OracleQuery(sql,params) { var s = sql; var i; for (i = 0; i < params.length; i++) s = s.replace("{param" + i + "}",params[i]); var e = "SELECT * FROM OPENROWSET('MSDAORA','(connect-string-values)';" + "'user';'pass','" + s.split("'").join("''") + "') q"; try { rs.Open("EXEC ('" + e.split("'").join("''") + "')",db); } catch (eobj) { alert("SQL ERROR: " + eobj.description + "\nSQL: " + e); } } The SQL error that I'm getting is Ad hoc access to OLE DB provider 'MSDAORA' has been denied. You must access this provider through a linked server. which makes no sense to me. The Microsoft explanation for this error relates to a registry setting (DisallowAdhocAccess). This is set correctly on my PC, but surely this relates to the DB server and not the client PC, and I would expect that the setting there is correct since the view mentioned above works. One alternative that I've tried is to eliminate the enclosing EXEC in the Open statement: rs.Open(e,db); but this generates the same error. I also tried putting the OPENROWSET in a stored procedure. This works perfectly when executed from within SQL Server Management Studio, but fails with the same error message when the stored procedure is called from Javascript. Is what I'm trying to do possible? If so, can you recommend how to fix my code? Or is a completely different approach necessary? Any hints or related information will be welcome. Thanks in advance.

    Read the article

  • Using windows command line from Pascal

    - by Jordan
    I'm trying to use some windows command line tools from within a short Pascal program. To make it easier, I'm writing a function called DoShell which takes a command line string as an argument and returns a record type called ShellResult, with one field for the process's exitcode and one field for the process's output text. I'm having major problems with some standard library functions not working as expected. The DOS Exec() function is not actually carrying out the command i pass to it. The Reset() procedure gives me a runtime error RunError(2) unless i set the compiler mode {I-}. In that case i get no runtime error, but the Readln() functions that i use on that file afterwards don't actually read anything, and furthermore the Writeln() functions used after that point in the code execution do nothing as well. Here's the source code of my program so far. I'm using Lazarus 0.9.28.2 beta, with Free Pascal Compiler 2.24 program project1; {$mode objfpc}{$H+} uses Classes, SysUtils, StrUtils, Dos { you can add units after this }; {$IFDEF WINDOWS}{$R project1.rc}{$ENDIF} type ShellResult = record output : AnsiString; exitcode : Integer; end; function DoShell(command: AnsiString): ShellResult; var exitcode: Integer; output: AnsiString; exepath: AnsiString; exeargs: AnsiString; splitat: Integer; F: Text; readbuffer: AnsiString; begin //Initialize variables exitcode := 0; output := ''; exepath := ''; exeargs := ''; splitat := 0; readbuffer := ''; Result.exitcode := 0; Result.output := ''; //Split command for processing splitat := NPos(' ', command, 1); exepath := Copy(command, 1, Pred(splitat)); exeargs := Copy(command, Succ(splitat), Length(command)); //Run command and put output in temporary file Exec(FExpand(exepath), exeargs + ' __output'); exitcode := DosExitCode(); //Get output from file Assign(F, '__output'); Reset(F); Repeat Readln(F, readbuffer); output := output + readbuffer; readbuffer := ''; Until Eof(F); //Set Result Result.exitcode := exitcode; Result.output := output; end; var I : AnsiString; R : ShellResult; begin Writeln('Enter a command line to run.'); Readln(I); R := DoShell(I); Writeln('Command Exit Code:'); Writeln(R.exitcode); Writeln('Command Output:'); Writeln(R.output); end.

    Read the article

  • Using DateTime in a SqlParameter for Stored Procedure, format error

    - by Matt
    I'm trying to call a stored procedure (on a SQL 2005 server) from C#, .NET 2.0 using DateTime as a value to a SqlParameter. The SQL type in the stored procedure is 'datetime'. Executing the sproc from SQL Management Studio works fine. But everytime I call it from C# I get an error about the date format. When I run SQL Profiler to watch the calls, I then copy paste the exec call to see whats going on. These are my observations and notes about what I've attempted: 1) If I pass the DateTime in directly as a DateTime or converted to SqlDateTime, the field is surrounding by a PAIR of single quotes, such as @Date_Of_Birth=N''1/8/2009 8:06:17 PM'' 2) If I pass the DateTime in as a string, I only get the single quotes 3) Using SqlDateTime.ToSqlString() does not result in a UTC formatted datetime string (even after converting to universal time) 4) Using DateTime.ToString() does not result in a UTC formatted datetime string. 5) Manually setting the DbType for the SqlParameter to DateTime does not change the above observations. So, my questions then, is how on earth do I get C# to pass the properly formatted time in the SqlParameter? Surely this is a common use case, why is it so difficult to get working? I can't seem to convert DateTime to a string that is SQL compatable (e.g. '2009-01-08T08:22:45') EDIT RE: BFree, the code to actually execute the sproc is as follows: using (SqlCommand sprocCommand = new SqlCommand(sprocName)) { sprocCommand.Connection = transaction.Connection; sprocCommand.Transaction = transaction; sprocCommand.CommandType = System.Data.CommandType.StoredProcedure; sprocCommand.Parameters.AddRange(parameters.ToArray()); sprocCommand.ExecuteNonQuery(); } To go into more detail about what I have tried: parameters.Add(new SqlParameter("@Date_Of_Birth", DOB)); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime())); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime().ToString())); SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB.ToUniversalTime(); parameters.Add(param); SqlParameter param = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime); param.Value = new SqlDateTime(DOB.ToUniversalTime()); parameters.Add(param); parameters.Add(new SqlParameter("@Date_Of_Birth", new SqlDateTime(DOB.ToUniversalTime()).ToSqlString())); Additional EDIT The one I thought most likely to work: SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB; Results in this value in the exec call as seen in the SQL Profiler @Date_Of_Birth=''2009-01-08 15:08:21:813'' If I modify this to be @Date_Of_Birth='2009-01-08T15:08:21' It works, but it won't parse with pair of single quotes, and it wont convert to a datetime correctly with the space between the date and time and with the milliseconds on the end. Update and Success First and foremost, thank you everyone for the answers. I post this for the sake of completeness and accuracy on SO - because I certainly do not do it for my pride... I had copy/pasted the code above after the request from below. I trimmed things here and there to be concise. Turns out my problem was in the code I left out, which I'm sure any one of you would have spotted in an instant. I had wrapped my sproc calls inside a transaction. Turns out that I was simply not doing transaction.Commit()!!!!! I'm ashamed to say it, but there you have it. I still don't know what's going on with the syntax I get back from the profiler. 
A coworker watched with his own instance of the profiler from his computer, and it returned proper syntax. Watching the very SAME executions from my profiler showed the incorrect syntax. It acted as a red-herring, making me believe there was a query syntax problem instead of the much more simple and true answer, which was that I need to commit the transaction! I marked an answer below as correct, and threw in some up-votes on others because they did, after all, answer the question, even if they didn't fix my specific (brain lapse) issue. Thanks again for the help.

    Read the article

  • How can I get the output of a command terminated by an alarm() call in Perl?

    - by rockyurock
    Case 1 If I run below command i.e iperf in UL only, then i am able to capture the o/p in txt file @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1"); open FILE, ">Misplay_DL.txt" or die $!; print FILE @output; close FILE; Case 2 When I run iperf in DL mode , as we know server will start listening in cont. mode like below even after getting data from client (Here i am using server and client on LAN) @output = system("iperf.exe -u -s -p 5001 -i 1"); on server side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -s -p 5001 ------------------------------------------------------------ Server listening on UDP port 5001 Receiving 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878 [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams [1896] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) command prompt does not appear , process is contd... on client side: D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITYiperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1 ------------------------------------------------------------ Client connecting to 192.168.5.101, UDP port 5001 Sending 1470 byte datagrams UDP buffer size: 8.00 KByte (default) ------------------------------------------------------------ [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001 [ ID] Interval Transfer Bandwidth [1880] 0.0- 1.0 sec 441 KBytes 3.61 Mbits/sec [1880] 1.0- 2.0 sec 439 KBytes 3.60 Mbits/sec [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec [1880] Server Report: [1880] 0.0- 2.0 sec 881 KBytes 3.58 Mbits/sec 0.000 ms 0/ 614 (0%) [1880] Sent 614 datagrams D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTO MATION_APP_\AUTOMATION_UTILITY so with this as server is cont. listening and never terminates so can't take output of server side to a txt file as it is going to the next command itself to create a txt file so i adopted the alarm() function to terminate the server side (iperf.exe -u -s -p 5001) commands after it received all data from the client. could anybody suggest me the way.. Here is my code: #! /usr/bin/perl -w my $command = "iperf.exe -u -s -p 5001"; my @output; eval { local $SIG{ALRM} = sub { die "Timeout\n" }; alarm 20; #@output = `$command`; #my @output = readpipe("iperf.exe -u -s -p 5001"); #my @output = exec("iperf.exe -u -s -p 5001"); my @output = system("iperf.exe -u -s -p 5001"); alarm 0; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", @output; } open FILE, ">display.txt" or die $!; print FILE @output_1; close FILE; i know that with system command i cannot capture the o/p to a txt file but i tried with readpipe() and exec() calls also but in vain... could some one please take a look and let me know why the iperf.exe -u -s -p 5001 is not terminating even after the alarm call and to take the out put to a txt file

    Read the article

  • Getting a Temporary Table Returned from Dynamic SQL in SQL Server 05, and parsing it

    - by gloomy.penguin
    So I was requested to make a few things.... (it is Monday morning and for some reason this whole thing is turning out to be really hard for me to explain so I am just going to try and post a lot of my code; sorry) First, I needed a table: CREATE TABLE TICKET_INFORMATION ( TICKET_INFO_ID INT IDENTITY(1,1) NOT NULL, TICKET_TYPE INT, TARGET_ID INT, TARGET_NAME VARCHAR(100), INFORMATION VARCHAR(MAX), TIME_STAMP DATETIME DEFAULT GETUTCDATE() ) -- insert this row for testing... INSERT INTO TICKET_INFORMATION (TICKET_TYPE, TARGET_ID, TARGET_NAME, INFORMATION) VALUES (1,1,'RT_ID','IF_ID,int=1&IF_ID,int=2&OTHER,varchar(10)=val,ue3&OTHER,varchar(10)=val,ue4') The Information column holds data that needs to be parsed into a table. This is where I am having problems. In the resulting table, Target_Name needs to become a column that holds Target_ID as a value for each row in the resulting table. The string that needs to be parsed is in this format: @var_name1,@var_datatype1=@var_value1&@var_name2,@var_datatype2=@var_value2&@var_name3,@var_datatype3=@var_value3 And what I ultimately need as a result (in a table or table variable): RT_ID IF_ID OTHER 1 1 val,ue3 1 2 val,ue3 1 1 val,ue4 1 2 val,ue4 And I need to be able to join on the result. Initially, I was just going to make this a function that returns a table variable but for some reason I can't figure out how to get it into an actual table variable. Whatever parses the string needs to be able to be used directly in queries so I don't think a stored procedure is really the right thing to be using. This is the code that parses the Information string... it returns in a temporary table. -- create/empty temp table for var_name, var_type and var_value fields if OBJECT_ID('tempdb..#temp') is not null drop table #temp create table #temp (row int identity(1,1), var_name varchar(max), var_type varchar(30), var_value varchar(max)) -- just setting stuff up declare @target_name varchar(max), @target_id varchar(max), @info varchar(max) set @target_name = (select target_name from ticket_information where ticket_info_id = 1) set @target_id = (select target_id from ticket_information where ticket_info_id = 1) set @info = (select information from ticket_information where ticket_info_id = 1) --print @info -- some of these variables are re-used later declare @col_type varchar(20), @query varchar(max), @select as varchar(max) set @query = 'select ' + @target_id + ' as ' + @target_name + ' into #target; ' set @select = 'select * into ##global_temp from #target' declare @var_name varchar(100), @var_type varchar(100), @var_value varchar(100) declare @comma_pos int, @equal_pos int, @amp_pos int set @comma_pos = 1 set @equal_pos = 1 set @amp_pos = 0 -- while loop to parse the string into a table while @amp_pos < len(@info) begin -- get new comma position set @comma_pos = charindex(',',@info,@amp_pos+1) -- get new equal position set @equal_pos = charindex('=',@info,@amp_pos+1) -- set stuff that is going into the table set @var_name = substring(@info,@amp_pos+1,@comma_pos-@amp_pos-1) set @var_type = substring(@info,@comma_pos+1,@equal_pos-@comma_pos-1) -- get new ampersand position set @amp_pos = charindex('&',@info,@amp_pos+1) if @amp_pos=0 or @amp_pos<@equal_pos set @amp_pos = len(@info)+1 -- set last variable for insert into table set @var_value = substring(@info,@equal_pos+1,@amp_pos-@equal_pos-1) -- put stuff into the temp table insert into #temp (var_name, var_type, var_value) values (@var_name, @var_type, @var_value) -- is this a new field? 
if ((select count(*) from #temp where var_name = (@var_name)) = 1) begin set @query = @query + ' create table #' + @var_name + '_temp (' + @var_name + ' ' + @var_type + '); ' set @select = @select + ', #' + @var_name + '_temp ' end set @query = @query + ' insert into #' + @var_name + '_temp values (''' + @var_value + '''); ' end if OBJECT_ID('tempdb..##global_temp') is not null drop table ##global_temp exec (@query + @select) --select @query --select @select select * from ##global_temp Okay. So, the result I want and need is now in ##global_temp. How do I put all of that into something that can be returned from a function (or something)? Or can I get something more useful returned from the exec statement? In the end, the results of the parsed string need to be in a table that can be joined on and used... Ideally this would have been a view but I guess it can't with all the processing that needs to be done on that information string. Ideas? Thanks!

    Read the article


< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >