Search Results

Search found 17233 results on 690 pages for 'download speed'.


  • iPhone SDK: Downloading large files from a server into the app's documents.

    - by Jessica
    Hi, I am building an app that plays multiple video files. How do you download a video file (100 MB - 300 MB) from a server into the application's Documents directory so it can later be referred to locally in code? The reason I want this kind of setup is that I don't want the app binary to become unnecessarily large by bundling videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.
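
    As a general pattern, independent of the iPhone SDK (a Python sketch for illustration only; the URL and file name are hypothetical): stream the response to a local file in chunks and derive progress from bytes received versus the Content-Length header.

        import urllib.request

        def download_with_progress(url, dest_path, chunk_size=64 * 1024):
            """Stream a large file to disk, printing rough progress as it goes."""
            with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
                total = int(resp.headers.get("Content-Length") or 0)
                done = 0
                while True:
                    chunk = resp.read(chunk_size)
                    if not chunk:
                        break
                    out.write(chunk)
                    done += len(chunk)
                    if total:
                        print(f"\r{done * 100 // total}% of {total} bytes", end="")
            print()

        # download_with_progress("https://example.com/movie.mp4", "movie.mp4")  # hypothetical

    On iOS the same shape applies: write each received chunk into the Documents directory and update the progress view from the bytes-received/expected-length ratio.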

    Read the article

  • Curl download image only if not older than 2 days

    - by mark
    I want to download an image from a remote server, but only if it is not older than 2 days. I have put together the code below. Is this right? I want to know the last_modified date first, before downloading.

        $ch = curl_init($file_source); // the file we are downloading
        curl_setopt($ch, CURLOPT_TIMEOUT, 20);
        curl_setopt($ch, CURLOPT_FILE, $wh);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILETIME, true);
        curl_exec($ch);
        $headers = curl_getinfo($ch);
        $last_modified = $headers['filetime'];
        if ($last_modified != -1) { // unknown
            echo date("Y-m-d", $last_modified); //etc
        }
        curl_close($ch);
        fclose($wh);
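
    The same "check Last-Modified first, then download" idea, sketched in Python rather than PHP purely for illustration (it assumes the server answers HEAD requests; the URL is hypothetical):

        import urllib.request
        from email.utils import parsedate_to_datetime
        from datetime import datetime, timedelta, timezone

        url = "https://example.com/image.jpg"  # hypothetical

        # First request: headers only, no body transferred.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            last_modified = resp.headers.get("Last-Modified")

        two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)
        if last_modified and parsedate_to_datetime(last_modified) >= two_days_ago:
            # Second request: fetch the body only when the file is at most 2 days old.
            urllib.request.urlretrieve(url, "image.jpg")

    Note that the PHP snippet above transfers the file first and checks the filetime afterwards; to skip the transfer entirely you need a separate header-only request (CURLOPT_NOBODY in cURL terms) before the real download.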

    Read the article

  • disable download of my paid app in Android

    - by Boy
    I have a paid app in the store which, when installed on a device, removes the ads in another app. Now I want to remove this 'remove ads' app, for instance because I want to switch to an in-app payment for this (or maybe I just keep the ads version only). But the problem is, if I unpublish the app, people who bought it will not be able to download it again when they get a new phone or reset their phone. How do I keep the app in the Play Store, but prevent people from buying it? Is this possible? My backup plan is: make the app cost 10,000 euros and put in a message that this app should not be bought anymore. But I don't like that...

    Read the article

  • How do I determine the video file size on youtube in Java?

    - by user1753343
    I am using the YouTube API to gather different information about videos. The only missing attribute so far is the size; the API itself doesn't provide it. I googled, but didn't find any solution. Indirect way: my next idea was to get the path to the video file itself and make a GET request, so that in the response headers I could check for the file size. So I searched for "video / download / youtube / java". Some time ago YouTube used get_video_info, but this doesn't work today. I also found an application called JavaYoutubeDownloader, but it seems VERY complicated for just getting the file size, and it doesn't work either (it just prints "finish" without downloading anything). So, is there a way to get the file size of a YouTube video using Java? If not, what would be a practical solution for this problem (a list of video_ids exists)?
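
    The underlying idea is language-agnostic: issue a HEAD request against the direct file URL and read the Content-Length header instead of downloading the body. A Python sketch, assuming you already have a direct URL (which, as noted above, is the hard part with YouTube):

        import urllib.request

        def remote_file_size(url):
            """Return the size advertised by the server, or None if it omits it."""
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                length = resp.headers.get("Content-Length")
                return int(length) if length else None

        # print(remote_file_size("https://example.com/video.mp4"))  # hypothetical URL

    In Java the shape is the same: open a connection with the HEAD method and read back the content-length header, without ever fetching the body.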

    Read the article

  • download a complete folder

    - by Christian
    Hi, in my app I use several PNG graphics. In the present version they are installed with the app in the folder "graphic", which I made while programming. Now I need some more PNG graphics, and I don't want to ship an app update each time. How can I arrange for the app to download the PNG files from my web server without knowing their names? I am looking for something that compares the files on the web server with the files on the iPhone and, if there is a new (or newer) file, downloads it. Or is it possible to make a plist file listing the graphics?

    Read the article

  • Download content of the page using ajax jquery

    - by niao
    Greetings, how can I download some page content using Ajax and jQuery? I am doing something like this (two versions inside one script):

        $("p").click(function() {
            $('#result').load('http://google.com');
            $.ajax({
                url='www.google.com',
                success: function(data) {
                    $("result").html(data);
                    alert('Load was performed.');
                    var url = 'www.wp.pl';
                    $('div#result').load(url);
                    //var content = $.load(url);
                    //alert(content);
                    //$("#result").html("test");
                }
            });
        });

    but it does not return any content.

    Read the article

  • Download File from Web C++ (with winsock?)

    - by Lienau
    I need to download files/read strings from a specified URL in C++. I've done some research on this; cURL seems to be the most popular method, and I've used it before in PHP. The problem with cURL is that the lib is huge, and my file has to be small. I think you can do it with Winsock, but I can't find any simple examples. If you have a simple Winsock example, a lightweight cURL alternative, or anything else that could get the job done, I would greatly appreciate it. Also, I need this to work with native C++.

    Read the article

  • download html source android?

    - by mars
    I'm trying to download a website's source code and display it in a textbox, but I seem to get an error and can't figure it out :s

        public void getHtml() throws ClientProtocolException, IOException {
            HttpClient httpClient = new DefaultHttpClient();
            HttpContext localContext = new BasicHttpContext();
            HttpGet httpGet = new HttpGet("http://www.spartanjava.com");
            HttpResponse response = httpClient.execute(httpGet, localContext);
            String result = "";
            BufferedReader reader = new BufferedReader(
                new InputStreamReader(
                    response.getEntity().getContent()
                )
            );
            String line = null;
            while ((line = reader.readLine()) != null) {
                result += line + "\n";
                Toast.makeText(activity.this, line.toString(), Toast.LENGTH_LONG).show();
            }
        }

    Why doesn't this work, and why does it throw an IOException?

    Read the article

  • Downloading Spring Framework 3.2.x

    - by PM 77-1
    I'm looking at the SpringSource community download site. It shows that 3.2.4 is the latest general release. Its zip file has a 'dist' suffix, and its content is different from the latest release on the 3.1 branch, 3.1.4 (which does not have the 'dist' ending). 3.1.4 has the directories dist, projects, and src; the dist folder contains org.springframework...* jars. 3.2.4 has the directories docs, libs, and schema; the libs folder contains spring-... jars. Was there a major change between the 3.1 and 3.2 releases? According to this accepted answer there was, but I was not able to find anything about it. Does anybody have any particulars? Should I get 3.1.4 for now?

    Read the article

  • Does the Fast Video Download addon share my data?

    - by Frost Shadow
    I've installed the Fast Video Download addon, version 3.0.8, for Firefox to download Flash videos, for example from YouTube. What I'm wondering is: how does the addon download them, and do other people see that I'm downloading the videos? For example, is all the software needed to download the video already on my computer, or does the addon contact someone else to get the video, or let them know? Can the webpage's administrator see that I'm downloading the video?

    Read the article

  • How to speed up WPF programs?

    - by Sam
    I love programming with and for Windows Presentation Foundation. Mostly I write browser-like apps using WPF and XAML. But what really annoys me is the slowness of WPF. A simple page with only a few controls loads fast enough, but as soon as a page is a teeny weeny bit more complex, like containing a lot of data entry fields, one or two tab controls, and so on, it gets painful. Loading such a page can take more than one second. Seconds, indeed; especially on not-so-fast computers (read: the customers' computers) it can take ages. The same goes for changing values on the page. Everything about the WPF UI is somehow sluggish. This is so mean! They give me this beautiful framework, but make it so excruciatingly slow that I have to apologize to our customers all the time! My question: how do you speed up WPF? How do you profile bottlenecks? How do you deal with the slowness? Since this seems to be a universal problem with WPF, I'm looking for general advice, useful for many situations and problems. Some other related questions: What tools do you use for WPF development; Tools to develop WPF or Silverlight applications.

    Read the article

  • Still about SSD potentials...write and read speed

    - by Macroideal
    Hi gurus, I have been working on SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, because I am new to SSDs. Recently I have been testing the read-write speed of the SSD, which is what I care about most. However, the results turned out worse than I expected. Three kinds of read-write were implemented in my test:

        1. Read and write directly from and into the SSD, opening the SSD as a whole device (in Windows: _open("\\:g", ***)). This can be very tricky and hairy, in that you have to write data whose size is a multiple of 512 bytes, at a disk position that is also a multiple of 512 bytes. So if you want to write just one byte or 4 bytes, you have to write at least a whole sector at a time.
        2. Read and write data from and into files located on the SSD.
        3. Read and write data from and into files on a mechanical disk.

    I compared these practices and found the SSD disappointing: it performs worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be a substitute for mechanical disks in the future. Nevertheless, when I test the SSD with a professional hard-disk benchmarking tool, the SSD is about twice as fast as the mechanical disk. So, why? Thanks very much. If you know any tips about SSDs, please share.

    Read the article

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully, at 0.32 seconds on ideone.com, to generate prime numbers up to 1,000,000.

        # from bitstring import BitString
        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (1 byte each, or rather 4 or 8 bytes each in Python; see the comments) is really not feasible, so I looked into bitstring. I figured that using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for primes up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three marked lines in the snippet above to see what I changed to use BitString. My question is: is there a way to speed up my program, with or without bitstring?
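
    A middle ground worth sketching here (an assumption on my part, not code from the question): keep one byte per flag in a bytearray instead of one list element per flag. Indexing and slice assignment stay at C speed, and the memory for a 10**9 limit drops to roughly 1 GB instead of the several gigabytes the list of booleans needs.

        def prime_numbers_bytearray(limit=1000000):
            '''Same sieve as above, with a bytearray as the flag store.'''
            yield 2
            sub_limit = int(limit ** 0.5)
            flags = bytearray([1]) * limit          # 1 = "still possibly prime"
            for i in range(3, limit, 2):
                if not flags[i]:
                    continue
                yield i
                if i <= sub_limit:
                    # Zero out every further multiple in one slice assignment.
                    flags[i * 3 :: i << 1] = bytearray(len(range(i * 3, limit, i << 1)))

    A true bit-per-flag structure (for example the bitarray package) would shrink memory further still, at some cost in indexing speed.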

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that is using the built-in ControlStyles.DoubleBuffer. Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the opposite diagonal one, that bounding box is virtually as big as the canvas. When a user is drawing a line, this invalidation of the bounding box happens on the mousemove event, and there is a clear lag visible. This lag also exists if the line is the only object on the canvas. I've tried to optimize this in many ways. If I draw a shorter line, the CPU usage and the lag go down. If I remove the Invalidate() and keep all other code, the app is quick. If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow. If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidation area, no visible performance gain can be seen. Thus I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem to be able to paint a line as described above with no lag and no CPU load at all. Thus it's possible to achieve the desired results; the question is how.

    Read the article

  • AS3 microphone recording/saving works, in-flash PCM playback double speed

    - by Lowgain
    I have a working mic recording script in AS3 which I have been able to successfully use to save .wav files to a server through AMF. These files play back fine in any audio player, with no weird effects. For reference, here is what I am doing to capture the mic's ByteArray (within a class called AudioRecorder):

        public function startRecording():void {
            _rawData = new ByteArray();
            _microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, _samplesCaptured, false, 0, true);
        }

        private function _samplesCaptured(e:SampleDataEvent):void {
            _rawData.writeBytes(e.data);
        }

    This works with no problems. After the recording is complete I can take the _rawData variable and run it through a WavWriter class, etc. However, I then play this same ByteArray as a sound using the following code, which I adapted from the Adobe cookbook (within a class called WavPlayer):

        public function playSound(data:ByteArray):void {
            _wavData = data;
            _wavData.position = 0;
            _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, _playSoundHandler);
            _channel = _sound.play();
            _channel.addEventListener(Event.SOUND_COMPLETE, _onPlaybackComplete, false, 0, true);
        }

        private function _playSoundHandler(e:SampleDataEvent):void {
            if(_wavData.bytesAvailable <= 0) return;
            for(var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if(_wavData.bytesAvailable > 0) sample = _wavData.readFloat();
                e.data.writeFloat(sample);
            }
        }

    The audio file plays at double speed! I checked recording bitrates and such and am pretty sure those are all correct, and I tried changing the buffer size and whatever other numbers I could think of. Could it be a mono vs stereo thing? Hope I was clear enough here, thanks!

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
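
    A rough illustration of the coarse first pass described above (a Python sketch; field, grid_size, block and margin are hypothetical names, and the fine marching-cubes pass is assumed to run only inside the returned blocks):

        def active_blocks(field, grid_size, isolevel, block=8, margin=0.0):
            """Return origins of coarse blocks whose corner samples straddle the isolevel."""
            blocks = []
            for bx in range(0, grid_size, block):
                for by in range(0, grid_size, block):
                    for bz in range(0, grid_size, block):
                        # Sample only the eight corners of the coarse block.
                        corners = [field(x, y, z)
                                   for x in (bx, min(bx + block, grid_size - 1))
                                   for y in (by, min(by + block, grid_size - 1))
                                   for z in (bz, min(bz + block, grid_size - 1))]
                        lo, hi = min(corners), max(corners)
                        # Keep the block only if the surface could pass through it.
                        if lo - margin <= isolevel <= hi + margin:
                            blocks.append((bx, by, bz))
            return blocks

    The main pitfall is exactly the one the question hints at: a thin feature can cross a block without pushing its corner samples past the isolevel, so the margin (or a denser coarse sampling) has to be generous enough to catch it.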

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas on how to speed up performance, or any suggestions of another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /**
             * From Coordinate_id value
             */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /**
             * To Coordinate_id value
             */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }

    Read the article

  • PHP readfile() and large downloads

    - by Nirmal
    I am setting up an online file management system, and now I have hit a block. I am trying to push the file to the client using this modified version of readfile:

        function readfile_chunked($filename, $retbytes = true) {
            $chunksize = 1*(1024*1024); // how many bytes per chunk
            $buffer = '';
            $cnt = 0;
            // $handle = fopen($filename, 'rb');
            $handle = fopen($filename, 'rb');
            if ($handle === false) {
                return false;
            }
            while (!feof($handle)) {
                $buffer = fread($handle, $chunksize);
                echo $buffer;
                ob_flush();
                flush();
                if ($retbytes) {
                    $cnt += strlen($buffer);
                }
            }
            $status = fclose($handle);
            if ($retbytes && $status) {
                return $cnt; // return num. bytes delivered like readfile() does.
            }
            return $status;
        }

    But when I try to download a 13 MB file, it breaks at 4 MB. What would be the issue here? It's definitely not a time limit of any kind, because I am working on a local network and speed is not an issue. The memory limit in PHP is set to 300 MB. Thank you for any help.

    Read the article

  • Why does a conditional not affect query speed?

    - by Telos
    I have a stored procedure that was taking a "long" time to execute. The query only needs to return data in one case, so I figured I could check for that case and just return before hitting the actual query. The only problem is that it still takes the same amount of time to execute with the if statement. I have verified that the code inside the if is not executing, and that if I replace the complex query with a simple select, the speed is fine... so now I'm confused. Why is the query being slowed down by code that doesn't get executed when the conditional is false? Here's the query itself:

        ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
            @pa_record int,
            @pb_record int
        AS
        BEGIN
            SET NOCOUNT ON;

            declare @ticketRec int

            SELECT @ticketRec = TicketRecord
            FROM eservice_live..v_sdticket
            where TicketRecord = @pa_record
              AND serviceCompanyID = 1139
              AND @pb_record IS NULL

            if @ticketRec IS NULL return

            select record = null,
                   doc_ref = @pa_record,
                   memo_type = 'I',
                   memo = 'Bottler: ' + isnull(Bottler, '') + ' ' +
                          'Sales Loc: ' + isnull(SalesLocation, '') + ' ' +
                          'Outlet Desc: ' + isnull(OutletDesc, '') + ' ' +
                          'City: ' + isnull(OutletCity, '') + ' ' +
                          'EquipNo: ' + isnull(EquipNo, '') + ' ' +
                          'SerialNo: ' + isnull(SerialNo, '') + ' ' +
                          'PhaseNo: ' + isnull(cast(PhaseNo as varchar(255)), '') + ' ' +
                          'StaticIP: ' + isnull(StaticIP, '') + ' ' +
                          'Air Card: ' + isnull(AirCard, '')
            FROM eservice_live..v_SDExtendedInfoField ef
            JOIN eservice_live..CokeSNList csl ON ef.valueText = csl.SerialNo
            where ef.docType = 'CLH'
              AND ef.docref = @ticketRec
              AND ef.ExtendedDocNumber = 5

            SET NOCOUNT OFF;
        END

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas on how to speed up performance, or any suggestions of another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6 - 45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent on the "MoveNext" and "GetEnumerator" calls. An example of the times:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();
                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }
            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }
            return table;
        }

    Read the article

  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way of using numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
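
    One possible direction, sketched for Python 3 with NumPy (an illustration only; the +1.0 on close just avoids dividing by zero with this toy data): precompute a cumulative volume array so every interval sum becomes a single subtraction, and evaluate all end points for a given start in one vectorised step.

        import numpy as np

        ARRAY_LENGTH = 500_000
        INTERVAL_LENGTH = 15

        close = np.arange(ARRAY_LENGTH, dtype=np.float64) + 1.0
        volume = np.arange(ARRAY_LENGTH, dtype=np.float64)

        # cum[j] - cum[i] == sum(volume[i+1:j+1])
        cum = np.cumsum(volume)

        results = []
        for i in range(len(close) - INTERVAL_LENGTH):
            j = np.arange(i + 1, i + INTERVAL_LENGTH)      # all candidate end points at once
            ret = close[j] / close[i]
            vol = cum[j] - cum[i]
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            for jj, r, v in zip(j[hit], ret[hit], vol[hit]):
                results.append([i, int(jj), float(r), float(v)])

    This keeps the outer loop in Python but removes the per-j interpreter work and the repeated O(n) sum() calls, which is where most of the time goes.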

    Read the article

  • speed up sql INSERTs

    - by sean717
    I have the following method to insert millions of rows of data into a table (I use SQL Server 2008) and it seems slow. Is there any way to speed up the INSERTs? Here is the code snippet - I use the MS Enterprise Library:

        public void InsertHistoricData(List<DataRow> dataRowList)
        {
            string sql = string.Format(
                @"INSERT INTO [MyTable]
                  ([Date],[Open],[High],[Low],[Close],[Volumn])
                  VALUES( @DateVal, @OpenVal, @High, @Low, @CloseVal, @Volumn )");

            DbCommand dbCommand = VictoriaDB.GetSqlStringCommand( sql );
            DB.AddInParameter(dbCommand, "DateVal", DbType.Date);
            DB.AddInParameter(dbCommand, "OpenVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "High", DbType.Currency);
            DB.AddInParameter(dbCommand, "Low", DbType.Currency);
            DB.AddInParameter(dbCommand, "CloseVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "Volumn", DbType.Int32);

            foreach (NasdaqHistoricDataRow dataRow in dataRowList)
            {
                DB.SetParameterValue( dbCommand, "DateVal", dataRow.Date );
                DB.SetParameterValue( dbCommand, "OpenVal", dataRow.Open );
                DB.SetParameterValue( dbCommand, "High", dataRow.High );
                DB.SetParameterValue( dbCommand, "Low", dataRow.Low );
                DB.SetParameterValue( dbCommand, "CloseVal", dataRow.Close );
                DB.SetParameterValue( dbCommand, "Volumn", dataRow.Volumn );
                DB.ExecuteNonQuery( dbCommand );
            }
        }
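
    For contrast, the general batching technique (one prepared statement, one transaction, one round trip for many rows) looks like this with Python's built-in sqlite3 module; this is an illustration of the idea only, not Enterprise Library or SQL Server code, and the table merely mirrors the one in the question:

        import sqlite3

        rows = [("2010-01-04", 10.0, 11.0, 9.5, 10.5, 1000)]   # hypothetical data

        conn = sqlite3.connect("example.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS MyTable
                        (Date TEXT, Open REAL, High REAL, Low REAL, Close REAL, Volumn INTEGER)""")
        with conn:  # a single transaction wraps the whole batch
            conn.executemany(
                "INSERT INTO MyTable (Date, Open, High, Low, Close, Volumn) VALUES (?, ?, ?, ?, ?, ?)",
                rows,
            )
        conn.close()

    On SQL Server the same idea usually means wrapping the loop in a single transaction, or handing the whole list to a bulk-load API such as SqlBulkCopy, rather than paying one round trip per row.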

    Read the article

  • Which torrent client has command line arguments to start/stop downloads?

    - by virpara
    First of all, I want to create a shell script to start/stop downloads in a torrent client. I don't need a CLI, but if you know how I can do that from a CLI using a shell script, then that is okay. I use jDownloader, which is a GUI-based application but has some command line arguments, listed below, which I use to start/stop downloads:

        -h/--help                                Show this help message
        -a/--add-link(s)                         Add links
        -co/--add-container(s)                   Add containers
        -d/--start-download                      Start download
        -D/--stop-download                       Stop download
        -H/--hide                                Don't open Linkgrabber when adding Links
        -m/--minimize                            Minimize download window
        -f/--focus                               Get jD to foreground/focus
        -s/--show                                Show JAC prepared captchas
        -t/--train                               Train a JAC method
        -r/--reconnect                           Perform a Reconnect
        -C/--captcha <filepath or url> <method>  Get code from image using JAntiCaptcha
        -p/--add-password(s)                     Add passwords
        -n --new-instance                        Force new instance if another jD is running

    So I can easily start/stop downloads as follows:

        jdownloader --start-download
        jdownloader --stop-download

    Now I want a torrent client that I can control the same way from a shell script.

    Read the article

  • Is there an apt command to download a deb file to the current directory?

    - by Lekensteyn
    I am often interested in the installation triggers (postinst, postrm) or certain parts (/usr/share or /etc) of packages. Currently, I run the following command to retrieve the source code:

        apt-get source [package-name]

    The downside is that this download is often much bigger than the binary package and does not reflect the installation tree. Right now, I download the packages through http://packages.ubuntu.com/:

        1. Search for [package-name]
        2. Select the package
        3. Click on amd64/i386 for download
        4. Download the actual file

    This takes too long for me, and as someone who really likes the shell, I would like to do something like the following (imaginary) command:

        apt-get get-deb-file [package-name]

    I could not find something like this in the apt-get manual page. The closest I found was the --download-only switch, but this puts the package in /var/cache/apt/archives (which requires root permissions) and not in the current directory.

    Read the article
