Search Results

Search found 9286 results on 372 pages for 'transfer speed'.


  • AS3 microphone recording/saving works, in-flash PCM playback double speed

    - by Lowgain
    I have a working mic recording script in AS3 which I have been able to successfully use to save .wav files to a server through AMF. These files play back fine in any audio player with no weird effects. For reference, here is what I am doing to capture the mic's ByteArray (within a class called AudioRecorder):

        public function startRecording():void {
            _rawData = new ByteArray();
            _microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, _samplesCaptured, false, 0, true);
        }

        private function _samplesCaptured(e:SampleDataEvent):void {
            _rawData.writeBytes(e.data);
        }

    This works with no problems. After the recording is complete I can take the _rawData variable and run it through a WavWriter class, etc. However, if I run this same ByteArray as a sound using the following code, which I adapted from the Adobe cookbook (within a class called WavPlayer):

        public function playSound(data:ByteArray):void {
            _wavData = data;
            _wavData.position = 0;
            _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, _playSoundHandler);
            _channel = _sound.play();
            _channel.addEventListener(Event.SOUND_COMPLETE, _onPlaybackComplete, false, 0, true);
        }

        private function _playSoundHandler(e:SampleDataEvent):void {
            if(_wavData.bytesAvailable <= 0) return;
            for(var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if(_wavData.bytesAvailable > 0)
                    sample = _wavData.readFloat();
                e.data.writeFloat(sample);
            }
        }

    the audio file plays at double speed! I checked recording bitrates and such and am pretty sure those are all correct, and I tried changing the buffer size and whatever other numbers I could think of. Could it be a mono vs stereo thing? Hope I was clear enough here, thanks!
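
    A quick illustration of the mono vs stereo guess (a sketch in Python/numpy rather than AS3, just to show the idea outside Flash): the playback SAMPLE_DATA handler writes stereo pairs, two floats (left, right) per output frame, so feeding it one mono float per frame consumes the capture buffer twice as fast and the clip plays at double speed. If that guess is right, writing each mono sample twice in _playSoundHandler (one writeFloat per channel) should restore the original duration. The function name below is made up for the example.

        import numpy as np

        def mono_to_stereo(mono):
            """Duplicate each mono sample into [L, R, L, R, ...] pairs."""
            return np.repeat(mono, 2)

        mic_samples = np.array([0.1, -0.2, 0.3], dtype=np.float32)  # hypothetical mic floats
        print(mono_to_stereo(mic_samples))  # roughly [ 0.1  0.1 -0.2 -0.2  0.3  0.3]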

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?

    Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
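
    To make the two-pass idea concrete, here is a rough sketch (in Python, with hypothetical names like field and blocks_to_march; the original code is C#) of a coarse pre-pass that culls blocks whose corner samples all sit well below the isolevel. The main pitfall is the margin: a surface can still cross a block whose coarse corners are all below the isolevel if the field changes quickly, so the margin has to cover the field's maximum variation across one block, otherwise the mesh gets holes.

        import itertools

        def blocks_to_march(field, isolevel, origin, block_size, blocks_per_axis, margin):
            """Yield origins of coarse blocks that might contain surface; everything else is culled."""
            for bx, by, bz in itertools.product(range(blocks_per_axis), repeat=3):
                base = (origin[0] + bx * block_size,
                        origin[1] + by * block_size,
                        origin[2] + bz * block_size)
                # Sample only the 8 corners of the coarse block.
                corners = [field(base[0] + dx * block_size,
                                 base[1] + dy * block_size,
                                 base[2] + dz * block_size)
                           for dx, dy, dz in itertools.product((0, 1), repeat=3)]
                if max(corners) >= isolevel - margin:
                    yield base  # run full-resolution marching cubes only inside this block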

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below representing a bus network mapped in Hibernate, accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to speed up performance, or any suggestions for another way to approach this problem of handling a large set of data.

    Coordinate Connections Table (INT, INT, INT) (containing 50,000 coordinate connections):

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate Table (INT, DECIMAL, DECIMAL) (containing 4,700 coordinates):

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop Table (INT, INT, INT) (containing 15,000 stops):

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /** From Coordinate_id value */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /** To Coordinate_id value */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }

    Read the article

  • Why does a conditional not affect query speed?

    - by Telos
    I have a stored procedure that was taking a "long" period of time to execute. The query only needs to return data in one case, so I figured I could check for that case and just return before hitting the actual query. The only problem is that it still takes the same amount of time to execute with an if statement. I have verified that the code inside the if is not executing, and that if I replace the complex query with a simple select the speed is fine... so now I'm confused. Why is the query being slowed down by code that doesn't get executed when the conditional is false? Here's the query itself:

        ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
            @pa_record int,
            @pb_record int
        AS
        BEGIN
            SET NOCOUNT ON;

            declare @ticketRec int

            SELECT @ticketRec = TicketRecord
            FROM eservice_live..v_sdticket
            where TicketRecord = @pa_record
              AND serviceCompanyID = 1139
              AND @pb_record IS NULL

            if @ticketRec IS NULL return

            select record = null,
                   doc_ref = @pa_record,
                   memo_type = 'I',
                   memo = 'Bottler: ' + isnull(Bottler, '') + ' ' +
                          'Sales Loc: ' + isnull(SalesLocation, '') + ' ' +
                          'Outlet Desc: ' + isnull(OutletDesc, '') + ' ' +
                          'City: ' + isnull(OutletCity, '') + ' ' +
                          'EquipNo: ' + isnull(EquipNo, '') + ' ' +
                          'SerialNo: ' + isnull(SerialNo, '') + ' ' +
                          'PhaseNo: ' + isnull(cast(PhaseNo as varchar(255)), '') + ' ' +
                          'StaticIP: ' + isnull(StaticIP, '') + ' ' +
                          'Air Card: ' + isnull(AirCard, '')
            FROM eservice_live..v_SDExtendedInfoField ef
            JOIN eservice_live..CokeSNList csl ON ef.valueText = csl.SerialNo
            where ef.docType = 'CLH'
              AND ef.docref = @ticketRec
              AND ef.ExtendedDocNumber = 5

            SET NOCOUNT OFF;
        END

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below representing a bus network mapped in Hibernate, accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to speed up performance, or any suggestions for another way to approach this problem of handling a large set of data.

    Coordinate Connections Table (INT, INT, INT, DOUBLE) (containing 50,000 coordinate connections):

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate Table (INT, DECIMAL, DECIMAL) (containing 4,700 coordinates):

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop Table (INT, INT, INT) (containing 15,000 stops):

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6 - 45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000ms and 45,000ms. The majority of the time is spent on the "MoveNext" and "GetEnumerator" calls. An example of the timings:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };

            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();

                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }

            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }

            return table;
        }

    Read the article

  • How to speed up a python nested loop?

    - by erich
    I'm performing a nested loop in python that is included below. This serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized, arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case the ratio of the close values is greater than 1.0001 and less than 1.5 and the summed volume is greater than 100).

    My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native python?

    Alternatively, is there a better way to do this in general (e.g. will it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
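
    One way to get the "window function" effect with numpy (a sketch, not a drop-in replacement; it keeps the question's Python 2 style with xrange) is to vectorise the inner loop: for each start index i, compute every forward ratio and summed volume at once, using a cumulative sum so that sum(volume[i+1:j+1]) becomes a subtraction of two precomputed values, then filter with a boolean mask. Only the outer loop remains in Python. Note that with this example data close[0] is 0, so the first iteration divides by zero; numpy turns that into a warning and inf rather than the exception plain Python raises.

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15
        close = np.arange(ARRAY_LENGTH, dtype='float64')
        volume = np.arange(ARRAY_LENGTH, dtype='float64')

        # cumvol[k] == volume[:k].sum(), so sum(volume[i+1:j+1]) == cumvol[j+1] - cumvol[i+1]
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            j = np.arange(i + 1, i + INTERVAL_LENGTH)   # all forward offsets at once
            ret = close[j] / close[i]
            vol = cumvol[j + 1] - cumvol[i + 1]
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            for jj, r, v in zip(j[hit], ret[hit], vol[hit]):
                results.append([i, int(jj), r, v])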

    Read the article

  • speed up sql INSERTs

    - by sean717
    I have the following method to insert millions of rows of data into a table (I use SQL Server 2008) and it seems slow. Is there any way to speed up the INSERTs? Here is the code snippet - I use the MS Enterprise Library:

        public void InsertHistoricData(List<DataRow> dataRowList)
        {
            string sql = string.Format( @"INSERT INTO [MyTable]
                ([Date],[Open],[High],[Low],[Close],[Volumn])
                VALUES( @DateVal, @OpenVal, @High, @Low, @CloseVal, @Volumn )");

            DbCommand dbCommand = VictoriaDB.GetSqlStringCommand( sql );
            DB.AddInParameter(dbCommand, "DateVal", DbType.Date);
            DB.AddInParameter(dbCommand, "OpenVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "High", DbType.Currency );
            DB.AddInParameter(dbCommand, "Low", DbType.Currency);
            DB.AddInParameter(dbCommand, "CloseVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "Volumn", DbType.Int32);

            foreach (NasdaqHistoricDataRow dataRow in dataRowList)
            {
                DB.SetParameterValue( dbCommand, "DateVal", dataRow.Date );
                DB.SetParameterValue( dbCommand, "OpenVal", dataRow.Open );
                DB.SetParameterValue( dbCommand, "High", dataRow.High );
                DB.SetParameterValue( dbCommand, "Low", dataRow.Low );
                DB.SetParameterValue( dbCommand, "CloseVal", dataRow.Close );
                DB.SetParameterValue( dbCommand, "Volumn", dataRow.Volumn );
                DB.ExecuteNonQuery( dbCommand );
            }
        }
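
    For scale, the usual cost with this pattern is the one-round-trip-per-row ExecuteNonQuery inside the loop rather than the INSERT statement itself. As a language-swapped illustration only (Python with pyodbc, which is not part of the question's stack; the connection string and table name are placeholders), sending the parameter sets as a batch is the same idea that bulk loading or table-valued parameters give you in .NET:

        import pyodbc

        # Placeholder connection string; adjust server/database/credentials as needed.
        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        cursor = conn.cursor()
        cursor.fast_executemany = True  # send parameter batches instead of row-at-a-time

        rows = [("2010-05-01", 10.0, 11.0, 9.5, 10.5, 100000)]  # (Date, Open, High, Low, Close, Volumn)
        cursor.executemany(
            "INSERT INTO MyTable ([Date],[Open],[High],[Low],[Close],[Volumn]) VALUES (?, ?, ?, ?, ?, ?)",
            rows)
        conn.commit()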

    Read the article

  • Very very slow transfer speeds between Windows 7 and samba server running on Ubuntu 11.10/12.04 minimal

    - by kuzyt
    As mentioned in the title I tried transferring files between Windows 7 and the samba server running on both Ubuntu 11.10 and 12.04 but both showed very slow transfer speeds. Can someone please guide me in the right direction to debug this problem ? wget --output-document=/dev/null http://tokyo1.linode.com/100MB-tokyo.bin --2012-08-21 22:02:17-- http://tokyo1.linode.com/100MB-tokyo.bin Resolving tokyo1.linode.com (tokyo1.linode.com)... 106.187.33.12 Connecting to tokyo1.linode.com (tokyo1.linode.com)|106.187.33.12|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 104857600 (100M) [application/octet-stream] Saving to: `/dev/null' 8% [=============> ] 8,923,980 64.8K/s eta 15m 0s wlan0 IEEE 802.11abgn ESSID:"TNET" Mode:Managed Frequency:2.462 GHz Access Point: 58:6D:8F:26:20:7A Bit Rate=117 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=57/70 Signal level=-53 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:101 Invalid misc:2448 Missed beacon:0 03:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01) Subsystem: Atheros Communications Inc. Device 3112 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+ Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at fea00000 (64-bit, non-prefetchable) [size=128K] Expansion ROM at fea20000 [disabled] [size=64K] Capabilities: [40] Power Management version 3 Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [50] MSI: Enable- Count=1/4 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [70] Express (v2) Endpoint, MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <64us ClockPM- Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- DevCap2: Completion Timeout: Not Supported, TimeoutDis+ DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS- Compliance De-emphasis: -6dB LnkSta2: Current De-emphasis Level: -6dB Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [140 v1] Virtual Channel Caps: 
LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [300 v1] Device Serial Number 00-00-00-00-00-00-00-00 Kernel driver in use: ath9k Kernel modules: ath9k

    Read the article

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews that it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain the system in great detail. The main difference between what they say and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page - it simply adds the pageview to the database and creates an object in an unsynchronised list. To put it simply, when a customer views a page it no longer loads all the pageviews into objects, but the admin frontend still does.

    I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue then the right hand column will display statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right hand column has to loop through ALL the pageviews - this ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews so the difference is quite big.

    So my question is: why is it that the admin frontend loads so slowly while using $(bla).load(bla); is so fast? I mean whatever method jQuery uses, can't browsers use this method too and load pages super-fast? Obviously not, as someone would've done that by now - but I am interested to know just why the difference is so big. Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences?

    Thanks in advance, Regards, Richard

    Read the article

  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, We're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP webservice. But ... our servers are located in Belgium and the webservice we connect to is located in San Francisco, so it's a long distance connection to say the least. Our website is PHP powered, using PHP's built-in SoapClient class. On average a call to the webservice takes 0.7 seconds and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed.

    This is the code which instantiates the SoapClient; what I'm looking for now are other ways/methods to improve the speed of single requests. Does anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));

                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }

            $this->client = $client;
            return $this->client;
        }

    Read the article

  • asp.net mvc postback

    - by user266909
    I have a controller with the following two Edit methods. The edit form displays correctly with all additional dropdown lists from the FormViewModel. However, when I changed some field values and submitted the form, none of the changed fields were saved. The fields in the postback collection have default or null values. I have another edit form which updates another table; on submit, the changed values are saved. Does anyone know why?

        // GET: /Transfers/Edit/5
        public ActionResult Edit(int id)
        {
            Transfer transfer = myRepository.GetTransfer(id);
            if (transfer == null)
                return View("NotFound");

            return View(new TransferFormViewModel(transfer));
        }

        //
        // POST: /Transfers/Edit/5
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int id, Transfer collection)
        {
            Transfer transfer = vetsRepository.GetTransfer(id);
            if (transfer == null)
                return View("NotFound");
            else
            {
                try
                {
                    UpdateModel(transfer);
                    vetsRepository.Save();
                    return RedirectToAction("Details", new { id = transfer.TransfersID });
                }
                catch
                {
                    ModelState.AddModelErrors(transfer.GetRuleViolations());
                    return View(new TransferFormViewModel(transfer));
                }
            }
        }

    Read the article

  • USB Drive Not recognized

    - by user36582
    My friend's pen drive, which was working very well just a few days ago, is not being recognized after being used on a virus-infected machine. It does not show up in fdisk -l or lsusb. However, in dmesg I can see the following:

        [ 977.300013] usb 5-2: new full speed USB device using uhci_hcd and address 2
        [ 977.420014] usb 5-2: device descriptor read/64, error -71
        [ 977.644023] usb 5-2: device descriptor read/64, error -71
        [ 977.860013] usb 5-2: new full speed USB device using uhci_hcd and address 3
        [ 977.980013] usb 5-2: device descriptor read/64, error -71
        [ 978.204013] usb 5-2: device descriptor read/64, error -71
        [ 978.420013] usb 5-2: new full speed USB device using uhci_hcd and address 4
        [ 978.828015] usb 5-2: device not accepting address 4, error -71
        [ 978.940015] usb 5-2: new full speed USB device using uhci_hcd and address 5
        [ 979.348013] usb 5-2: device not accepting address 5, error -71
        [ 979.348292] hub 5-0:1.0: unable to enumerate USB device on port 2
        [ 1017.848015] usb 5-2: new full speed USB device using uhci_hcd and address 6
        [ 1017.968012] usb 5-2: device descriptor read/64, error -71
        [ 1018.192017] usb 5-2: device descriptor read/64, error -71
        [ 1018.408014] usb 5-2: new full speed USB device using uhci_hcd and address 7
        [ 1018.528012] usb 5-2: device descriptor read/64, error -71
        [ 1018.752023] usb 5-2: device descriptor read/64, error -71
        [ 1018.968012] usb 5-2: new full speed USB device using uhci_hcd and address 8
        [ 1019.376019] usb 5-2: device not accepting address 8, error -71
        [ 1019.488011] usb 5-2: new full speed USB device using uhci_hcd and address 9
        [ 1019.896016] usb 5-2: device not accepting address 9, error -71
        [ 1019.896308] hub 5-0:1.0: unable to enumerate USB device on port 2
        [ 1049.984016] usb 5-1: new full speed USB device using uhci_hcd and address 10
        [ 1050.104014] usb 5-1: device descriptor read/64, error -71
        [ 1050.328014] usb 5-1: device descriptor read/64, error -71
        [ 1050.544014] usb 5-1: new full speed USB device using uhci_hcd and address 11
        [ 1050.664018] usb 5-1: device descriptor read/64, error -71
        [ 1050.888019] usb 5-1: device descriptor read/64, error -71
        [ 1051.104025] usb 5-1: new full speed USB device using uhci_hcd and address 12
        [ 1051.512014] usb 5-1: device not accepting address 12, error -71
        [ 1051.624101] usb 5-1: new full speed USB device using uhci_hcd and address 13
        [ 1052.032014] usb 5-1: device not accepting address 13, error -71
        [ 1052.032991] hub 5-0:1.0: unable to enumerate USB device on port 1

    What do these errors actually mean, and how can I get this pen drive back to work?

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. And in the end we are copying only about 3%-20% of databases which are as large as 500GB.

    I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups, etc. (It actually helped find some bad stored procs.) Overall I have a working script which is lacking in performance (and at times, times out) and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell.

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120;
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable;
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader();
        $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting the $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. And what we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?

    Read the article

  • How to speed up the code?

    - by kaushik
    i have very huge code about 600 lines plus. cant post the whole thing here. but a particular code snippet is taking so much time,leading to problems. here i post that part of code please tell me what to do speed up the processing.. please suggest the part which may be the reason and measure to improve them if this small part of code is understandable. using_data={} def join_cost(a , b): global using_data #print a #print b save_a=[] save_b=[] print 1 #for i in range(len(m)): #if str(m[i][0])==str(a): save_a=database_index[a] #for i in range(len(m)): # if str(m[i][0])==str(b): #print 'save_a',save_a #print 'save_b',save_b print 2 save_b=database_index[b] using_data[save_a[0]]=save_a s=str(save_a[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') print 3 for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) end_time=save_a[4] #print end_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(end_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 q=[] print 4 l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') q=k3.split(' ') #print q print 5 s=str(save_b[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) strt_time=save_b[3] #print strt_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(strt_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 w=[] l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') w=k3.split(' ') #print w cost=0 for i in range(12): #print q[i] #print w[i] h=float(q[i])-float(w[i]) cost=cost+math.pow(h,2) j_cost=math.sqrt(cost) #print cost return j_cost def target_cost(a , b): a=(b+1)*3 b=(a+1)*2 t_cost=(a+b)*5/2 return t_cost r1='shht:ra_77' r2='grx_18' g=[] nodes=[] nodes=nodes+[[r1]] for i in range(len(y_in_db_format)): g=y_in_db_format[i] #print g #print g[0] g.remove(str(g[0])) nodes=nodes+[g] nodes=nodes+[[r2]] print nodes print "lenght of nodes",len(nodes) lists=[] #lists=lists+[r1] for i in range(len(nodes)): for j in range(len(nodes[i])): lists=lists+[nodes[i][j]] #lists=lists+[r2] print lists distance={} for i in range(len(lists)): if i==0: distance[str(lists[i])]=0 else: distance[str(lists[i])]=long(123231223) #print distance group_dist=[] infinity=long(123232323) for i in range(len(nodes)): distances=[] for j in range(len(nodes[i])): #distances=[] if i==0: distances=distances+[[nodes[i][j], 0]] else: distances=distances+[[nodes[i][j],infinity]] group_dist=group_dist+[distances] #print distances print "group_distances",group_dist #print "check",group_dist[0][0][1] #costs={} #for i in range(len(lists)): #if i==0: # costs[str(lists[i])]=1 #else: # costs[str(lists[i])]=get_selfcost(lists[i]) path=[] for i in range(len(nodes)): mini=[] if i!=(len(nodes)-1): #temp=long(123234324) #Now calculate the cost between the current node and each of its neighbour for k in range(len(nodes[(i+1)])): for j in range(len(nodes[i])): current=nodes[i][j] #print "current_node",current 
j_distance=join_cost( current , nodes[i+1][k]) #t_distance=target_cost( current , nodes[i+1][k]) t_distance=34 #print distance #print "distance between current and neighbours",distance total_distance=(.5*(float(group_dist[i][j][1])+float(j_distance))+.5*(float(t_distance))) #print "total distance between the intial_nodes and current neighbour",total_distance if int(group_dist[i+1][k][1]) > int(total_distance): group_dist[i+1][k][1]=total_distance #print "updated distance",group_dist[i+1][k][1] a=current #print "the neighbour",nodes[i+1][k],"updated the value",a mini=mini+[[str(nodes[i+1][k]),a]] print mini
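
    From the posted snippet, one observable cost is that join_cost() re-opens and re-parses the same .pm and .mcep files on every call, and it is called inside the nested loops over nodes and neighbours. Below is a small sketch of memoising the parsed file contents so each file is read once rather than once per (current, neighbour) pair; the names are made up, and only the .pm layout (a 6-line header followed by space-separated rows) is taken from the snippet. The .mcep files could be cached the same way.

        _file_cache = {}

        def load_pm_rows(path):
            """Parse a .pm-style file once and cache the split rows by path."""
            if path not in _file_cache:
                with open(path) as f:
                    for _ in range(6):          # skip the 6-line header, as join_cost does
                        f.readline()
                    _file_cache[path] = [line.rstrip('\n').split(' ')
                                         for line in f if line.strip()]
            return _file_cache[path]

        # join_cost would then index into load_pm_rows(p) instead of re-reading the file each call.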

    Read the article

  • ACPI Throttling in Ubuntu

    - by Evan
    I'm looking to throttle my CPU through ACPI. I've read up on it, but I keep receiving "permission denied" errors. I have 8 available throttling states. Here are the outcomes of my attempts:

        evan@evan-laptop:/proc/acpi/processor/CPU0$ echo 3 > /proc/acpi/processor/CPU0/throttling
        bash: /proc/acpi/processor/CPU0/throttling: Permission denied
        evan@evan-laptop:/proc/acpi/processor/CPU0$ sudo echo 3 > /proc/acpi/processor/CPU0/throttling
        bash: /proc/acpi/processor/CPU0/throttling: Permission denied

    EDIT: For reference, I am running Ubuntu Karmic with an Intel Core Duo T2500, with ACPI enabled.

    Read the article

  • Which Linux is the most efficient?

    - by quandary
    Simple question: there are a gazillion Linux distributions out there. Which one (distribution plus window manager) makes technically the most efficient use of my (aging) computer? I have approx. 1 GB RAM, a 1.6 GHz processor, and a 120 GB hard drive. I develop applications (C++/.NET/Mono/ASP/PostgreSQL). Usually, I prefer distros with apt-get. Does anybody know which one takes the most care of my limited RAM, and which one is the fastest/slimmest of them all while still having a decent repo?

    Read the article

  • Bluray Drives: 2x vs 4x vs 6x vs 8x read/write speed

    - by Wesley
    Hi all, I couldn't find a duplicate question, but I was wondering what the differences are between the various read/write speeds for Bluray drives. I'm planning on buying one for a build but don't know if I can cheap out on a 2x Bluray drive or should spend more money on a quality 8x Bluray drive. Will I just experience more lag/buffering time for Bluray discs on a 2x drive and none on a 6x or 8x? Thanks in advance.

    Read the article

  • Our wi-fi at work is ridiculously slow, will adding more range extenders improve it?

    - by john
    At work, we have two wireless networks (Work1 and Work2); Work2 is used downstairs and Work1 is used upstairs. However, both are notoriously slow. The connection is better when we are wired in, but unfortunately, because our building is very old and our company is growing very fast, most employees are not seated near the walls where the ethernet cables are. I had Cox, our ISP, run a bandwidth utilization test and it doesn't seem like we are capping out on upstream/downstream, which leads me to believe that it's strictly an issue with the wireless networks (which were implemented before I got there). The wireless networks are both Apple Airport Extremes. Is there anything I can do to improve the situation for everyone? Speeds are extremely slow, and the connection sometimes drops out.

    Read the article

  • Do different operating systems have different read and write speeds?

    - by Ivan
    If I have two different operating systems, such as Windows 8 and Ubuntu, running on the same hardware, will the two operating systems have different read and write speeds? My guess is that there would be minimal difference between operating systems in read and write speeds to the hard disk, since the major limiting factor is seeking; however, different operating systems may use different file systems in an attempt to reduce seek time on the hard disk. Likewise, I'm sure that modern operating systems will not actually write directly to the hard disk, and will instead just keep the data in memory, marked with a dirty bit. Are there any studies that show differences in read and write speeds between OSs? Or would the file system being used by the OS matter more than the OS itself?

    Read the article

  • Any reference for CPU world statistics?

    - by Áxel Costas Pena
    I am looking for any reference about computer power statistics across the world. My main interest is in real computing capabilities, so I'd prefer information about real processor power, and even better if it also includes other critical hardware statistics, like RAM; but if that isn't possible, statistics about brand/model distribution would also be useful. I've Googled for some minutes and found nothing related.

    Read the article

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them in GIF format. I have no control over the external files, but have to use them (with permission). When I check the site using Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all the files are external links. I have already set the expiration date for local files.

    Read the article

  • Cisco ASA 5505 and slow download speeds for Apple devices

    - by James
    For traffic routed through my ASA 5505, downloads on all Apple devices, including an AppleTV, an iPad (gen 1), an iMac, and a MacBook Pro, are very slow. speedof.me shows less than 1 Mbps download (where I should have 20 Mbps or more), yet on any Windows-based device the download speeds are in excess of 20 Mbps. The Windows devices, as well as the iMac and MacBook Pro machines, are connected via ethernet cable. Why are Apple devices experiencing such pain? Is it an ASA setting, or something else? Thanks.

    Read the article
