Search Results

Search found 23762 results on 951 pages for 'network speed'.


  • How can a large number of developers write software without either a cumbersome process or poor quality?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check - either automated or manual. As a result, the process of delivering software is becoming ever more burdensome. So that requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly - the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • Search relevance from XML docs (XQuery?) vs MySQL

    - by Marius
    Hello there, I have a website where content is saved in XML documents, all with the same structure. I need a search engine that can pick out the documents with the highest relevance for the keywords a user searches for. I thought it could (?) be a good idea to do this with XQuery rather than storing the information twice (in the XML docs + a MySQL database) and querying the MySQL database for relevance searches. Is XQuery any good for this, and how, and what speed can I expect on 1000+ documents of about 7 KB each? Thank you for your time. Kind regards
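    For a sense of what this looks like in practice, here is a rough sketch of keyword scoring with XQuery driven from Java via Saxon's s9api (Saxon, the docs folder, and the scoring query are all assumptions - counting text nodes that contain the keyword is a crude stand-in for real relevance ranking):

        import java.io.File;

        import net.sf.saxon.s9api.DocumentBuilder;
        import net.sf.saxon.s9api.Processor;
        import net.sf.saxon.s9api.QName;
        import net.sf.saxon.s9api.XQueryCompiler;
        import net.sf.saxon.s9api.XQueryEvaluator;
        import net.sf.saxon.s9api.XQueryExecutable;
        import net.sf.saxon.s9api.XdmAtomicValue;
        import net.sf.saxon.s9api.XdmNode;

        // Crude relevance scoring over a directory of same-structured XML docs:
        // score = number of text nodes containing the keyword.
        public class XmlSearch {
            public static void main(String[] args) throws Exception {
                String keyword = args[0].toLowerCase();
                Processor proc = new Processor(false);
                DocumentBuilder builder = proc.newDocumentBuilder();
                XQueryCompiler comp = proc.newXQueryCompiler();
                XQueryExecutable exec = comp.compile(
                    "declare variable $kw external; " +
                    "count(//text()[contains(lower-case(.), $kw)])");
                File[] docs = new File("docs").listFiles();  // hypothetical folder of the XML files
                for (File f : docs) {
                    XdmNode doc = builder.build(f);          // parse one document
                    XQueryEvaluator ev = exec.load();
                    ev.setExternalVariable(new QName("kw"), new XdmAtomicValue(keyword));
                    ev.setContextItem(doc);
                    long score = ((XdmAtomicValue) ev.evaluate().itemAt(0)).getLongValue();
                    if (score > 0) System.out.println(score + "\t" + f.getName());
                }
            }
        }

    Sorting the printed scores gives a simple ranking. At ~7 KB per document, re-parsing 1000+ files on every query is the likely bottleneck, so an XML database with a persistent index (eXist, BaseX) may matter more than the query language itself.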

    Read the article

  • Building SL4 + RIAServices app takes too long on VS2010.

    - by adlanelm
    Got a Win7 box with VS2010 Premium installed on it. Building desktop apps works just fine, but we've got this solution with 15 SL4 and 21 desktop projects... Building the SL part of it takes too long. This is very irritating and encourages me to drop TDD, since every time I run a test it takes ~3 seconds for msbuild to find out that nothing changed and the project should be skipped. The projects are very small, there's nothing fancy in them, and we didn't have any problems before we switched from VS2008+SL3. I've heard people complaining about VS2010 speed in general, but nothing about SL4 build time. Is anyone experiencing the same problems, and is there any workaround for this?

    Read the article

  • Is Scala functional programming slower than traditional coding?

    - by Fred Haslam
    In one of my first attempts to create functional code, I ran into a performance issue. I started with a common task - multiply the elements of two arrays and sum up the results:

        var first:Array[Float] ...
        var second:Array[Float] ...
        var sum = 0f;
        for (ix <- 0 until first.length)
          sum += first(ix) * second(ix);

    Here is how I reformed the work:

        sum = first.zip(second).map{ case (a, b) => a * b }.reduceLeft(_ + _)

    When I benchmarked the two approaches, the second method takes 40 times as long to complete! Why does the second method take so much longer? How can I reform the work to be both speed efficient and use functional programming style?

    Read the article

  • PHP image resize on the fly vs storing resized images

    - by Pablo
    I'm building an image sharing site and would like to know the pros and cons of resizing images on the fly with PHP versus storing the resized images. Which is faster? Which is more reliable? How big is the gap between the two methods in speed and performance? Please note that either way the images go through a PHP script, for statistics like views or to check whether hotlinking is allowed etc., so it's not like there will be a direct link to the images if I opt to store the resized images. I'd appreciate your comments or any helpful links on the subject. Thanks.

    Read the article

  • Query total page count via SNMP HP Laserjet

    - by Tim
    I was asked to get hold of the total page counts for the 100+ printers we have at work. All of them are HP Laser or Business Jets of some description and the vast majority are connected via some form of HP JetDirect network card/switch. After many hours of typing in IP addresses and copying and pasting the relevant figure into Excel, I have now been asked to do this on a weekly basis. This led me to think there must be an easier way; as an IT professional I can surely work out some time-saving method to solve this issue. Suffice it to say I do not feel very professional now, after a day or so of trying to make SNMP work for me!

    From what I understand, the first thing is to enable SNMP on the printer. Done. Next I would need something to query the SNMP bit. I decided to go open source and free, and someone here recommended net-snmp as a decent tool (I would like to have just added the printers as nodes in SolarWinds, but we are somewhat tight on licences apparently). Next I need the name of the MIB. For this I believe HP-LASERJET-COMMON-MIB has the correct information in it. Downloaded this and added it to net-snmp. Now I need the OID, which I believe after much scouring is printed-media-simplex-count (we have no duplex printers - that we are interested in, at least). Running the following command yields the following demoralising output:

        snmpget -v 2c -c public 10.168.5.1 HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1
        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1:

    (the OID was derived from running:

        snmptranslate -IR -On printed-media-simplex-count
        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        .1.3.6.1.2.1.1.16.1.1.1

    )

    Am I barking up the wrong tree completely with this? My aim was to script it all to output to a file for all the IP addresses of the printers and then plonk that in Excel for my lords and masters to digest at their leisure. I have a feeling I am using either the wrong MIB or the wrong OID from said MIB (or both). Does anyone have any pointers on this for me? Or should I give up and go back to navigating each printer's web page individually (hoping not)?
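    For the scripted weekly run, a minimal sketch of the loop using the SNMP4J Java library (SNMP4J is an assumption - any SNMP library, or plain snmpget in a batch file, would do). Rather than the HP private MIB, it queries prtMarkerLifeCount from the standard Printer MIB (RFC 3805), which most JetDirect cards answer; the .1.1 instance suffix is the usual single-marker case and is an assumption:

        import org.snmp4j.CommunityTarget;
        import org.snmp4j.PDU;
        import org.snmp4j.Snmp;
        import org.snmp4j.event.ResponseEvent;
        import org.snmp4j.mp.SnmpConstants;
        import org.snmp4j.smi.GenericAddress;
        import org.snmp4j.smi.OID;
        import org.snmp4j.smi.OctetString;
        import org.snmp4j.smi.VariableBinding;
        import org.snmp4j.transport.DefaultUdpTransportMapping;

        public class PageCounts {
            // prtMarkerLifeCount from the standard Printer MIB (RFC 3805), instance .1.1
            static final OID LIFE_COUNT = new OID("1.3.6.1.2.1.43.10.2.1.4.1.1");

            public static void main(String[] args) throws Exception {
                Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
                snmp.listen();
                for (String ip : args) {  // printer IPs passed on the command line
                    CommunityTarget target = new CommunityTarget();
                    target.setCommunity(new OctetString("public"));
                    target.setAddress(GenericAddress.parse("udp:" + ip + "/161"));
                    target.setVersion(SnmpConstants.version2c);
                    target.setRetries(1);
                    target.setTimeout(1500);
                    PDU pdu = new PDU();
                    pdu.setType(PDU.GET);
                    pdu.add(new VariableBinding(LIFE_COUNT));
                    ResponseEvent resp = snmp.get(pdu, target);
                    PDU r = resp.getResponse();
                    // one CSV line per printer: ip,pagecount (or a timeout marker)
                    System.out.println(ip + "," + (r == null ? "timeout" : r.get(0).getVariable()));
                }
                snmp.close();
            }
        }

    Run as java PageCounts 10.168.5.1 10.168.5.2 ... > counts.csv and the weekly Excel paste becomes a file import.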

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet - or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup:

        QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest)
        Server 2003 with iSCSI Initiator version 2.08 - build 3825

    (I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and thus onto the NAS.) I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check transfer over iSCSI without TrueCrypt in between, with a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time.

    Update: I copied a set of files, the ones I was having issues with, to the server, then from there I copied those into two places within the TrueCrypt volume (mounted on the NAS):

        1. A separate directory created in the root of the volume
        2. The same initial directory I was using in the first instance

    Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - a day wasted as well. While I had that setup, though, I copied my entire file set to the volume files and all files copied without error - over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?

    Read the article

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470. A known problem with these is that sometimes, when their power cable is plugged in, they somehow lose all BIOS settings. This happened yesterday. After this happens Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though: changing the BIOS setting to 'RAID OFF' makes it boot without problems. Note that in the meantime nothing config-related was changed on machine A. It wasn't even on. Indeed, after doing this everything is fine. Everything includes all normal operations: remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies/registry keys/...) but to no avail.

        > ping -a 192.168.2.2
        Pinging A [192.168.2.2] with 32 bytes of data:
        Reply from 192.168.2.2: bytes=32 time<1ms TTL=128

        > net view \\192.168.2.2
        System error 5 has occurred.
        Access is denied.

        > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD
        System error 86 has occurred.
        The specified network password is not correct.

    A solution to this would be great: I haven't been able to do any work since yesterday ;]

    Update: after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs), the shared folders work. Putting the disk back into B, the same problem remains. Why does this depend on hardware, and more importantly, on which hardware?

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to be put on a file server so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, to protect against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

        - Files can often be transferred from a nearer machine.
        - Disk space grows automatically as your company does.
        - Should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

        - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
        - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
        - Still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?
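    To pin the idea down, a toy Java sketch of the central server's bookkeeping as described above (all names are hypothetical - this models the envisioned design, not any existing product): the server stores no file contents, only which client PCs hold a replica, and picks holders in every location group so each building keeps a full copy:

        import java.util.*;

        // Toy model of the envisioned metadata server. It holds no file data,
        // only a map from each file to the set of client PCs with a replica.
        public class ReplicaDirectory {
            private final Map<String, Set<String>> replicas = new HashMap<String, Set<String>>();       // path -> peer IDs
            private final Map<String, List<String>> peersByGroup = new HashMap<String, List<String>>(); // location group -> peers
            private final Random rng = new Random();

            public void registerPeer(String peerId, String group) {
                List<String> peers = peersByGroup.get(group);
                if (peers == null) peersByGroup.put(group, peers = new ArrayList<String>());
                peers.add(peerId);
            }

            // Place a file: pick copiesPerGroup holders in every location group,
            // so one building's disaster never destroys the only copy.
            public void placeFile(String path, int copiesPerGroup) {
                Set<String> chosen = new HashSet<String>();
                for (List<String> peers : peersByGroup.values()) {
                    List<String> shuffled = new ArrayList<String>(peers);
                    Collections.shuffle(shuffled, rng);
                    chosen.addAll(shuffled.subList(0, Math.min(copiesPerGroup, shuffled.size())));
                }
                replicas.put(path, chosen);
                // a real system would now instruct these peers to pull the file
            }

            // A client asking for a file gets the peer list and pulls from the nearest.
            public Set<String> locate(String path) {
                Set<String> holders = replicas.get(path);
                return holders == null ? Collections.<String>emptySet() : holders;
            }
        }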

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.

    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of the client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (this happens everywhere, 100% of the time, and didn't happen before the update on the DC). Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.

    What I've tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) the client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the client seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change - on the client I pushed from, or on any other. The error still appears.

    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?

    Read the article

  • Boosting my GA with Neural Networks and/or Reinforcement Learning

    - by AlexT
    As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble I've got a genetic algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze. That being said, the GA alone is okay, but I'd like to beef it up with a neural network, even though I have no real working knowledge of neural networks (no formal theoretical CS education). After doing a bit of reading on the subject I found that a neural network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as

        1 0 0 1 0 1 0 1 0 1 1 1 0 0 ...

    How could I use a neural network (I'm assuming an MLP?) to train and improve my genome?

    In addition to this, as I know nothing about neural networks, I've been looking into implementing some form of reinforcement learning using my maze matrix (2 dimensional array), although I'm a bit stuck on what the following algorithm wants from me (from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm):

        1. Set parameter gamma, and environment reward matrix R
        2. Initialize matrix Q as zero matrix
        3. For each episode:
           - Select random initial state
           - Do while not reach goal state
             - Select one among all possible actions for the current state
             - Using this possible action, consider going to the next state
             - Get maximum Q value of this next state based on all possible actions
             - Compute Q(state, action) = R(state, action) + gamma * max[Q(next state, all actions)]
             - Set the next state as the current state
           End Do
        End For

    The big problem for me is implementing the reward matrix R, and what a Q matrix exactly is, and getting the Q value. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-learning algorithm? If someone could help out by explaining what I would need to do to implement the above, preferably in Java although C# would be nice too, possibly with some source code examples, it'd be appreciated.
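    A minimal sketch of that algorithm in Java (the state/action encoding is an assumption borrowed from the linked tutorial: states are maze cells numbered 0..n-1, "action a" means "move to state a", and R[s][a] = -1 marks impossible moves):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Minimal tabular Q-learning sketch. R[s][a] is the reward matrix
        // (e.g. 100 for a move that reaches the goal, 0 otherwise, -1 for
        // illegal moves); Q starts as a zero matrix of the same shape.
        public class QLearning {
            static final double GAMMA = 0.8;   // the "parameter" in step 1
            static final Random RNG = new Random();

            static void train(double[][] R, double[][] Q, int goal, int episodes) {
                int states = R.length;
                for (int e = 0; e < episodes; e++) {
                    int s = RNG.nextInt(states);              // random initial state
                    while (s != goal) {                       // do while not goal state
                        List<Integer> actions = new ArrayList<Integer>();
                        for (int a = 0; a < R[s].length; a++)
                            if (R[s][a] >= 0) actions.add(a); // legal moves only
                        int a = actions.get(RNG.nextInt(actions.size()));
                        int next = a;                         // action a = "move to state a"
                        double maxQ = 0;
                        for (double q : Q[next]) maxQ = Math.max(maxQ, q);
                        Q[s][a] = R[s][a] + GAMMA * maxQ;     // the "Compute" step
                        s = next;                             // next state becomes current
                    }
                }
            }
        }

    After training, following the largest Q value out of each state traces a path to the goal.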

    Read the article

  • In Rails, how to speed up machinist tests?

    - by Bryan Shen
    I'm replacing test fixtures with Machinist. But using Machinist to set up test data is very slow, because whenever a test method is run, new data are created by Machinist and saved to the database. Is there any way to cache the data in memory so that using Machinist isn't so slow? Thanks, Bryan

    Read the article

  • Speed Problem - Animating Height Change in UITableView

    - by Travis
    So I have a huge UITableView, between 1000 and 3000 rows. Each row needs to, when selected, expand to include several buttons. I do this by rendering the buttons below the cell, enabling clipping on the cells, and then just animating a change in height when they're selected. So I have heightForRowAtIndexPath: checking if the row is selected; if so, it gives a different height value. To do the animation, I have the following:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [self.tableView beginUpdates];
            [self.tableView endUpdates];
        }

    This all works wonderfully until I get around 1000 rows, at which point the animations still work and are still quite smooth, but they lag by a second or two. My first thought was a memory problem, but everything is being autoreleased and upon inspection my application isn't even receiving a memory warning message. So... any ideas on what this might be and how I can erase this lag between selection and change in cell height? Oh, and one last thing - right now I'm testing this on the iPad, so if this is a problem there, I expect it to be worse on our other target devices (iPhone and iPod).

    Read the article

  • Home network with Windows 7 as router

    - by Michael
    Background: I have tried to use routers, but so far none of them can handle the bandwidth; the number of connections is eventually limited by the hardware resources, so overall the home routers are decreasing the internet speed. I went through DD-WRT and stuff like that.

    Question: What I want is to use my Windows 7 PC as the router. It has 2 LAN cards. I'm going to connect another 2 desktop PCs to this router-PC, plus a notebook through a wireless router. The main question is: what is the most efficient way to turn this Windows 7 box (and I need Windows for native NTFS support) into a router with NAT/routing/firewall functionality? Is there any routing software recommended for this purpose, or should I just use Windows' native "Internet Sharing"? I'm going to run SIP phones in the LAN, so I need a friendly NAT (full cone, perhaps). Also I'm going to have an FTP server on that Windows 7 "server" PC. As the firewall I'm thinking about Comodo. I need to drop all incoming traffic unless explicitly allowed.

    Read the article

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is it really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:

        1. Compute the index of the function pointer location in the vtbl.
        2. Obtain the pointer to the vtbl.
        3. Dereference the pointer and obtain the beginning of the array of function pointers.
        4. Offset (in pointer scale) the beginning of the array with the index value obtained in step 1.
        5. Issue a call instruction.

    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can something other than a static function call be faster?

    Read the article

  • UART speed possibly wrong

    - by Mike
    My brain is fried, so I thought I would pass this one to the community. When sending 1 character to my embedded system, it consistently thinks it receives 2 characters. The first received character seems to map to the transmitted character (in some unknown way) and the second received character is always 0xff. Here is what I observed (I left out the second received byte, which is always ff):

        Tx char (hex)    Rx char (hex)
        31               9D
        32               9B
        33               99
        61               3D
        62               3B
        63               39
        64               37
        65               35
        41               7D
        42               7B
        43               79

    I have checked my clocks and they seem to be OK. The only difference between this non-working version and the previous version is that I am now using an RS485 chip. I have traced the signal all the way up to the MCU and it looks fine (confirmed the bit values on the rx pin).

    Read the article

  • Speed: XmlTextReader vs LINQ to XML

    - by Michel
    Hi, I'm about to read some XML (who isn't :-)). This time, however, it's a lot of data: about 30,000 records with 5 properties each, all in one file. Until now I've always read that XmlTextReader is the fastest way to read XML data, but now there is also the (nice syntax of) LINQ to XML. Does anybody know of any performance issues with LINQ to XML, or whether there aren't any? Michel

    Read the article

  • Improve speed of own debug visualizer for Delphi 2010

    - by netcodecz
    I wrote a Delphi debug visualizer for TDataSet to display the values of the current row; source + screenshot: http://delphi.netcode.cz/text/tdataset-debug-visualizer.aspx . It works, but it is very slow. I did some optimization (how the field names are fetched), but it still takes 10 seconds to show only 20 fields - very bad. The main problem seems to be the slow IOTAThread90.Evaluate used by the main code shown below; this procedure costs most of the time, with the line marked ** taking about 80% of it. FExpression is the name of the TDataSet in the code being debugged.

        procedure TDataSetViewerFrame.mFillData;
        var
          iCount: Integer;
          I: Integer;
        //  sw: TStopwatch;
          s: string;
        begin
        //  sw := TStopwatch.StartNew;
          iCount := StrToIntDef(Evaluate(FExpression + '.Fields.Count'), 0);
          for I := 0 to iCount - 1 do
          begin
            s := s + Format('%s.Fields[%d].FieldName+'',''+', [FExpression, I]);
        //    FFields.Add(Evaluate(Format('%s.Fields[%d].FieldName', [FExpression, I])));
            FValues.Add(Evaluate(Format('%s.Fields[%d].Value', [FExpression, I]))); // **
          end;
          if s <> '' then
            Delete(s, Length(s) - 4, 5);
          s := Evaluate(s);
          s := Copy(s, 2, Length(s) - 2);
          FFields.CommaText := s;
        {  sw.Stop;
           s := sw.Elapsed;
           Application.MessageBox(PChar(s), ''); }
        end;

    Now I have no idea how to improve the performance.

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs:

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    and

        long startTime = System.currentTimeMillis();
        for (long i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT-compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.
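    For what it's worth, a sketch of a fairer way to measure (my own sketch, not from the question): give the loop an observable result and an untimed warm-up call, so the JIT can neither eliminate the loop body as dead code nor compile it in the middle of the timed run:

        public class LoopBench {
            static long run(long n) {
                long sum = 0;
                for (long i = 0; i < n; i++) sum += i;  // body now has an observable effect
                return sum;
            }

            public static void main(String[] args) {
                long n = Integer.MAX_VALUE;
                run(n);  // untimed warm-up so the JIT compiles run() before measurement
                long startTime = System.currentTimeMillis();
                long sum = run(n);
                long endTime = System.currentTimeMillis();
                // printing sum keeps the whole computation observable
                System.out.println("sum=" + sum + ", elapsed: " + (endTime - startTime) + " msecs");
            }
        }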

    Read the article

  • R: ESS shell.exec speed

    - by Musa
    I am using ESS on Windows XP. I have noticed that shell.exec is much slower with ESS than with RGui (the problem occurs when I try help(ls), for example; the help is displayed much faster in RGui). I tracked it down, and it is due to shell.exec. Is there any reason for this? How can I fix it? My default browser is Firefox.

    Read the article

  • Would it be possible to speed up the Android Emulator by removing unnecessary apps?

    - by Stan
    I am using Android SDK 1.6 and developing some simple apps. It seems that every time the Android emulator boots, it loads all the default apps - messaging, music, browser, etc. I guess this makes the booting process slow (it adds about a minute of overhead every time I test my code). Would it be possible to take these apps out and just have my apps on the emulator? My purpose is to get a faster boot-up time on the emulator. Thanks.

    Read the article

  • Speed Difference between native OLE DB and ADO.NET

    - by weijiajun
    I'm looking for suggestions, as well as any benchmarks or observations people have. We are looking to rewrite our data access layer and are trying to decide between native C++ OLE DB or ADO.NET for connecting to databases. Currently we are specifically targeting Oracle, which means we would use the Oracle OLE DB provider and ODP.NET.

    Requirements:
        1. All applications will be in managed code, so using native C++ OLE DB would require C++/CLI to work (no P/Invoke - way too slow).
        2. The application must work with multiple databases in the future; currently it just targets Oracle specifically.

    Question:
        1. Would it be more performant to use ADO.NET for this, or to use native C++ OLE DB wrapped in a managed C++ interface for managed code to access?

    Any ideas, help, or places to look on the web would be greatly appreciated.

    Read the article

  • How to speed up loading the splash screen.

    - by AngryHacker
    I am optimizing the startup of a WinForms app. One issue I identified is the loading of the splash screen form: it takes about half a second to a second. I know that multi-threading is a no-no on UI pieces; however, seeing how the splash screen is a fairly autonomous piece of the application, is it possible to somehow mitigate its performance hit by throwing it on some other thread (perhaps in the way Chrome does it), so that the important pieces of the application can actually get going?

    Read the article
