Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 90/184

  • IIS slow response

    - by Martin Ševic
    I have developed an ASP.NET 4.5 application which reads sensor data from an SQLite database every 3 seconds. The application runs fine on my local development machine on IIS Express. I have created a virtual machine (4x 3.25 GHz CPU; 6 GB RAM) running Windows Server 2012 and IIS 8 in order to test the application on a real server, because we will run it on a production machine later. After installing the VC++ 2010 x64 and x86 runtimes and setting "Enable 32-bit applications" to true in the application pool, the website started to work, but there is a serious problem with response time: there is, for example, a 10-second delay before a page loads. CPU utilization is about 10% and RAM usage about 1.5 GB. I am new to configuring IIS, so I want to ask if there are any tips on how to make it faster. I am sure there is some tweak that will make it work normally. Many thanks.


  • CMD file time not always matching Windows Explorer file time

    - by skyrail
    I have a set of files whose created, modified, and last-access dates I need to set to the EXIF date-taken value after a copy between two folders (which might be FAT32 on a memory card or NTFS on a fixed or USB disk). When I copy a file, its dates switch to the current date. I then change all three dates manually, either with "change attributes" in Windows Explorer or with Far Manager on the command line. To make this faster I wrote a batch script: it reads the original file dates (with PHP and its stat function) and builds a batch file that invokes nircmd setfiletime for each file, which I then apply to the copied versions. The operation is relatively fast and reliable. Unfortunately, for a bunch of files the last-access and created times differ between cmd and Windows Explorer (a 1-hour difference). Very strangely, it happens with dates between November and February, which makes the operation unreliable. Why is this happening, and how can I fix it?
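
    For reference, here is a minimal sketch of the kind of generator script described above. It assumes nircmd's documented setfiletime syntax (nircmd setfiletime <file> <created> <modified> [accessed], with "dd-mm-yyyy hh:mm:ss" timestamps) and hypothetical folder paths. Note that formatting timestamps with the local-time date() function is exactly where a daylight-saving skew can creep in: a 1-hour difference that only shows up for November-February dates is consistent with the current DST offset being applied to timestamps recorded under the other offset.

        <?php
        // Sketch: emit one "nircmd setfiletime" line per source file so the
        // copies get the originals' dates. Paths are hypothetical.
        $src = 'C:/photos/original';
        $dst = 'C:/photos/copy';
        $fmt = 'd-m-Y H:i:s';   // nircmd expects dd-mm-yyyy hh:mm:ss

        foreach (glob("$src/*.jpg") as $file) {
            $st = stat($file);  // ctime = creation time on Windows builds of PHP
            printf("nircmd setfiletime \"%s\" \"%s\" \"%s\" \"%s\"\r\n",
                $dst . '/' . basename($file),
                date($fmt, $st['ctime']),   // created
                date($fmt, $st['mtime']),   // modified
                date($fmt, $st['atime']));  // last access
        }

    (Using local-time date() here reproduces the very skew being described; an epoch- or UTC-based API for setting the times would avoid it.)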


  • How can I use the proxy settings on Epic privacy browser to log on to Facebook?

    - by EddieN120
    I love the Epic privacy browser because it is built from the ground up to enhance privacy. It's built on Chromium, but because it has stripped out all code that tracks users across the Internet, pages load faster and everything feels snappier. With one click you can enable a proxy to hide your IP address, sort of like Chrome's "Incognito" mode on steroids. But there's a problem: if I load Epic, go to facebook.com, log in, and then click the proxy button, I can use Facebook for a while, but eventually Facebook throws up an error screen saying it thinks my account has been hacked, makes me verify my identity, forces me to change my password, and so on. I've had to change my password four times in as many days, which is very annoying. For now I enable the proxy for every site but Facebook. Question: how can I use the proxy settings in the Epic privacy browser to successfully log on to and use Facebook?


  • Windows 2003: Is PHP limiting my download speed?

    - by JohnScout
    I have a Windows 2003 server on a 100 Mbps line, and I have tried PHP scripts such as PHP Indexer, Zina (pancake.org), and others. The scripts serve downloads such as images and music files. I personally have a 20 Mbps connection. When I use a PHP script (downloads passed through PHP with headers), files download at a constant 30-40 KB/s. I have tried different web servers: Apache 1.3, Apache 2.2, Abyss Web Server, and lighttpd for Windows. The speed when going through PHP is the same constant 30-40 KB/s; however, when I try a direct link served straight by Apache, the speed is 1 MB/s. Are there any settings in the Windows 2003 registry or in PHP that I should change to make downloads faster when they go through PHP?
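
    A passthrough download script of the kind described usually looks something like the sketch below (path and filename are hypothetical). Throughput plateaus in the tens of KB/s are commonly caused by PHP output buffering or very small read chunks, so the sketch disables buffering and streams in larger blocks; whether that is the actual bottleneck here is an assumption.

        <?php
        // Sketch: stream a file through PHP in 512 KB chunks with output
        // buffering disabled. Path is hypothetical.
        $path = 'D:/media/song.mp3';

        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');

        while (ob_get_level() > 0) {
            ob_end_flush();              // drop any output buffers that throttle writes
        }

        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 512 * 1024); // bigger chunks mean fewer flushes per file
            flush();
        }
        fclose($fp);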


  • Changing languages rapidly causes Linux to crash.

    - by eZanmoto
    So I'm running Xmonad on my college computer (which runs Kubuntu), and whenever I leave my desk, instead of using xscreensaver, which is incredibly buggy and slow, I just switch to another workspace, open a terminal, and change the keyboard layout to one that uses symbols instead of Latin letters, then change back using an aliased command. For example, my .profile has the lines

        alias qwer="setxkbmap jp"
        alias *******="setxkbmap ie"

    where ******* is my password, typed in Japanese characters. Changing layouts seems to be much faster than running xscreensaver. The problem: rapidly changing layouts seems to crash Linux; it just won't accept input (and it's not because the layout hasn't changed back; nothing is output to the console). I can't use Ctrl+Alt+F1..F7, I can't "raise the elephants", anything; it just won't work. I'm just wondering: is this a known issue, and if so, is there something I can do about it?


  • Cannot join computer to the domain... greyed out

    - by Logman
    I have an old Windows XP Pro SP3 computer I need to join to the domain. Simple, right? Not really. When I go to Control Panel - System - Computer Name and click Change ("rename this computer"), everything is greyed out: I cannot switch it from a workgroup to a domain. I am logged on locally as an admin (both the built-in account and one I created). I have checked local policy (gpedit.msc) on the computer, but it feels like looking for a needle in a haystack. I could probably reload an image faster than I can fix this, but I am curious, so I am posting here to see if anyone knows of it or of a fix. I tried resetting the policy to defaults, but no luck:

        secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose

    EDIT:


  • Is there a way to get Drobo-like functionality out of ZFS (or some other free FS)? [closed]

    - by Steve Rowe
    I really like the concept of the Drobo; I just don't like the speed. I want the redundancy and easy upgradeability of the Drobo, but faster, and I would love to be able to build something on my own. ZFS seems like a good place to start, but it gives you either upgradeability or redundancy (RAIDZ), not both. To clarify: I want an array of disks that can be expanded just by adding a drive, with redundancy built in. I found instructions for making ZFS act like a Drobo, but they are quite complicated, and upgrading is a lot of work. Has anyone automated something like what is described there? Is there a different file system I should be looking at?
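
    For context, a hedged sketch (pool and device names are hypothetical) of the kind of expansion ZFS of this era does support: you cannot grow an existing RAIDZ vdev one disk at a time, but you can add a whole new RAIDZ vdev to the pool, which grows capacity while keeping redundancy:

        # create a pool from one 3-disk RAIDZ vdev
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
        # later: expand by adding another whole RAIDZ vdev (not a single disk)
        zpool add tank raidz /dev/sde /dev/sdf /dev/sdg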


  • What is the best time to schedule regular updates on an in-house production server?

    - by akira
    Given an in-house server running in production, I would like to keep the impact on users as low as possible when deploying regular updates (to the server itself, not the user machines, though that would be a pretty similar problem). The obvious answer to my question is "at night, when the users are at home". But "night" is a long period of time. Should one start early in the evening, to perhaps catch problems with the update early on and be ready to roll back? Or is it better to start early in the morning and use the first users as "guinea pigs" to trigger problems faster? Or in the middle of the night, when the concentration of whoever is overseeing the update is pretty low, but it is guaranteed that no late-working users have open file handles? Are there any research papers on the topic?


  • Deploying virtual machines - Windows Guest/Linux Host or vice versa?

    - by samoz
    I'm looking to deploy several virtual machines for users. They need access to both Windows and Linux, and they also need to use the computer's graphics card (for Photoshop, modeling, etc.) under Windows. My question is: will an Ubuntu host/Windows guest or a Windows host/Ubuntu guest be faster? I'm somewhat worried about Windows accumulating a cluttered registry and slowing down; on the other hand, a Windows host would have direct access to the hardware (unless I'm just unaware of how to grant hardware access to a guest). Does the choice of software (VMware or VirtualBox) affect the decision?


  • C/PHP: How do I convert the following PHP JSON API script into a C plugin for Apache?

    - by TeddyB
    I have a JSON API through which I need to provide super-fast access to my data. The JSON API makes a simple query against the database based on the GET parameters provided. I've already optimized my database, so please don't recommend that as an answer. I'm using PHP-APC, which helps PHP by caching the bytecode, BUT for a JSON API that is being called literally dozens of times per second (as indicated by my logs), I need to reduce the massive amount of RAM PHP is consuming, as well as rewrite my JSON API in a language that executes much faster than PHP. My code is below; as you can see, it is fairly straightforward.

        <?php
        define('ALLOWED_HTTP_REFERER', 'example.com');

        if ( stristr($_SERVER['HTTP_REFERER'], ALLOWED_HTTP_REFERER) ) {
            try {
                // DB, DB_HOST, DB_NAME, DB_USERNAME and DB_PASSWORD are defined elsewhere
                $conn_str = DB . ':host=' . DB_HOST . ';dbname=' . DB_NAME;
                $dbh = new PDO($conn_str, DB_USERNAME, DB_PASSWORD);
                $params = array();
                $sql = 'SELECT homes.home_id, address, city, state, zip
                        FROM homes
                        WHERE homes.display_status = true
                          AND homes.geolat BETWEEN :geolatLowBound AND :geolatHighBound
                          AND homes.geolng BETWEEN :geolngLowBound AND :geolngHighBound';
                $params[':geolatLowBound']  = $_GET['geolatLowBound'];
                $params[':geolatHighBound'] = $_GET['geolatHighBound'];
                $params[':geolngLowBound']  = $_GET['geolngLowBound'];
                $params[':geolngHighBound'] = $_GET['geolngHighBound'];

                if ( isset($_GET['min_price']) && isset($_GET['max_price']) ) {
                    $sql .= ' AND homes.price BETWEEN :min_price AND :max_price ';
                    $params[':min_price'] = $_GET['min_price'];
                    $params[':max_price'] = $_GET['max_price'];
                }
                if ( isset($_GET['min_beds']) && isset($_GET['max_beds']) ) {
                    $sql .= ' AND homes.num_of_beds BETWEEN :min_beds AND :max_beds ';
                    $params['min_beds'] = $_GET['min_beds'];
                    $params['max_beds'] = $_GET['max_beds'];
                }
                if ( isset($_GET['min_sqft']) && isset($_GET['max_sqft']) ) {
                    $sql .= ' AND homes.sqft BETWEEN :min_sqft AND :max_sqft ';
                    $params['min_sqft'] = $_GET['min_sqft'];
                    $params['max_sqft'] = $_GET['max_sqft'];
                }

                $stmt = $dbh->prepare($sql);
                $stmt->execute($params);
                $result_set = $stmt->fetchAll(PDO::FETCH_ASSOC);

                /* output a JSON representation of the home listing data retrieved */
                ob_start("ob_gzhandler"); // compress the output
                header('Content-type: text/javascript');
                print "{'homes' : ";
                array_walk_recursive($result_set, "cleanOutputFromXSS");
                print json_encode($result_set);
                print '}';
                $dbh = null;
            } catch (PDOException $e) {
                die('Unable to retrieve home listing information');
            }
        }

        function cleanOutputFromXSS(&$value) {
            $value = htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
        }
        ?>

    How would I begin converting this PHP code to C, since C is both better at memory management (you do it yourself) and much, much faster to execute?
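
    For what it's worth, the usual starting point is the stock Apache 2.x module skeleton below: a handler hook that claims requests for a named handler, queries the database, and writes JSON. This is a minimal sketch, not the asker's API; the module and handler names are made up, and the database and parameter-parsing logic (e.g. via libapr and a MySQL C client) is elided.

        #include "httpd.h"
        #include "http_config.h"
        #include "http_protocol.h"

        /* Handler: runs for requests whose handler is set to "homes-json"
         * (e.g. via SetHandler in httpd.conf). */
        static int homes_json_handler(request_rec *r)
        {
            if (strcmp(r->handler, "homes-json") != 0)
                return DECLINED;                  /* not ours; let others try */

            /* r->args holds the raw query string; parse the GET params from it,
             * run the SQL with a C client library, and build the JSON here. */
            ap_set_content_type(r, "text/javascript");
            ap_rputs("{\"homes\": []}", r);       /* placeholder payload */
            return OK;
        }

        static void register_hooks(apr_pool_t *pool)
        {
            ap_hook_handler(homes_json_handler, NULL, NULL, APR_HOOK_MIDDLE);
        }

        module AP_MODULE_DECLARE_DATA homes_json_module = {
            STANDARD20_MODULE_STUFF,
            NULL, NULL, NULL, NULL, NULL,         /* no per-dir/server config */
            register_hooks
        };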


  • Why is one loop performing better than the other, both memory-wise and performance-wise?

    - by Mohit
    I have the following two loops in C#, and I am running them over a collection of 10,000 records downloaded with paging using "yield return".

    First:

        foreach (var k in collection)
        {
            repo.Save(k);
        }

    Second:

        var collectionEnum = collection.GetEnumerator();
        while (collectionEnum.MoveNext())
        {
            var k = collectionEnum.Current;
            repo.Save(k);
            k = null;
        }

    It seems that the second loop consumes less memory and is faster than the first. The memory difference, I understand, may be because k is set to null (even though I am not sure). But how come it is faster than foreach? Following is the actual code:

        [Test]
        public void BechmarkForEach_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            Profile("For Each Profiling", 1, () =>
            {
                var localenumertaor = contactService.Download();
                foreach (var item in localenumertaor)
                {
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                }
                contactRepo.DeleteAll();
            });
        }

        [Test]
        public void BechmarkWhile_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            var itemsCollection = contactService.Download().GetEnumerator();
            Profile("While Profiling", 1, () =>
            {
                while (itemsCollection.MoveNext())
                {
                    var item = itemsCollection.Current;
                    // if first-time sync then ignore and overwrite the state flag
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                    item = null;
                }
                contactRepo.DeleteAll();
            });
        }

        static void Profile(string description, int iterations, Action func)
        {
            // clean up
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            // warm up
            func();

            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                func();
            }
            watch.Stop();
            Console.Write(description);
            Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
        }

    I am using micro-benchmarking, from a Stack Overflow question itself (benchmarking-small-code). The times taken are:

        For Each Profiling  Time Elapsed 5249 ms
        While Profiling     Time Elapsed 116 ms


  • bottle.py on EC2 micro instance causes a two-order-of-magnitude slowdown

    - by user61633
    Cross-posted from Stack Overflow: I wrote a little toy script to solve this type of game and put it on my new micro EC2 instance. It works perfectly, but while a local version takes around 0.5 seconds to run, and both the local and the bottle.py versions take under 0.5 seconds on my home computer, running the bottle.py version on the EC2 instance takes over 2 minutes. Python has the CPU pegged at 99% the entire time; memory usage is a consistent 7.4%, with no swapping. The only guess I have is initialization time for bottle.py on EC2, but if that were it, why would it be ~200x faster on my own computer with bottle.py?


  • SQL Server could not connect: lacked sufficient buffer space...

    - by chumad
    I recently moved my app to a new server; the app is written in C# against the .NET 3.5 Framework. The hardware is faster but the OS is the same (Windows Server 2003), and no new software is running. On the prior hardware the app would run for months with no problems. Now, on this new install, I get the following error after about 3 days, and the only way to fix it is to reboot:

        A network-related or instance-specific error occurred while establishing a
        connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured
        to allow remote connections. (provider: TCP Provider, error: 0 - An
        operation on a socket could not be performed because the system lacked
        sufficient buffer space or because a queue was full.)

    I have yet to find a service I can even shut down to make it work. Has anyone seen this before, and does anyone know a solution?


  • Configure redis to not have everything in memory?

    - by acidzombie24
    I like Redis because it lets me do operations on data structures. I wanted to see what would happen if I put more data into Redis than I have RAM, so I wrote a loop that repeatedly inserted 30 KB values, and I set "maxmemory 100mb". I figured usage would stay at 100 MB; instead it kept growing, past 1 GB, then past 2 GB, until it suddenly crashed because I was running the 32-bit build. Now, I don't understand what the point of maxmemory is. I am using the Windows version, so maybe it is ignored there. Does Redis have to have everything in memory? If I have a site (on Linux) with a 10 GB database and 512 MB of RAM on the machine, will Redis work? I don't need it to be amazingly fast; I just prefer modifying data in it to using SQL (although I hope it is still faster than MySQL).
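
    For what it's worth, a minimal redis.conf sketch of how maxmemory is normally paired with an eviction policy on mainline (Linux) Redis; whether the unofficial Windows port honors these directives is an open question:

        # cap the dataset and tell Redis what to do when the cap is reached
        maxmemory 100mb
        # with the noeviction policy, writes error out at the limit instead of
        # growing past it; an LRU policy evicts keys to stay under the cap
        maxmemory-policy allkeys-lru

    For background: mainline Redis keeps the whole dataset in RAM (its old virtual-memory feature was deprecated), so a 10 GB dataset on a 512 MB box is not a fit for stock Redis.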


  • Clone drive in Windows

    - by ERJ
    I have two 2 TB external USB hard drives; call them HD1 and HD2. HD1 is USB 2, HD2 is USB 3. Each drive contains exactly one NTFS partition. I want to clone HD1 to HD2 because the newer drive is much, much faster. What's the best way to do this? I don't want to do a copy-and-paste; I want to clone the whole partition. The new drive is actually a few bytes larger, so this should be possible. I don't have a spare drive that can hold an image, so the clone would have to go directly to the other disk (not to a file). How can I accomplish this on Windows 7? I know about Clonezilla, but I would prefer not to have to boot from a CD or anything, as I don't have the capability to do that right now; I want to know if there's a way to do this from within a running Windows.


  • Need help making a bootable portable USB hard drive [the utility I have will only create bootable CDs or USB flash drives]

    - by Sootah
    I've got a copy of Spotmau's BootSuite 2012, which is an utterly fantastic tool; it has completely replaced Bart PE for me, and I relied on Bart PE for years. The issue I'm having is that the BootSuite installer utility will only create bootable USB flash drives or bootable CDs. My USB hard drive is detected as a hard drive instead of as a USB device, so I cannot use the included app to install to it. Is there a way of either copying the files from a bootable flash drive to a USB HDD and making that boot, or of taking an ISO image of the bootable CD and using it to make the portable hard drive bootable? The flash drives I've made are great, since I always have 16 GB dangling from my keychain :), but my USB hard drive is far faster than any flash drive I have, so I'd like to be able to use it when I'm working out of my office or happen to have it with me.


  • Can Visual Studio track the "size" or "severity" of my changes in TFS?

    - by anaximander
    I'm working on a sizeable project using VS2012 and TFS (also 2012, I think; I didn't set up the server). A lot of my recent tasks have required making very small changes to a lot of files, so I'm quite used to seeing a lot of items in my Pending Changes list. Is there a way to have VS and/or TFS track how much has changed and let me know when the differences are becoming significant? Similarly, is there a way to quickly highlight where the major changes are when you get the latest version from TFS? It would really help with tracking down where certain changes were made without having to compare every file. The difference-highlighting tool is nice, but when you have to run it on a dozen files to find the block you're looking for, you start to wonder if there's a faster way...


  • MPICH2 vs Kerrighed

    - by user40135
    Hi all, I am taking my first steps in clustering. I installed MPICH2 on my Ubuntu machine at home, and I have a silly question about it. From what I am reading, it seems that it provides the capability of sending processes to other PCs; I went with this library just because I could set it up very quickly and easily. Compared to MPICH2, do you know what the advantage is of a different clustering system like Kerrighed? It seems that it also provides this capability, but the kernel must be rebuilt, so I suppose it is going to be faster. What other advantages are notable in a clustering system like this? Thanks.


  • Why do Apache access logs show a time-to-serve of zero - timer resolution issue?

    - by Rob
    When going through Apache 2.2 access logs, logging with the %D directive (the time taken to serve the request, in microseconds), it's very common for a 200 response to have a given number of bytes but a "time to serve" of zero. For example, a given URL might be requested 10 times in a single day, a 200 response is sent for all of them, and all return, say, 1000 bytes; however, 7 of them have a "time to serve" of zero, while the other 3 have a time to serve of 1 second. Is this simply because the request was served faster than the resolution of the timer Apache uses?
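
    As a reference point, a sketch of a LogFormat line (the nickname "timed" is made up) that records both %T and %D side by side. %T logs whole seconds, so any sub-second request logs as 0, while %D should give microsecond granularity in Apache 2.x; a log that only ever shows 0 or 1 second is what %T would produce, so it is worth double-checking which directive the format actually uses:

        # %T = seconds to serve (sub-second requests round to 0), %D = microseconds
        LogFormat "%h %l %u %t \"%r\" %>s %b %T/%D" timed
        CustomLog logs/access_log timed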


  • Reading a file from an alternate location

    - by Highstaker
    I have a certain file (data.abc) located in, say, my home folder, and I make a copy of it in another location (for example, /mnt/ramtemp/). Whenever the file in my home folder is accessed by any process, I want it to be read not from the home folder but from /mnt/ramtemp/. As you might have guessed from the latter path, that is where I mount the ramfs. So, basically, I want processes to access not the file on my HDD (which is slower) but its copy on the ramfs (which is far faster). At the same time, I want the file data.abc to remain in my home folder under that name; I don't want to rename or delete it. Is there any way I can get the system to redirect processes to the copy in the alternative location whenever they try to read the file in my home folder?
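
    On Linux this is commonly done with a bind mount, which transparently overlays one path with another for every process; a hedged sketch using the paths from the question (whether ramfs suits the workload is a separate matter):

        # copy the file into the ramfs, then shadow the home-folder path with it
        cp ~/data.abc /mnt/ramtemp/data.abc
        sudo mount --bind /mnt/ramtemp/data.abc ~/data.abc
        # processes opening ~/data.abc now read the ramfs copy; undo with:
        sudo umount ~/data.abc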


  • ionice idle is ignored

    - by Ferran Basora
    I have been testing the ionice command for a while, and the idle class (-c 3) seems to be ignored in most cases. My test is to run both commands at the same time:

        du <big folder>
        ionice -c 3 du <another big folder>

    If I check both processes in iotop, I see no difference in the percentage of I/O utilization between them. For context on the I/O scheduler: I am using a 3.5.0 Linux kernel with CFQ. I started doing this test because I experience system lag each time the daily cron job updatedb.mlocate is executed on my Ubuntu 12.10 machine. If you check the /etc/cron.daily/mlocate file, you see that the command is executed like this:

        /usr/bin/ionice -c3 /usr/bin/updatedb.mlocate

    Also, the funny thing is that whenever my system starts using swap memory for some reason, the updatedb.mlocate I/O process gets scheduled ahead of the kswapd0 process, and then my system gets stuck. Any suggestions?

    References:
    http://ubuntuforums.org/showthread.php?t=1243951&page=2
    https://bugs.launchpad.net/ubuntu/+source/findutils/+bug/332790


  • Netgear GS108Tv2 RSTP configuration

    - by jhowland
    I have a large set of GS108Tv2 units. My goal is to set up a network comprising several loops for redundancy/fault tolerance, and I have a minimal three-switch loop configured, with RSTP enabled on two ports of each switch. Bridge max age is set to 6 and bridge forward delay to 4, the minimum values allowed; hello time is fixed at 2 seconds. The switches respond to a cable being removed from a socket, but it takes too long: I cannot get a switch to respond to a loss of connection on one of the redundant ports in less than 20 seconds. Is there any way to configure these switches to respond faster than that? Twenty seconds is unacceptable for my application. Thanks in advance for any help.


  • Dell Inspiron 530 - SSD Worth it?

    - by DrFredEdison
    I'm going to upgrade my Dell Inspiron 530 (2.0 GHz Intel dual-core CPU, 3 GB RAM) to Windows 7 soon, and rather than back up and reformat my existing drive, I'm planning to get a second drive to replace my current primary and demote the existing one to secondary. This seems like an excellent time to get a solid-state drive, if it's going to be worth it. As far as I can tell, this machine has a SATA-I controller, and I'm unsure whether I'll see a noticeable performance increase with an SSD without going to SATA-II. So, a three-part question: Will spending the money on an SSD be worth it if I hook it to a SATA-I controller? Is it reasonable to upgrade the controller in this machine to SATA-II? Given that this PC is kind of old to begin with, am I better off, performance-wise, just sticking with a faster HDD?
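
    As a rough back-of-the-envelope check (nominal spec figures, not measurements of this machine): SATA-I runs at a 1.5 Gb/s line rate, and after 8b/10b encoding overhead that leaves about 1.2 Gb/s, or roughly 150 MB/s of usable bandwidth. A typical 7200 rpm desktop drive sustains around 80-100 MB/s sequentially, so even a SATA-I link leaves headroom over a hard disk, and the random-access latency advantage of an SSD (no seeks) does not depend on the link generation at all.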


  • How do I disable Windows Search Indexing Service while on battery on Windows 7?

    - by Slaggg
    Since upgrading from Windows XP to Windows 7, I've found my laptop's battery life is reduced. One issue is that the Windows Search indexing service runs constantly, even on battery power. Resource Monitor shows it is often the top consumer of CPU time and the top producer of disk activity, and I imagine this is one of the primary reasons the battery drains faster. I cannot find any option that tells the Windows Search indexing service not to index while on battery power. The only solution seems to be to stop the service and also disable it (if you just stop it, it restarts itself after a while). Doing this gives me considerably more battery life, but it's a pain because I have to re-enable the service when plugged in again. Is there a better solution?
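
    For reference, a hedged sketch of scripting that stop/disable dance with the built-in sc tool (the Windows Search service is named WSearch), e.g. as two small batch files run when unplugging and plugging back in:

        rem on-battery.cmd: stop the indexer and keep it from restarting
        sc stop WSearch
        sc config WSearch start= disabled

        rem on-ac.cmd: re-enable and restart it
        sc config WSearch start= delayed-auto
        sc start WSearch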


  • How to work with bookmarks in Word without naming them?

    - by deepc
    I am working in a large Word 2007 document and need bookmarks to remember editing positions. I know I can manage bookmarks with Shift+Ctrl+F5, but that's cumbersome, because I am used to doing this a lot faster in the Delphi editor: there I create a bookmark with Ctrl+Shift+0..9 and jump to it with Ctrl+0..9, which gives me 10 quick bookmarks. I do not have to name them, and I do not have to pick them from a dialog (there is not even a dialog prompting me for a selection). Is something similar possible in Word, or has anybody made a macro for that purpose? Thanks.
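
    For what it's worth, a pair of short VBA macros (names hypothetical) can emulate Delphi-style quick bookmarks; bind them to, say, Ctrl+Shift+1 and Ctrl+1 via Word Options > Customize > Keyboard shortcuts, and repeat for the other digits:

        Sub SetQuickBookmark1()
            ' overwrite (or create) bookmark "Quick1" at the cursor
            ActiveDocument.Bookmarks.Add Name:="Quick1", Range:=Selection.Range
        End Sub

        Sub GoToQuickBookmark1()
            ' jump back to bookmark "Quick1" if it exists
            If ActiveDocument.Bookmarks.Exists("Quick1") Then
                Selection.GoTo What:=wdGoToBookmark, Name:="Quick1"
            End If
        End Sub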

