Search Results

Search found 5161 results on 207 pages for 'jonathan low'.

  • Low-speed web application: server problem or application?

    - by Ashian
    Hi, I have a web application written in ASP.NET (C#) against SQL Server 2005, hosted on two dedicated servers (one for IIS, one for SQL Server). For some months now, on certain days of the week we get many reports about speed problems. Some other applications on these servers use the same database; when the slowdown hits, every application on these servers is affected, but applications on other servers in the same data center work correctly. RAM and CPU usage are fine. How can I check whether the problem is related to the internet connection or to my application design? Which parameters should be checked?

    Some other information: in the applications, users can upload several files to the server, each up to 3 MB. A standard SQL web admin application running on the same server shows the same problem, even though it works perfectly on other servers. Thanks
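
    One quick way to separate the network from the application is to compare fetch times for a static file (which bypasses the application code and the database) against a dynamic page while the slowdown is happening. A minimal sketch in Python; the two URLs are placeholders, not real paths:

        import time
        import urllib.request

        # Hypothetical URLs -- substitute real ones on the affected site.
        STATIC_URL = "http://example.com/images/logo.png"  # no app code, no DB
        DYNAMIC_URL = "http://example.com/Default.aspx"    # ASP.NET + SQL Server

        def average_fetch_seconds(url, attempts=5):
            """Average wall-clock time to download a URL over several attempts."""
            total = 0.0
            for _ in range(attempts):
                start = time.perf_counter()
                with urllib.request.urlopen(url) as response:
                    response.read()
                total += time.perf_counter() - start
            return total / attempts

        if __name__ == "__main__":
            print("static :", average_fetch_seconds(STATIC_URL))
            print("dynamic:", average_fetch_seconds(DYNAMIC_URL))

    If the static file is slow too, suspect the connection or the server itself; if only dynamic pages crawl, look at the application and the database.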

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch, and we're using VLC to stream the webcam over HTTP with the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is maxed out. I'm guessing the transcoding is causing much of the delay. Can anyone suggest changes to these command lines that reduce the transcoding load, send the webcam over a different protocol, or do anything else that cuts the camera delay? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.
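
    Much of that delay typically comes from the encoder's buffering rather than the wire. A hedged sketch, assuming a VLC build with the x264 encoder module: launch the sender with x264's low-latency options (wrapped in Python's subprocess here only to keep the pieces readable):

        import subprocess

        # Assumed flags: preset=ultrafast and tune=zerolatency disable x264's
        # frame lookahead and B-frames, trading compression efficiency for
        # encode speed -- usually fine on a gigabit LAN.
        sender = [
            "vlc", "dshow://",
            ":dshow-caching=0",
            ":dshow-size=640x480",
            ":sout=#transcode{vcodec=h264,venc=x264{preset=ultrafast,tune=zerolatency},vb=0,scale=0}"
            ":http{mux=ffmpeg{mux=flv},dst=:8080/}",
            ":sout-keep",
        ]
        subprocess.run(sender)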

  • Viewsonic VG2427wm detected max resolution too low

    - by Hassan
    So I have a ViewSonic VG2427wm, which supports a max resolution of 1920x1080, and I have had it working at that resolution from both Windows and Linux PCs. Now I need to connect it to my Dell XPS 17 (L702x), which has only Mini DisplayPort and HDMI, so I'm trying to connect the monitor via an HDMI-to-DVI adapter (doesn't work at all) and via a Mini-DP-to-DVI adapter. The latter works, but limits my max resolution to 1680x1050. Using the same two adapters to connect my Dell monitor works fine, so I don't suspect either adapter of being faulty. The driver is the latest ViewSonic driver (although it was last updated three years ago). Any ideas how I can force it to display its correct native resolution?

  • Is there a way to sharpen low-quality images?

    - by Saif Bechan
    I have a lot of images for a project that are not really sharp and crisp; I think they are old images that were resized a lot. I am working on a website for a client, and she no longer has the high-quality pictures, so the ones I have to work with are somewhat pixelated. I'll show you an example of one I think could be better: as you can see, the area around the text 'Azule' is not very sharp. Is there a way to sharpen this? I have another one that I think is beyond hope. Is there any hope of making the picture above sharp again? I highly doubt it.
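
    For mildly soft images like this, an unsharp mask is the usual first attempt; it cannot recreate lost detail, but it can restore edge contrast. A minimal sketch using Python's Pillow library (the filename is a placeholder, and the three parameters are starting points to tune by eye):

        from PIL import Image, ImageFilter  # third-party: pip install Pillow

        # Unsharp masking: boost local edge contrast without resizing.
        img = Image.open("azule.jpg").convert("RGB")
        sharpened = img.filter(
            ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
        )
        sharpened.save("azule_sharp.jpg", quality=95)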

  • Stop Windows 8 from converting wallpapers to low quality JPGs

    - by Peter W.
    This has been bugging me on Windows 7 as well, and much to my chagrin Microsoft didn't bother addressing the issue in Windows 8. Take this image for example: http://mantia.me/goodies/desktops/supermariobros_wide.jpg It's made for Retina MacBooks, so naturally it's downscaled when set as the wallpaper on my 1680x1050 screen. Windows 8 manages to make it look like shit; see this screenshot: http://d.pr/i/WGFy Is there any way of making Windows always use the original image, not some JPEGized-to-hell version of it?
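
    A commonly cited workaround (treat it as an assumption; it is not officially documented) is the JPEGImportQuality value under HKEY_CURRENT_USER\Control Panel\Desktop, which controls the quality Windows uses when it re-encodes a wallpaper. A sketch that sets it to 100 from Python:

        import winreg

        # 100 = re-encode wallpapers at maximum JPEG quality. Log off and
        # back on, then re-apply the wallpaper, for the change to take effect.
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop",
                            0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "JPEGImportQuality", 0, winreg.REG_DWORD, 100)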

  • Active Directory management with low user rights

    - by DemonWareXT
    Our problem: the client, a normal user, has to be able to reset multiple passwords at once, around 30 in one go. This calls for PowerShell or something along those lines, but for AD and PowerShell one needs to be a domain administrator. My solution would be a service that runs on the AD server and accepts connections from a client program; the service would then make the AD changes. So far so good; I would just like to hear some other thoughts on this problem, because I surely can't be the only one who has it.
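
    Worth noting: AD lets you delegate just the "Reset password" right on an OU to an ordinary account, so the service would not need domain admin rights. A minimal sketch of the service-side helper using Python's third-party ldap3 library; the server name, service account, and DNs are hypothetical:

        import ssl

        from ldap3 import Connection, Server, Tls  # pip install ldap3

        # AD only accepts password modifications over an encrypted channel.
        tls = Tls(validate=ssl.CERT_REQUIRED)
        server = Server("dc01.example.local", use_ssl=True, tls=tls)
        conn = Connection(server, user="EXAMPLE\\svc-pwreset",
                          password="...", auto_bind=True)

        def reset_passwords(user_dns, new_password):
            """Reset a batch of accounts; the bind account needs only the
            delegated Reset-password right on the target OU."""
            for dn in user_dns:
                ok = conn.extend.microsoft.modify_password(dn, new_password)
                print(dn, "ok" if ok else conn.result)

        reset_passwords(["CN=Jane Doe,OU=Students,DC=example,DC=local"],
                        "N3w-P4ssw0rd!")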

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting good speeds, but in the last week the speed has dramatically decreased, from 15 Mbps+ to around 0.2 Mbps, at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end: only one PC is connected to the router (via ethernet cable), no background programs are using the network, and so on. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of Super User on possible causes and other troubleshooting I can do. Here are the juicy stats :) My router (Netgear DGN1000) reports:

                          Downstream   Upstream
        Connection Speed  17602 kbps   1062 kbps
        Line Attenuation  17.9 dB      8.6 dB
        Noise Margin      6.0 dB       6.1 dB

    I used RouterStats, and those figures seem to stay fairly consistent over time. I ran the BT speed test and it reported:

        - a download speed of 164 kbps, out of a max achievable of 21000 kbps
        - an upload speed of 859 kbps, out of 1048 kbps
        - a DSL connection rate of 17719 kbps down and 1048 kbps up
        - an IP profile of 15000 kbps

    Is there any more troubleshooting I can do? Does this look like a problem with my equipment/wiring or with BT's line? Any advice would be great :)

  • How computers display raw, low-level text and graphics

    - by panic
    My ever-growing interest in computers is making me ask deeper questions that we don't seem to have to ask anymore. At boot, as far as I understand it, our computers are in text mode, in which a character can be displayed using software interrupt 0x10 with AH=0x0E. We've all seen the famous boot font that always looks the same, regardless of which computer is booting. So how on earth do computers output graphics at the lowest level, say, below the OS? And surely graphics aren't output one pixel at a time using software interrupts, as that sounds very slow? Is there a standard that defines basic output of vertices, polygons, fonts, etc. (below OpenGL, for example, which OpenGL might use)? What makes me ask is that OSes can often be fine without official drivers installed; how do they do that? Apologies if my assumptions are incorrect. I would be very grateful for elaboration on these topics!

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a funny little experiment in the game/sim Train Simulator 2013. I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic; it takes into account things like speed/acceleration, G-forces, comfort levels, possible wheel slip and much more, and most of those for each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so. I have a Logitech G15 keyboard with an LCD that lets me monitor CPU/RAM and video card load. The strange thing is, all CPU cores were busy, but the total load was only about 60% at most, at all times. The video card was only at 30% load (possibly an important note: its memory was full, which is not unusual for this game). The RAM had plenty of room and didn't grow or shrink much. I just have the feeling the game would run smoother if it used more of my hardware's power. Why doesn't it?

    I saw the same thing in another game, The Elder Scrolls III: Morrowind, when using more than 100 mods (all of which use scripting), a few high-res texture mods, and a full-on graphics improvement program. The engine is very old (2003), so I thought that might be the cause (not being optimised for multithreading). I had thought of possible causes, like:

    - The operating system doesn't let the games use all the resources.
    - The games don't make use of multithreading appropriately.

    To eliminate the former, I ran a CPU stress tool, and it got 100% of the CPU juice, so the OS is not the problem. I did give the game's thread "higher" priority, though.

    My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerates be limited by their own engines not being able to cope? What is the real reason, and more importantly, can I do anything about it? And in any case, could something actually be wrong with my hardware? It's all reasonably new, a couple of months old, and I (almost) never experience any other trouble; modern and much more demanding games work absolutely fine.

    Specs:
    - CPU: AMD Phenom II X4 965 @ 3.4 GHz
    - RAM: 8 GB of DDR3 RAM
    - Video: MSI GTX 560 (nVidia chip) with 1 GB of GDDR5 memory
    - OS: Windows 7 Ultimate 64-bit
    Nothing is overclocked.

  • Starting VMs from an executable with as low overhead as possible

    - by Robert Koritnik
    Is there a way to create a virtual machine and start it by running an executable file, ideally starting as quickly as possible? Strange situation? Not at all; read on. Real-life scenario: since we can't have a domain controller on a non-server OS, it would be nice to have the domain controller in as thin a machine as possible (possibly Samba or similar, because we'd like it to start up as quickly as possible, in a matter of a few seconds), packed into a single executable. We could then configure our non-server OS to run the executable when it starts, before the user logs in. This would make it possible to log in to a domain.
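
    One possible shape for this, assuming VirtualBox: a tiny launcher that boots a headless VM, which a tool such as PyInstaller could then freeze into a single executable (the VM name "dc" is hypothetical):

        import subprocess

        # Boot the VM without a GUI; with a minimal Samba/Linux guest this
        # can be ready to serve logons within seconds of the host starting.
        subprocess.run(["VBoxManage", "startvm", "dc", "--type", "headless"],
                       check=True)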

  • VPS showing low disk space even though there is nothing major on it

    - by SheoNarayyan
    Hello experts, on my VPS I was trying to see the used disk space. When I open My Computer, it shows 17.9 GB free out of 39.8 GB, which means 21.9 GB is used. However, when I select all files and folders on C: and check the total size, it counts only approximately 11 GB. The difference is around 10 GB. Where is this 10 GB going if I have not stored anything else here? I asked my VPS provider the above question and he responded: "Check hidden files/system files/etc. This is a default Windows OS and its utilization, and not specific to the setup. If you want specifics of usage, you can go ahead and get in touch with the Microsoft support team and they'll provide you with the exact specification of the same." I am sure that Windows must not be taking up 10 GB of space for hidden files and folders. My VPS has Windows Server 2008 R2 installed. Can anyone help me figure out who is right?
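
    A quick way to see the gap is to compare what the filesystem reports as used against what a recursive walk can actually see; the difference is usually pagefile.sys, hiberfil.sys, and Volume Shadow Copy storage, which a select-all in Explorer misses. A hedged diagnostic sketch in Python:

        import os
        import shutil

        total, used, free = shutil.disk_usage("C:\\")

        seen = 0
        for root, dirs, files in os.walk("C:\\"):
            for name in files:
                try:
                    seen += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # locked or inaccessible system files

        print(f"used per filesystem: {used / 2**30:.1f} GiB")
        print(f"visible via walk   : {seen / 2**30:.1f} GiB")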

  • Internet compression proxy for low-speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I get around 1 Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.), and it can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (a 200 ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26 Mb down, 10 Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had, but the proxy server is a quad-core Xeon and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just point my browser at a proxy server rather than install software on my client machine. EDIT: I thought about my question a little more and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
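
    To make the idea concrete, here is a toy sketch of a compression relay (not production code; the port and origin are hypothetical, and a real deployment would more likely use an SSH tunnel with compression or a proper proxy). The relay runs on the fast work machine and zlib-compresses everything it forwards; a matching stub on the home machine would decompress:

        import socket
        import zlib

        LISTEN_PORT = 8888                     # hypothetical relay port
        UPSTREAM = ("origin.example.com", 80)  # hypothetical origin server

        def serve_one():
            """Accept one client, fetch from upstream, forward compressed."""
            with socket.create_server(("", LISTEN_PORT)) as srv:
                conn, _ = srv.accept()
                request = conn.recv(65536)
                with socket.create_connection(UPSTREAM) as up:
                    up.sendall(request)
                    comp = zlib.compressobj(level=6)
                    while True:
                        chunk = up.recv(65536)
                        if not chunk:
                            break
                        conn.sendall(comp.compress(chunk))
                    conn.sendall(comp.flush())
                conn.close()

        serve_one()

    One caveat: already-compressed payloads (videos, archives, most images) will not shrink further, which limits how much any scheme like this can help for the file types mentioned above.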

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines burst rate as "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which raises some questions: if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in Device Manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website, "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

  • Nginx and low-speed connections: request terminates after 253 seconds

    - by meze
    I'm trying to make nginx handle static files. Everything works fine except that when I throttle my connection speed to 8 kbit/s, loading a file just stops after 253-255 seconds (4.2 minutes according to Chrome). There is no error in the log, and the status code is 200, but the response is received only partially. If I disable nginx and have Apache send the same file, it loads successfully after 10 minutes. The config I am using for debugging is:

        client_header_buffer_size 16k;
        large_client_header_buffers 4 8k;
        client_max_body_size 50m;
        client_body_buffer_size 16k;
        client_header_timeout 20m;
        client_body_timeout 20m;
        send_timeout 20m;

    Did I miss some configuration?

  • Auto restart server if virtual memory is too low

    - by Sukhjinder Singh
    There are quite a number of programs running on my server: httpd, varnish, mysql, memcache, java... Each of them uses a part of the virtual memory, and varnish is configured to be allocated 3 GB. Due to a high traffic load (around 100K), our server ran out of memory, the oom-killer was invoked, and we had to reboot the server. We have 8 GB of virtual memory, and for various reasons we cannot add more. My question is: is there an automated script that will monitor how much virtual memory is left and, based on certain criteria (say, 500 MB left), restart the server automatically? I know this is not the proper solution, but we have to do it; otherwise we don't know when the server will hit OOM, and by the time we find out and restart the server, we have lost our visiting users.
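
    A minimal sketch of such a watchdog, using Python's third-party psutil library (the 500 MB threshold, the polling interval, and the reboot command are the assumptions here; restarting only the biggest consumer, e.g. varnish, would be gentler than a full reboot):

        import subprocess
        import time

        import psutil  # pip install psutil

        THRESHOLD = 500 * 2**20  # 500 MB, per the criterion above

        while True:
            if psutil.virtual_memory().available < THRESHOLD:
                # Gentler alternative: restart the worst offender instead,
                # e.g. subprocess.run(["service", "varnish", "restart"]).
                subprocess.run(["shutdown", "-r", "now"])
            time.sleep(60)  # poll once a minute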

  • We want to set up a low-cost private cloud [closed]

    - by Virtual Jasper
    We are a small company with very limited funds. To improve our server reliability, we are studying a migration to the cloud. The cloud providers we have seen charge by resources such as CPU, RAM, disk space, high availability, etc. We have a server team, so we are also considering building a private cloud. We have looked at Windows 8 Server, which requires license fees, so we are looking at the Linux side, at Ubuntu and OpenStack. What is the difference between the Ubuntu and OpenStack solutions? Are both free of software license fees, with payment only for technical support?

  • Windows Server 2008 alerting to low memory

    - by t1nt1n
    I have a file and print server running Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30, our monitoring software reports that available memory is at 1% and performance is dire. There is nothing in the event logs to point to an issue, and at that point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:

    - Checked whether any backups are running: none at all.
    - Checked scheduled tasks: none before or during this time period.
    - Moved the VM to another host.
    - Disabled AV to rule it out as the cause.

    The server has no memory problems during the day when fully loaded with about 50 users. It had 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight) shows very little CPU usage, but RAM usage goes up.

  • Threads, Sockets, and Designing Low-Latency, High-Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers; specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously, with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and lighttpd use libevent. Nginx uses its own event library, performing much the same function as libevent, that is, picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc.). They all also seem to employ a multiprocess or multithread strategy once the connection is made, using worker threads to handle the more CPU-intensive tasks while another thread continues to listen for and handle connections (via events). This is the extent of my understanding and of my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or do the worker threads handle their own input and output? I imagine a fixed number of worker threads are spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take away enough to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for things like low-latency queues) in C, as that's the language my network services are written in.
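
    The acceptor-plus-worker-queue pattern the question describes, sketched in Python for brevity (the stdlib selectors module picks epoll/kqueue//dev/poll per platform, much as libevent does); a C server would follow the same outline with pthreads and a mutex-guarded or lock-free ring buffer:

        import queue
        import selectors
        import socket
        import threading

        jobs = queue.Queue()  # thread-safe FIFO: the event thread feeds workers

        def worker():
            # Each worker owns its connection's I/O once it is handed over.
            while True:
                conn = jobs.get()
                conn.recv(4096)  # read the request; CPU-heavy work goes here
                conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nhi\n")
                conn.close()

        for _ in range(4):  # fixed pool of worker threads
            threading.Thread(target=worker, daemon=True).start()

        sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on *BSD
        srv = socket.create_server(("", 8080))
        srv.setblocking(False)
        sel.register(srv, selectors.EVENT_READ)

        while True:  # event thread: only accepts, never blocks on clients
            for key, _ in sel.select():
                conn, _addr = key.fileobj.accept()
                jobs.put(conn)  # hand the connection to the worker pool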

  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to keep it up in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low-end, I don't mean crap: the CPU needs to be x86 and score at least 1,000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this?

    EDIT: more info. I need to run a home server that will perform mainly light tasks; an x86 CPU sadly is the only route for my use. I want to be able to run the server and the router/modem in case of power failure. Now, regarding how long the power might fail:
    1) 1 hour is OK for most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases, the power might fail for days. I believe this is very unlikely to happen.
    More is great, but really, how often will power fail for more than 3 hours? I believe once a year at best; that's too rare to care about. Given the above, I am looking for a cost-effective way to achieve 1-3 hours of power, or 6 hours if possible.

    Solutions: you guys gave me great ideas.
    1) Power generator: no good, as power will fail for 10 seconds before it returns. Also, I read online that "clean" power generators cost $1.5k+, so that's out of budget. A non-clean generator might damage electronics, right?
    2) Solar power: I don't know for sure about this. It sounds like a great idea, honestly too good to be true. For only $200 I get 100+ W? What are the drawbacks here?
    3) UPS: this seems to be the best option. The only problem is the cost:
        Cost < $200 = great
        $400 = budget limit
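
    For sizing the UPS option, the back-of-envelope arithmetic is battery watt-hours times inverter efficiency, divided by load watts. A tiny sketch with assumed numbers (a single 12 V / 9 Ah battery, as found in many consumer UPSes, and a ~30 W PC-plus-modem load):

        # Rough UPS runtime estimate -- every number below is an assumption.
        battery_wh = 12 * 9   # one 12 V, 9 Ah sealed lead-acid battery: 108 Wh
        efficiency = 0.85     # typical inverter efficiency
        usable = 0.8          # lead-acid shouldn't be drained to zero
        load_w = 30           # low-power PC + modem

        hours = battery_wh * efficiency * usable / load_w
        print(f"~{hours:.1f} h")  # ~2.4 h -> multi-hour targets need more battery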

  • Java: Send BufferedImage through a Socket with a low bit depth

    - by Martijn Courteaux
    Hi, the title says enough, I think. I have a full-quality BufferedImage and I want to send it through an OutputStream at a low bit depth. I don't want an algorithm that changes the quality pixel by pixel, so it stays full quality. So the goal is to write the image (at its full resolution) through the OutputStream at a very small size. Thanks, Martijn
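
    In Java, the usual route is to encode through a lossy codec rather than touch pixels (ImageIO.write with an ImageWriteParam compression quality, for instance). The same idea sketched in Python with Pillow, with a hypothetical filename and peer address: the in-memory image keeps its full resolution while the bytes on the wire stay small:

        import io
        import socket

        from PIL import Image  # pip install Pillow

        img = Image.open("frame.png").convert("RGB")  # hypothetical source
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=30)      # lossy, small payload

        with socket.create_connection(("peer.example.com", 9000)) as sock:
            payload = buf.getvalue()
            sock.sendall(len(payload).to_bytes(4, "big"))  # length-prefix framing
            sock.sendall(payload)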

  • AVCam memory low warning

    - by Red Nightingale
    This is less a question and more a record of what I've found around the AVCam sample code provided by Apple for iOS 4 and 5 camera manipulation. The symptom of the problem, for me, was that my app would crash on launching the AVCamViewController after taking around 5-10 photos. I ran the app through the memory leak profiler and there were no apparent leaks, but on inspection with the Activity Monitor I discovered that something called mediaserverd was growing by 17 MB every time the camera was launched, and when it reached ~100 MB the app crashed with multiple low-memory warnings.

  • Matlab - applying a low-pass filter to a vector?

    - by waitinforatrain
    If I have a simple low-pass filter, e.g.

        filt = fir1(20, 0.2);

    and a vector of numbers (a signal), e.g. [0.1, -0.2, 0.3, -0.4], how do I actually apply the filter I've created to this signal? It seems like a simple question, but I've been stuck for hours. Do I need to calculate it manually from the filter coefficients?
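
    In Matlab, filter(filt, 1, x) applies the FIR coefficients from fir1 to the vector x (the denominator is 1 for an FIR filter). The equivalent in Python/SciPy, for illustration:

        import numpy as np
        from scipy import signal

        # fir1(20, 0.2) ~ firwin(21, 0.2): order 20 means 21 taps, and both
        # normalize the cutoff frequency to Nyquist.
        filt = signal.firwin(21, 0.2)

        x = np.array([0.1, -0.2, 0.3, -0.4])
        y = signal.lfilter(filt, [1.0], x)  # FIR: denominator is just 1
        print(y)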

  • Converting WAV to MP3 on Linux with low bitrates

    - by Olly
    I need to convert WAV files to MP3 so they can be played on a website. I think LAME would probably be the best tool, but the WAV files have a low bitrate (around 8 kbit/s, recorded from a phone), and LAME's website states that it is the "best MP3 encoder at mid-high bitrates and at VBR". Is there a better encoder for lower bitrates? If so, can you define "better"?
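
    For reference, a hedged sketch of driving LAME for phone-quality speech, wrapped in Python's subprocess (the flag values are assumptions to tune by ear; at 8 kHz sample rates LAME switches to MPEG-2.5 framing, which allows bitrates down to 8 kbit/s):

        import subprocess

        subprocess.run([
            "lame",
            "-m", "m",          # mono -- phone recordings have one channel
            "--resample", "8",  # keep the 8 kHz telephone sample rate
            "-b", "16",         # 16 kbit/s CBR; MPEG-2.5 allows as low as 8
            "input.wav", "output.mp3",
        ], check=True)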
