Search Results

Search found 2558 results on 103 pages for 'significant digits'.

  • bandwidth throttling C linux

    - by bob moch
    Hi, I'm currently writing a function to create a sleep time I can pause between packets for a port scanner I'm building for personal/educational use on my home network. What I'm currently doing is opening /proc/net/dev and reading the 9th set of digits on the eth0 line to find the current bytes sent, then reading it again and doing some math to work out a delay to sleep between sending a packet to a port to identify and fingerprint it. My problem is that no matter what throttle % I use, it always seems to send packets at the same rate. I think the flaw is mainly in how I mathematically derive my sleep delay. Edit: don't mind the function declaration and the struct handling; all I'm doing is spawning this function in a thread, passing a pointer to a struct to the function, recreating the struct locally and then freeing the passed struct's memory.

        void *bandwidthmonitor_cmd(void *param)
        {
            char cmdline[1024], *bytedata[19];
            int i = 0, ii = 0;
            long long prevbytes = 0, currentbytes = 0, elapsedbytes = 0,
                      byteusage = 0, maxthrottle = 0;
            command_struct bandwidth = *((command_struct *)param);
            free(param);

            //printf("speed: %d\n throttle: %d\n\n", UPLOAD_SPEED, bandwidth.throttle);
            maxthrottle = UPLOAD_SPEED * bandwidth.throttle / 100;
            //printf("max throttle:%lld\n", maxthrottle);

            FILE *f = fopen("/proc/net/dev", "r");
            if (f != NULL) {
                while (1) {
                    while (fgets(cmdline, sizeof(cmdline), f) != NULL) {
                        cmdline[strlen(cmdline)] = '\0';
                        if (strncmp(cmdline, " eth0", 6) == 0) {
                            bytedata[0] = strtok(cmdline, " ");
                            while (bytedata[i] != NULL) {
                                i++;
                                bytedata[i] = strtok(NULL, " ");
                            }
                            bytedata[i + 1] = '\0';
                            currentbytes = atoi(bytedata[9]);
                        }
                    }
                    i = 0;
                    rewind(f);
                    elapsedbytes = currentbytes - prevbytes;
                    prevbytes = currentbytes;
                    byteusage = 8 * (elapsedbytes / 1024);
                    //printf("usage:%lld\n", byteusage);
                    if (ii & 0x40) {
                        SLEEP += (maxthrottle - byteusage) * -1.1; //-2.5;
                        if (SLEEP < 0) { SLEEP = 0; }
                        //printf("sleep:%d\n", SLEEP);
                    }
                    usleep(25000);
                    ii++;
                }
            }
            return NULL;
        }

    SLEEP and UPLOAD_SPEED are global variables; UPLOAD_SPEED is in kb/s and comes from a speedtest function that measures the upload speed of my computer. This function runs inside a POSIX thread, updating SLEEP, which the threads doing the socket work read and sleep by after every packet. For testing, instead of scanning only the ports I want to check, I scan all ports over and over so I can run dstat on another machine to watch bandwidth, and no matter what bandwidth.throttle is set to, it always seems to generate the same amount of traffic to the dstat machine. The way I calculate how much I "should" throttle is by finding the maximum throttle speed, defined as maxthrottle = upload_speed * throttle / 100. For example, if my upload speed were 1000kb/s and my throttle were 90 (90%), my max throttle would be 900kb/s. From there it finds the current bytes sent from /proc/net/dev and adjusts the sleep time by incrementing or decrementing it via SLEEP += (maxthrottle - byteusage) * -1.1. In theory this should raise or lower the sleep time based on how many bytes were used. The if (ii & 0x40) statement is just for moderation control: it makes it so SLEEP is only set to a new value every 30-40 iterations. Final notes: the main problem is that the sleep timer does not seem to modify the rate at which packets are sent.
    Or maybe it's just my implementation, because on a freshly restarted machine, where /proc/net/dev reports a low byte count, it does seem to raise the sleep timer accordingly on my 60kb/s-upload machine (e.g. if I set the throttle to 2 it will raise the sleep timer until outbound network bandwidth reaches the max bandwidth threshold), but when I try running it on a server which has been online forever, it doesn't seem to work as nicely, if at all. If anyone can suggest a new method of monitoring the network to adjust a sleep delay, or sees a flaw in my code, please let me know. Thank you.
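
    For comparison, here is a minimal open-loop pacing sketch; this is an editorial illustration rather than anything from the post above, and the function and parameter names are invented for the example. Instead of a feedback loop against /proc/net/dev, it derives the inter-packet delay directly from the target rate, so the delay does not depend on how large the interface's lifetime byte counters already are:

        /* Sketch: pace sends so the average rate stays near target_kbit
         * (kilobits per second, matching the post's 8 * bytes / 1024 units). */
        #include <unistd.h>

        static void pace_packet(long packet_bytes, long target_kbit)
        {
            if (target_kbit <= 0)
                return;
            double bytes_per_sec = target_kbit * 1024.0 / 8.0;
            /* time this packet "costs" at the target rate, in microseconds */
            useconds_t delay = (useconds_t)(packet_bytes * 1e6 / bytes_per_sec);
            usleep(delay);
        }

    Called after each send with the size of the probe just written, this throttles the same way whether the machine was freshly rebooted or has been up for months, because no lifetime counter is involved.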

  • CodePlex Daily Summary for Tuesday, November 22, 2011

    CodePlex Daily Summary for Tuesday, November 22, 2011

    Popular Releases:

    - Developer Team Article System Management - DTASM v1.3: [Persian release notes; the text was reduced to "?" placeholders in extraction and is not recoverable.]
    - VideoLan DotNet for WinForm, WPF & Silverlight 5 - VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains a Silverlight 5 library, Vlc.DotNet.Silverlight; a sample can be tested here. The new version adds and corrects many features. Correction: reinitialize some variables. Deprecated: the Logging API, since VLC 1.2 (08/20/2011). Added subitem in LocationMedia (for Youtube videos, ...). Updated the WPF sample to use Youtube videos. Many other corrections.
    - SharePoint 2010 FBA Pack - 1.2.0: Web parts are now fully customizable via HTML templates (Issue #323). The FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue #447). The FBA Pack will now work in a zone that does not have FBA enabled (another zone must have FBA enabled, and the zone must contain the me...
    - SharePoint 2010 Education Demo Project - Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1.
    - SQL Monitor - tracking sql server activities - SQLMon 4.1 alpha 6: 1. improved support for schema; 2. added find-reference when right-clicking the object list; 3. added object rename support.
    - BugNET Issue Tracker - BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework, which is a requirement; because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...
    - Anno 2070 Assistant - v0.1.0 (STABLE): Features: production chains - Eco (complete), Tycoon (disabled, incomplete), Tech (disabled, incomplete); Supply (disabled, incomplete); Calculator (disabled, incomplete); building layouts - Eco (complete), Tycoon (disabled, incomplete), Tech (disabled, incomplete); Credits (complete).
    - Free SharePoint 2010 Sites Templates - SharePoint Server 2010 Sites Templates: here is the list of site templates to be downloaded.
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio - VsTortoise Build 30 Beta: Note: this release does not work with custom VsTortoise toolbars; these get removed every time you shut down Visual Studio (#7940). New: support for TortoiseSVN 1.7 added (the download contains both setups, for TortoiseSVN 1.6 and 1.7). New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows grouping items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: the installer didn...
    - nopCommerce. Open source shopping cart (ASP.NET MVC) - nopCommerce 2.30: Highlighted features & improvements: performance optimization; back-in-stock notifications; product special price support; catalog mode (based on customer role). To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - WPF Converters - V1.2.0.0: Support for enumerations, value types, and reference types in the expression converter's equality operators. The expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062). StyleCop conformance (more or less).
    - Json.NET - 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default. Change - KeyValuePairConverter no longer cares about the order of the key and value properties. Change - time zone conversions now use the new TimeZoneInfo instead of TimeZone. Fix - fixed boolean values sometimes being capitalized when converting to XML. Fix - fixed an error when deserializing ConcurrentDictionary. Fix - fixed serializing some Uris returning the incorrect value. Fix - fixed an occasional error when...
    - Media Companion - MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Replaced 'Rebuild' with 'Refresh' throughout the entire code; Rebuild will now be known as Refresh. mc_com.exe has been fully updated. TV show resolutions: resolved issue #206 - having to hit save twice when updating runtime manually. Shrunk cache size and lowered loading times f...
    - Delta Engine - Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account, however). If you want a binary release for the games (like v0.9.0), just say so in the forum or here and we will quickly prepare one; it is just not much different from v0.9.0, so it was left out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.
    - SharpMap - Geospatial Application Framework for the CLR - SharpMap-0.9-AnyCPU-Trunk-2011.11.17: A build of SharpMap from the 0.9 development trunk as of 2011-11-17. For most applications the AnyCPU release is recommended, but an x86 build is included in case you need it. For some data providers (GDAL/OGR, SqLite, PostGis) you need to also reference the SharpMap.Extensions assembly; for SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assembly.
    - AJAX Control Toolkit - November 2011 Release (version 51116): AJAX Control Toolkit .NET 4 - binary - AJAX Control Toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5 - binary - AJAX Control Toolkit for .NET 3.5 and sample site (recommended). Notes: the current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0; the latest version that is compatible with ASP.NET 2.0 can be found h...
    - Microsoft Ajax Minifier - 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...
    - MVC Controls Toolkit - Mvc Controls Toolkit 1.5.5: Added: the DateRangeAttribute now accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Added: Menu and MenuFor helpers capable of handling a "currently selected element"; the developer can choose between a standard nested menu based on a standard SimpleMenuItem class or an item template based on a custom class; also added helpers to build the tree structure containing all the data items the menu takes its info from. Improved the pager; now the developer ...
    - SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2 - SharpCompress 0.7: Reworked the API to be more consistent (see the supported-formats table). Added some more helper methods, e.g. OpenEntryStream (RarArchive/RarReader does not support this). Fixed up tests.
    - Silverlight Toolkit - Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new: ListPicker once again works in a ScrollViewer. LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored into the public type LongListSelectorItem to provide users better access to the values in selection-changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others). There is no longer a GestureListener dependency with the C...

    New Projects:

    - Andrecorder: [Chinese; garbled to "?" placeholders in extraction; the surviving text indicates an Android application.]
    - Android Tree Bulletin: Android bulletin reader in tree format.
    - Bài tập lớp môn HCI: [Vietnamese; diacritics garbled in extraction] Class project for an HCI course: tuition-fee management software for Hanoi University of Industry.
    - Basic Grid Collision sample in XNA: Shows how to implement basic grid collision in XNA. The project uses the XNA 4.0 framework and C#.
    - Club Manager: A web site for managing sport clubs / teams.
    - Create email with encrypt text implement TEA encryption and Web Service: RahaTEA Mail is an application to send messages in secret. It implements TEA encryption and a web service.
    - CRM 2011 Layers: Several .NET layers to customize CRM 2011.
    - CTEF: China Tomorrow Education Foundation website.
    - dns?????: [Chinese; garbled in extraction; the surviving text indicates a DNS tool written in C#.]
    - EAF: Extensibility Application Framework.
    - Energy SBA: In order to compete with large companies for Federal contracts, small businesses need information. This application seeks to show standard methods of using remote APIs to integrate information into a Metro interface, using services provided by the Small Business Administration (SBA).
    - EPiOptimiser - Scan your EPiServer configuration to optimise start up times: EPiScanner scans your EPiServer configuration to optimise start-ups by generating a recommended exclude list of assemblies to include in the EPiServer framework config. It can be used on the command line, as a custom build task, or integrated into Visual Studio as an external tool.
    - FreeIDS - Free Intrusion Detection System: Don't want someone to use your computer? Don't want to use a system password? Want to see when someone accessed your computer, and the time/date? FreeIDS is it!
    - FtpServerAdministrator: Makes it easier to administer some FTP servers by code, although it can only be used for FileZilla Server now. Developed in C#.
    - GreenPoint Online: Tools and components that help you customize an Office 365 / SharePoint Online environment.
    - HCC C# Workshop: This project contains the code for the exercises of the HCC C# Workshop.
    - KsigDo - Real time view model syncing across user screens: KsigDo shows real-time view model syncing across user screens, using ASP.NET, Knockout and SignalR. Real-time data syncing across user views *was* hard, especially in web applications. Most of the time the second user needs to refresh the screen to see the changes made by the first user, or we need to implement some long polling that fetches the data and does the update manually. Now, with SignalR and Knockout, ASP.NET developers can take advantage of view model syncing across users, that...
    - lineseven: [Chinese; reduced to "?" placeholders in extraction and not recoverable.]
    - Mail Size Labeler for GMail: A small utility that labels large e-mails in your Gmail account. This utility scans your Gmail account and adds labels to large e-mails so you can clean your mailbox and free space. The labels this utility adds are: Size 1M-2M, Size 2M-5M, Size 5M-10M, Size 10M-15M, Size 15M plus. Note: a single e-mail thread may get multiple labels if different e-mails of the thread fit different filters.
    - MathService: Complex numbers, standard class extensions, etc.
    - MyGameProject: games.
    - MySQL Connect 2 ASP.NET: Example project showing how to connect a MySQL database to an ASP.NET web project. IDE: Visual Studio 2010 Pro. Programming language: C#. Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/
    - nl: Nutri Leaf Dev.
    - omr.event.js: Simple JS event injector.
    - Pastebin4DotNet: This project is an example of how to consume an API, in this case the Pastebin API.
    - Pomelo: Pomelo is a website example.
    - QuickDevFrameWork: [Chinese; mostly garbled in extraction; the surviving text mentions IoC and AOP via PostSharp.]
    - Readable Passphrase Generator: Generates passphrases which are (mostly) grammatically correct but nonsensical. These are easy to remember but difficult to guess (for humans or computers). Developed in C# with a KeePass plugin, console app and public API.
    - Rosyama.ru for Windows Phone 7: [Russian; garbled in extraction] A Windows Phone 7 client for the site rosyama.ru.
    - SimpleBatch: As the name suggests, a simple batch framework allowing you to define batch jobs in XML format. Thus far it contains a basic selection of processors, such as: File, Email, SQL (SQL Server client), SharePoint document library, and custom processors.
    - Site de Notícias: [Portuguese] A college project consisting of building a news website.
    - SPWikiProvisioning: Create, update and delete SharePoint wiki pages using feature activation and deactivation handlers.
    - SVN Automated Control With C#: A library created to control TortoiseSVN automatically, without an interface, for the author's own build server; no results could be found on Google to achieve this task, so the author went about creating this library, which does most of the tasks needed. SVN can be controlled by command line, and using that as the basic idea the most common SVN commands were coded; most of the commands are done, but not all. If you like this library then please use it, we...
    - TremplinCMS: TremplinCMS is a CMS framework for ASP .NET 4.
    - vlu0206sms: SMSMaker, being developed by team0206.
    - WCF DataService RequestStream Access on webInvoke HTTP POST: This library provides access to the message-body request stream of a WCF Data Service (formerly ADO.NET Data Service), which is not possible with the original WCF Data Service class. You are enabled to pass data (e.g. JSON, files) via HTTP POST to the request body. It uses the operation context (DbContext) provided by the DataService<T> class to get access to the request stream.
    - WebOS: Welcome to join us to build our OS project.
    - Wp7StarterDantas: [Portuguese] Getting started with WP7.
    - WpfCollaborative3D: WpfCollaborative3D.
    - XNA Content Preprocessor: Allows you to compile all of your XNA assets outside of your normal XNA project. This means more time building your game or app instead of your content.

  • Apache load balancer limits with Tomcat over AJP

    - by PAS
    Hi all, I have Apache acting as a load balancer in front of 3 Tomcat servers. Occasionally, Apache returns 503 responses, which I would like to eliminate completely. None of the 4 servers is under significant load in terms of CPU, memory, or disk, so I am a little unsure what is reaching its limits or why. 503s are returned when all workers are in error state - whatever that means. Here are the details:

    Apache config:

        <IfModule mpm_prefork_module>
            StartServers          30
            MinSpareServers       30
            MaxSpareServers       60
            MaxClients           200
            MaxRequestsPerChild 1000
        </IfModule>
        ...
        <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
        </Proxy>
        # Tomcat HA cluster
        <Proxy balancer://mycluster>
            BalancerMember ajp://10.176.201.9:8009 keepalive=On retry=1 timeout=1 ping=1
            BalancerMember ajp://10.176.201.10:8009 keepalive=On retry=1 timeout=1 ping=1
            BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=1 timeout=1 ping=1
        </Proxy>
        # Passes thru track. or api.
        ProxyPreserveHost On
        ProxyStatus On
        # Original tracker
        ProxyPass /m balancer://mycluster/m
        ProxyPassReverse /m balancer://mycluster/m

    Tomcat config:

        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <Service name="Catalina">
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
            <Engine name="Catalina" defaultHost="localhost">
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
            </Engine>
          </Service>
        </Server>

    Apache error log:

        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.10:8009 (10.176.201.10) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.10)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.10
        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.9:8009 (10.176.201.9) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.9)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.9
        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.219.168:8009 (10.176.219.168) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.219.168)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.219.168
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        (the last line is repeated six times in the log)

    Load balancer top info:

        top - 23:44:11 up 210 days, 4:32, 1 user, load average: 0.10, 0.11, 0.09
        Tasks: 135 total, 2 running, 133 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.2%id, 0.1%wa, 0.0%hi, 0.1%si, 0.3%st
        Mem: 524508k total, 517132k used, 7376k free, 9124k buffers
        Swap: 1048568k total, 352k used, 1048216k free, 334720k cached

    Tomcat top info:

        top - 23:47:12 up 210 days, 3:07, 1 user, load average: 0.02, 0.04, 0.00
        Tasks: 63 total, 1 running, 62 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2097372k total, 2080888k used, 16484k free, 21464k buffers
        Swap: 4194296k total, 380k used, 4193916k free, 1520912k cached

    Catalina.out does not have any error messages in it. According to Apache's server status, it seems to max out at 143 requests/sec. I believe the servers can handle substantially more load than they are getting, so any hints about low default limits or other reasons why this setup would be maxing out would be greatly appreciated.
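
    One detail worth noticing in the config above is retry=1 timeout=1 ping=1 on each BalancerMember: a one-second AJP connect/ping budget plus a one-second retry means a brief pause on Tomcat (GC, connection backlog) can put all three workers into error state at once. As an illustrative sketch only - the exact values are assumptions to show the shape of the change, not tested recommendations - relaxing those limits and giving the AJP connector an explicit thread pool might look like this:

        # Apache side: more forgiving AJP member settings (values illustrative)
        <Proxy balancer://mycluster>
            BalancerMember ajp://10.176.201.9:8009   keepalive=On retry=60 timeout=10 ping=3
            BalancerMember ajp://10.176.201.10:8009  keepalive=On retry=60 timeout=10 ping=3
            BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=60 timeout=10 ping=3
        </Proxy>

        <!-- Tomcat side: explicit AJP thread pool and timeout (values illustrative) -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   maxThreads="256" connectionTimeout="600000" />

    With settings like these, a worker that misses one ping is retried after a minute instead of being blacklisted for the default interval, and the AJP connector has headroom above Apache's MaxClients of 200.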

  • Need help diagnosing network performance issues

    - by tokes
    I am currently working in a developing country as a system analyst for a government department. My area of expertise is software projects, but I've come across a few issues with the network setup in my office. (Unfortunately, being a developing country, there's not a lot of professional help available for this sort of thing.) Most recently, I am trying to diagnose a problem with slowness on the network.

    Our office is connected to the internet via an ADSL wireless modem/router (called Router). The modem is connected via ethernet to a switch (called Switch). The modem also acts as a wireless access point (called Wireless1), though because it is in a room at the end of the floor, its range is limited. There are ethernet ports installed around the office; the cables of these all lead back to the same switch. In closer vicinity to the bulk of the client computers, there is another wireless router that acts as an access point for those clients (called Wireless2). That router is connected via ethernet to a wall port, and therefore to Switch. There is also a Windows server which acts as a DNS server (called DNSBox), which is located in the same room and is connected directly to Switch. The topology (redrawn from the flattened diagram in the original post):

        Internet --- Router/Wireless1 (192.168.10.1)
                          |
                        Switch --- DNSBox (192.168.10.4)
                          |------- Other clients
                          |------- Wireless2 (192.168.10.198)

    One final thing to mention about the network setup: all clients are configured with manual IP addresses. Their router/gateway is set to the IP address of Router, and their DNS server is set to the IP address of DNSBox (with a secondary DNS set to an external IP, that of our ISP's DNS server).

    Here are the symptoms we are experiencing:
    - Clients connected to the Wireless2 AP experience slow and unstable connections to the internet. (Slow here is defined as speeds of ~1KB/s, though ping response times seem normal.)
    - Clients connected via ethernet to Switch also experience the same slowness.
    - Clients connected to the Wireless1 AP (i.e. connecting via wireless directly to the ADSL modem) experience normal connections to the internet.
    - Clients connected via ethernet to Router (i.e. connecting via ethernet directly to the ADSL modem) also experience normal connections to the internet.

    I also tried to gauge the connection performance between two machines on the network via ethernet:
    - A file transfer between two clients both directly connected to Switch was the fastest.
    - A file transfer between one client directly connected to Switch and one client directly connected to Router (which is itself directly connected to Switch) performed much slower.
    - A file transfer between two clients directly connected to Router also performed slowly.

    Things I have attempted in order to diagnose the problem:
    - Restarted Switch: no change.
    - We tried unplugging ethernet jacks from Switch 4 at a time and testing the internet connection. The thought here was that perhaps a client on the network has contracted a virus and is spamming the network with traffic (not very technical, I know). Unfortunately we couldn't get any significant increase in performance using this method. There were a couple of times when it seemed to be better, but then the connection speed quickly dropped back to a slow/dead pace. I didn't want to unplug all jacks from Switch because I was concerned that users might be affected or that I would re-plug the jacks incorrectly (should I even be worried about that? A port is a port on a switch, right?).
    - I tried swapping the ethernet cable used to connect Router to Switch: no change in performance.
    - I tried swapping the port used on Switch for Router: no change in performance.

    Anyone got any ideas on what this could be? Should I be mentioning specific brand names/models of the hardware used? Virus outbreaks are common in this country/office: what could I be doing to figure out if a virus is at fault? If it is a virus, it doesn't seem to be generating a lot of traffic to/from the internet, because a) I can still get a good speed if I am directly connected to Router / Wireless1, and b) our ISP data usage has not risen suspiciously. Thanks for your help!

    Update #1: Here are the specs of some of the hardware. Switch is an Edimax ES3132RL 32-Port 10/100 Rackmount Switch; Router is a D-Link DSL-G604T.

    Update #2: I just tried unplugging everything except a laptop and Router from Switch. Speeds are still slow. I guess that means Router / Switch are not being flooded? It seems more and more likely that the cause is something to do with the interaction between Router and Switch. However, I still can't find any useful resources on setting the LAN speed for either (and I'm not well-versed in these advanced networking configurations).
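
    To put harder numbers on the client-to-client transfers described above, a tool like iperf gives a cleaner throughput measurement than file copies (this is an editorial suggestion; the IP address is taken from the diagram and the durations are arbitrary):

        # on a machine plugged directly into Switch (e.g. near the DNSBox)
        iperf -s

        # on a client connected to Wireless2 or another Switch port
        iperf -c 192.168.10.4 -t 30 -i 5

    Comparing a Switch-to-Switch pair against a Switch-to-Router pair should confirm whether the Router/Switch link itself (for example, a speed/duplex mismatch between the two devices) is the bottleneck, independent of internet traffic.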

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens:
    - Trans log backups.
    - Login failure for "user2".
    - Minidump.
    - Another minidump for the scheduler.
    - Repeated 17883 events.
    - Server fails little by little until it won't accept any requests.
    - Reboot is all that gets us going again (a band-aid).

    Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883s again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883s. Our entire tech team here is scratching heads... some ideas we're kicking around:
    - An MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't roll back all the way.
    - The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this theory is that there isn't significant CPU usage. At any rate, we're not ruling out a SQL 2005 bug at all.
    - Maybe we added one too many import processes and have reached our limit on this box (hard to believe).

    Looking at SQLDUMP0151.log at the time of one of the crashes: there are some "login failures" and then there are two stack dumps, first a normal stack dump, second a scheduler dump. Here's a snippet, re-wrapped for readability:

        2009-11-10 11:59:14.95 spid63  Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required.
        2009-11-10 11:59:15.09 spid63  Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
        2009-11-10 12:02:33.24 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:02:33.24 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        (the same Error 18456, State 16 pair repeats at 12:08:21, 12:13:49, 12:15:16 and 12:18:24)
        2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5'
        2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt
        2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
        2009-11-10 12:18:39.02 spid111 * *****************************************************************************
        2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP:
        2009-11-10 12:18:39.02 spid111 *   11/10/09 12:18:39 spid 111
        2009-11-10 12:18:39.02 spid111 *   Exception Address = 0159D56F Module(sqlservr+0059D56F)
        2009-11-10 12:18:39.02 spid111 *   Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
        2009-11-10 12:18:39.02 spid111 *   Access Violation occurred writing address 00000000
        2009-11-10 12:18:39.02 spid111 *   Input Buffer 138 bytes - [interleaved Unicode/hex dump; it decodes to a call to NRSC_PTAC_QA.dbo.UspSelNextAccount with an @intFormID int parameter and @txtAlias = 'GQE9732']
        2009-11-10 12:18:39.02 spid111 *   MODULE    BASE      END       SIZE
        2009-11-10 12:18:39.02 spid111 *   sqlservr  01000000  02C09FFF  01c0a000
        2009-11-10 12:18:39.02 spid111 *   ntdll     7C800000  7C8C1FFF  000c2000
        2009-11-10 12:18:39.02 spid111 *   kernel32  77E40000  77F41FFF  00102000
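
    A side note on the repeating "Error: 18456 ... State: 16" entries in the log above: on SQL Server 2005, state 16 means the login could not open its default database. Two quick checks (illustrative queries, not from the original post) line this up with the first two steps of the timeline, since a default database that is briefly unavailable during the transaction log backups would produce exactly this pattern:

        -- Which database does each login land in by default?
        SELECT name, default_database_name
        FROM sys.server_principals
        WHERE type_desc IN ('SQL_LOGIN', 'WINDOWS_LOGIN');

        -- Are any databases offline/restoring/suspect right now?
        SELECT name, state_desc
        FROM sys.databases;

    If 'standard_user2' defaults to a database that goes unavailable around backup time, that would explain the login failures, though not by itself the access violation that follows.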

  • NetApp erroring with: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT

    - by Sobrique
    Since a sitewide upgrade to Windows 7 on the desktop, I've started having a problem with virus checking - specifically, when doing a rename operation on a (filer-hosted) CIFS share. The virus checker seems to be triggering a set of messages on the filer:

        [filerB: auth.trace.authenticateUser.loginTraceIP:info]: AUTH: Login attempt by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20 (server-wk8-r2).
        [filerB: auth.dc.trace.DCConnection.statusMsg:info]: AUTH: TraceDC- attempting authentication with domain controller \\MYDC.
        [filerB: auth.trace.authenticateUser.loginRejected:info]: AUTH: Login attempt by user rejected by the domain controller with error 0xc0000199: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT.
        [filerB: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: Delaying the response by 5 seconds due to continuous failed login attempts by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20.

    This seems to trigger specifically on a rename, so what we think is going on is that the virus checker is seeing a 'new' file and trying to do an on-access scan. The virus checker, which previously ran as LocalSystem and thus sent null as its authentication request, now looks rather like a DOS attack, causing the filer to temporarily blacklist it. This 5s lockout on each 'access attempt' is a minor nuisance most of the time, and really quite significant for some operations, e.g. large file transfers, where every file takes 5s.

    Having done some digging, this seems to be related to NTLM authentication:

        Symptoms
        Error message: System error 1808 has occurred. The account used is a computer account. Use your global user account or local user account to access this server. A packet trace of the failure will show the error as: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT (0xC0000199)
        Cause
        Microsoft has changed the functionality of how a Local System account identifies itself during NTLM authentication. This only impacts NTLM authentication. It does not impact Kerberos authentication.
        Solution
        On the host, please set the following group policy entry and reboot the host.
        Network Security: Allow Local System to use computer identity for NTLM: Disabled
        Defining this group policy makes Windows Server 2008 R2 and Windows 7 function like Windows Server 2008 SP1.

    So we've now got a couple of workarounds, neither particularly nice: one is to change this security option; the other is to disable virus checking, or otherwise exempt part of the infrastructure. And here's where I come to my request for assistance from ServerFault: what is the best way forward? I lack the Windows experience to be sure of what I'm seeing. I'm not entirely sure why NTLM is part of this picture in the first place - I thought we were using Kerberos authentication. I'm not sure how to start diagnosing or troubleshooting this. (We are going cross-domain: workstation machine accounts are in a separate AD and DNS domain to my filer. Normal user authentication works fine, however.) And failing that, can anyone suggest other lines of enquiry? I'd like to avoid a site-wide security option change, or if I do go that way, I'll need to be able to supply detailed reasoning. Likewise, disabling virus checking works as a short-term workaround, and applying exclusions may help... but I'd rather not, and I don't think that solves the underlying problem.

    EDIT: The filers in AD LDAP have SPNs for the following (sorry, I have to obfuscate these):

        nfs/host.fully.qualified.domain
        nfs/host
        HOST/host.fully.qualified.domain
        HOST/host

    Could it be that without a 'cifs/host.fully.qualified.domain' it's not going to work? (Or some other SPN?)

    Edit: As part of the searching I've been doing, I've found http://itwanderer.wordpress.com/2011/04/14/tread-lightly-kerberos-encryption-types/, which suggests that several encryption types were disabled by default in Win7/2008R2. This might be pertinent, as we've definitely had a similar problem with Kerberized NFSv4. There is a hidden option which may help some future Kerberos users: options nfs.rpcsec.trace on (this hasn't given me anything yet, though, so it may just be NFS-specific).

    Edit: Further digging has me tracking it back to cross-domain authentication. It looks like my Windows 7 workstation (in one domain) is not getting Kerberos tickets for the other domain, in which my NetApp filer is CIFS-joined. I've tried this separately against standalone servers (Win2003 and Win2008) and didn't get Kerberos tickets for those either. Which means I think Kerberos might be broken, but I've no idea how to troubleshoot further.

    Edit: A further update: it looks like this may come down to Kerberos tickets not being issued cross-domain. That then triggers NTLM fallback, which then runs into this problem (since Windows 7). The first port of call will be to investigate the Kerberos side of things, but in neither case do we have anything pointing at the filer being the root cause. As such - as the storage engineer - it's out of my hands. However, if anyone can point me in the direction of troubleshooting Kerberos spanning two Windows AD domains (Kerberos realms), that would be appreciated. Options we're going to be considering for resolution:
    - Amend the policy option on all workstations via GPO (as above).
    - Talking to the AV vendor about the rename triggering scanning.
    - Talking to the AV vendor about running AV as a service account.
    - Investigating Kerberos authentication (why it's not working, whether it should be).
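
    For the cross-domain ticket question raised in the edits above, the usual first checks from the Windows 7 workstation go along these lines (commands are standard Windows tools; the filer FQDN is a placeholder to substitute):

        REM Flush cached tickets, then request a CIFS service ticket explicitly
        klist purge
        klist get cifs/filerB.other.domain.example

        REM From an admin prompt in the filer's domain: is a cifs/ SPN registered?
        setspn -Q cifs/filerB*

    If klist get fails across the trust but succeeds for servers in the workstation's own domain, that points at the realm trust or referral path rather than at the filer, which would match the suspicion in the final edit.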

  • Unusually high memory usage on a CentOS VPS with 512 guaranteed RAM

    - by Andrei Bârsan
    I'm working on a medium-sized web application written in PHP that's running on a VPS with 512 MB RAM. The webapp hasn't been officially launched yet, so there isn't too much traffic going on, just me and a few other people working on it. There is another slightly smaller webapp also hosted on this machine, among 4-5 other small static sites. We are running CentOS 5 32-bit and cPanel/WHM. This is the result of running ps aux and, as you can see, it's not using 100% of the RAM. However, on the hypanel overview it's always shown as using around 500 MB of RAM, just for running apache, mysql, and the lowest-memory-footprint versions of the mail server, ftp server, etc.

        -bash-3.2# ps aux
        USER      PID %CPU %MEM    VSZ   RSS TTY    STAT START TIME COMMAND
        root        1  0.0  0.0   2156   664 ?      Ss   12:08 0:00 init [3]
        root     1123  0.0  0.0   2260   548 ?      S<s  12:08 0:00 /sbin/udevd -d
        root     1462  0.0  0.0   1812   568 ?      Ss   12:08 0:00 syslogd -m 0
        named    1496  0.0  0.0   3808   820 ?      Ss   12:08 0:00 nsd
        named    1497  0.0  0.0  10672   756 ?      S    12:08 0:00 nsd
        named    1499  0.0  0.0   3880   584 ?      S    12:08 0:00 nsd
        root     1514  0.0  0.1   7240  1064 ?      Ss   12:08 0:00 /usr/sbin/sshd
        root     1522  0.0  0.0   2832   832 ?      Ss   12:08 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root     1534  0.0  0.1   3712  1328 ?      S    12:08 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql -
        mysql    1667  0.0  2.9 225680 30884 ?      Sl   12:08 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql -
        mailnull 1766  0.0  0.1   9352  1100 ?      Ss   12:08 0:00 /usr/sbin/exim -bd -q60m
        root     1797  0.0  0.0   2156   708 ?      Ss   12:08 0:00 /usr/sbin/dovecot
        root     1798  0.0  0.0   2632  1012 ?      S    12:08 0:00 dovecot-auth
        root     1816  0.0  3.0  38580 32456 ?      Ss   12:08 0:01 /usr/local/bin/spamd -d --allowed-ips=127.0.0.1 --pidfi
        root     1839  0.0  1.6  63200 17496 ?      Ss   12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     1846  0.0  0.1   5416  1468 ?      Ss   12:08 0:00 pure-ftpd (SERVER)
        root     1848  0.0  0.1   6212  1244 ?      S    12:08 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin
        root     1856  0.0  0.1   4492  1112 ?      Ss   12:08 0:00 crond
        root     1864  0.0  0.0   2356   428 ?      Ss   12:08 0:00 /usr/sbin/atd
        dovecot  1927  0.0  0.1   5196  1952 ?      S    12:08 0:00 pop3-login
        dovecot  1928  0.0  0.1   5196  1948 ?      S    12:08 0:00 pop3-login
        dovecot  1929  0.0  0.1   5316  2012 ?      S    12:08 0:00 imap-login
        dovecot  1930  0.0  0.2   5416  2228 ?      S    12:08 0:00 imap-login
        root     1939  0.0  0.1   3936  1964 ?      S    12:08 0:00 cPhulkd - processor
        root     1963  0.0  0.8  15876  8564 ?      S    12:08 0:00 cpsrvd (SSL) - waiting for connections
        root     1966  0.0  0.7  15172  7748 ?      S    12:08 0:00 cpdavd - accepting connections on 2077 and 2078
        root     1990  0.0  0.2   5008  3136 ?      S    12:08 0:00 queueprocd - wait to process a task
        root     2017  0.0  2.9  38580 31020 ?      S    12:08 0:00 spamd child
        root     2018  0.0  0.5   8904  5636 ?      S    12:08 0:00 /usr/bin/perl /usr/local/cpanel/bin/leechprotect
        nobody   2021  0.0  3.2  66512 33724 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   2022  0.0  3.1  67812 33024 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   2024  0.0  1.9  64364 20680 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     2027  0.0  0.4   9000  4540 ?      S    12:08 0:00 tailwatchd
        root     2032  0.0  0.1   4176  1836 ?      SN   12:08 0:00 cpanellogd - sleeping for logs
        nobody   3096  0.0  1.9  64572 20264 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3097  0.0  2.8  66008 30136 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3098  0.0  2.8  65704 29752 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3099  0.0  3.1  67260 32816 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        andrei   3448  0.0  0.1   3204  1632 ?      S    12:50 0:00 imap
        nobody   3537  0.0  1.9  64308 20108 ?      S    13:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3614  0.0  1.9  64576 20628 ?      S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3615  0.0  1.3  63200 14672 ?      S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     3626  0.0  0.2  10232  2964 ?      Rs   13:14 0:00 sshd: root@pts/0
        root     3648  0.0  0.1   3844  1600 pts/0  Ss   13:14 0:00 -bash
        root     3826  0.0  0.0   2532   908 pts/0  R+   13:21 0:00 ps aux

    Lately, without any significant changes to the configuration, the memory usage has started peaking and going over 512 MB, causing the virtual server to kill apache, basically murdering our site in the process. Do you have any idea whether this is normal and more resources should be acquired? I don't think so... since there isn't too much data or traffic online yet.
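
    Two quick checks often explain "mystery" RAM numbers on a 512 MB VPS like this one; the second applies only if the VPS is OpenVZ/Virtuozzo-based, which is an assumption here, not something stated in the post:

        # Linux counts filesystem cache as "used"; the -/+ buffers/cache line
        # shows what applications are actually consuming.
        free -m

        # On OpenVZ/Virtuozzo containers, a nonzero failcnt means the container
        # hit a bean-counter limit (e.g. privvmpages) and the kernel killed a process.
        cat /proc/user_beancounters

    If the privvmpages failcnt climbs when Apache dies, the panel is charging for allocated rather than resident memory, and trimming Apache's child count or the spamd children would be the first thing to try.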

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please note: portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com; asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual-licensed, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Runs on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections

    Bonus points:
    - Open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables

    So far I've tried:
    - Ruby: 148 MB on disk, 23000 files
    - Portable Python: 54 MB on disk, 2800 files
    - Strawberry Perl: 123 MB on disk, 3600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting

    ---- cut: points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and where I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.

    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time-waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged into. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.
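
    For the deployment side, the copy-and-run workflow described above maps directly onto psexec; the machine and file names below are placeholders for illustration (-c copies the local file to the remote machine before running it, -f forces an overwrite of an existing copy):

        REM One-shot: copy the local script to the remote machine and execute it there
        psexec \\fieldbox01 -c -f deploy_check.cmd

    The same pattern works for pushing a single-folder interpreter plus script, which is exactly the portable-app shape the question is after.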

  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good website hosting company that can offer all the services I need for hosting my PHP & MySQL based website. This is a community-based website, and users will be able to upload pictures, etc. The hosting company I have in mind currently lets me do everything: use mail(), run CRON jobs, and so on. Of course, they charge about $6/month. Now, the only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time, which rather contradicts the "UNLIMITED SPACE" ad on their front page. Apart from this, I know of no other reason why I should not go with this hosting company. But a 50,000-file limit is something I cannot live with once the users grow significantly in number and the files they upload exceed 50,000. Since this is a dynamic website that also involves sensitive matters like payments, I am not sure whether I should go ahead with this company as I am just starting out and then later switch over to a better hosting company that does not impose the 50,000-file limit. If I do need to switch over after hosting with this company, I will need to take backups of all the files located in my account (jpg, zip, etc.), then upload them to the new host. I am not aware of any tools that can help me in this process; can you please mention any that you know of?

    I could go with the other companies right now, but their cost is double or triple the current price, and they all offer fewer features than my current choice. If I pay more, they are ready to accommodate my higher demands. Unfortunately, the company I am willing to go with now does NOT have any higher/better plans that I could switch to, so that's the really, really bad part.

    So my question(s):
    1. Since I am starting out with my website, and since the initial user base is going to be small, should I go ahead with the current choice and then, once demand increases, switch over to a better provider? If yes, how can I transfer my database, and especially the jpg files, etc., to the new provider? I don't even know the tools required to back up and restore to another host.
    2. (I don't like this idea, but still...) Should I pay more right now and go with a better provider, without knowing whether the website is going to do that well, just to save myself the trouble of backing up the 50,000 files on the old host and uploading them to a new one, and start paying double or triple the price without even knowing whether I would see the returns I expected?

    Backup and restore in such bulky numbers is something I have never done before, and hence I am stuck trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: it is the customer's responsibility to back up and restore data, and they are not liable for any loss. So no matter which hosting company I go with, they ask me to take backups via FTP so that I can restore them whenever I want (and it seems safer to have the files locally with me anyway). Some provide tools for backup and some do not, and I am not sure how much their backup tools can be trusted, considering the disclaimers they come with.
    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent 2 days fighting with myself trying to decide what to do, and finally concluded that this is a double-edged sword: I can't arrive at a satisfactory final decision without involving others' suggestions. I believe someone out there must have faced such a troublesome decision, so all your suggestions to help me decide are appreciated. Thank you all.
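
    On the mechanics of moving that many files, one hedged example (host names, users and paths below are placeholders): lftp's mirror command can pull a complete copy of the account over FTP and push it to the new host, which avoids hand-driving an FTP client through 50,000 files.

        # Pull everything under public_html from the old host to a local folder
        lftp -u olduser -e "mirror --verbose /public_html ./site-backup; quit" ftp.oldhost.example

        # Push the local copy up to the new host (mirror -R reverses direction)
        lftp -u newuser -e "mirror -R --verbose ./site-backup /public_html; quit" ftp.newhost.example

        # The database is separate: dump on the old host, import on the new one
        mysqldump -u dbuser -p mydb > mydb.sql

    The uploaded images travel as ordinary files with this approach, so only the MySQL data needs a separate dump and import step.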

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), with 2.5 GB RAM allocated. The disk is set up the way CentOS offered to set it up, namely as a volume inside an LVM volume group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I had configured things more or less reasonably. I ran the following as root in a shell on the VM, i.e. just repeating the dd command 10 times, which prints the transfer rate each time:

        for i in 1 2 3 4 5 6 7 8 9 10; do
          dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ... but after 7-8 of these, it then prints:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm comfortable that it is not due to the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I had done something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference to the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.
    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: [iostat output was attached as an image in the original post and is not reproduced here.] When things go "bad", iostat -d -m -x 1 shows this: [likewise not reproduced here.]
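
    To separate guest-side from host-side latency while the dd loop runs, ESXi's own esxtop is the usual companion to the guest's iostat (the keystrokes below are standard esxtop views; the interpretation is a common rule of thumb, not something from the original post):

        # On the ESXi host, via SSH or the local shell:
        esxtop
        # press 'd' for the disk-adapter view, or 'u' for the disk-device view,
        # and watch DAVG/cmd (device latency) versus KAVG/cmd (hypervisor-side time)

    High DAVG with low KAVG points at the drive/controller itself (for instance, a consumer 7200 RPM disk's write cache filling up and then draining slowly), while high KAVG points at the hypervisor layer, which would make it an ESXi configuration issue as the post suspects.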

  • I created a custom (WPF) DataGridBoundColumn and get unexpected behaviour, what am I missing?

    - by aspic
    Hi, I am using a DataGrid (from Microsoft.Windows.Controls.DataGrid) to display items, and on this DataGrid I use a custom column which extends DataGridBoundColumn. I have bound an ObservableCollection to the ItemsSource of the DataGrid. Conversation is one of my own custom datatypes which (among other things) has a boolean called Active. I bound this boolean to the DataGrid as follows: DataGridActiveImageColumn test = new DataGridActiveImageColumn(); test.Header = "Active"; Binding binding1 = new Binding("Active"); test.Binding = binding1; ConversationsDataGrid.Columns.Add(test); My custom DataGridBoundColumn DataGridActiveImageColumn overrides the GenerateElement method to let it return an Image depending on whether the conversation it is called for is active or not. The code for this is: namespace Microsoft.Windows.Controls { class DataGridActiveImageColumn : DataGridBoundColumn { protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem) { // Create Image Element Image myImage = new Image(); myImage.Width = 10; bool active=false; if (dataItem is Conversation) { Conversation c = (Conversation)dataItem; active = c.Active; } BitmapImage myBitmapImage = new BitmapImage(); // BitmapImage.UriSource must be in a BeginInit/EndInit block myBitmapImage.BeginInit(); if (active) { myBitmapImage.UriSource = new Uri(@"images\active.png", UriKind.Relative); } else { myBitmapImage.UriSource = new Uri(@"images\inactive.png", UriKind.Relative); } // To save significant application memory, set the DecodePixelWidth or // DecodePixelHeight of the BitmapImage value of the image source to the desired // height or width of the rendered image. If you don't do this, the application will // cache the image as though it were rendered as its normal size rather than just // the size that is displayed. // Note: In order to preserve aspect ratio, set DecodePixelWidth // or DecodePixelHeight but not both. myBitmapImage.DecodePixelWidth = 10; myBitmapImage.EndInit(); myImage.Source = myBitmapImage; return myImage; } protected override FrameworkElement GenerateEditingElement(DataGridCell cell, object dataItem) { throw new NotImplementedException(); } } } All this works as expected, and when the Active boolean of a conversation changes while the program is running, this is automatically updated in the DataGrid. However: when there are more entries on the DataGrid than fit at any one time (and vertical scrollbars are added), the behavior of the column is strange for all the conversations. The conversations that are initially loaded are correct, but when I use the scrollbar of the DataGrid, conversations that enter the view seem to have a random status (although more inactive than active ones, which corresponds to the actual ratio). When I scroll back up, the active images of the conversations initially shown (before scrolling) are not correct anymore either. When I replace my custom DataGridBoundColumn class with (for instance) DataGridCheckBoxColumn it works as intended, so my extension of the DataGridBoundColumn class must be incomplete. Personally I think it has something to do with the following, from the MSDN page on the GenerateElement method (http://msdn.microsoft.com/en-us/library/system.windows.controls.datagridcolumn.generateelement%28VS.95%29.aspx): Return Value Type: System.Windows.FrameworkElement A new, read-only element that is bound to the column's Binding property value. I do return a new element (the image) but it is not bound to anything.
I am not quite sure what I should do. Should I bind the Image to something? To what exactly? And why? (I have been experimenting, but was unsuccessful thus far, hence this post) Thanks in advance.
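    One direction worth trying, sketched below and untested: instead of reading dataItem once inside GenerateElement, return an element whose Source is bound through the column's own Binding, so that when the virtualizing DataGrid recycles a row container for a different Conversation, the binding re-evaluates. ActiveToImageColumnConverter is a hypothetical IValueConverter you would write to map the boolean to one of the two BitmapImages:

    protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem)
    {
        Image myImage = new Image();
        myImage.Width = 10;
        // Bind instead of assigning once: when the grid recycles this cell
        // for another row, the binding re-runs against the new data item.
        System.Windows.Data.Binding b = new System.Windows.Data.Binding(
            ((System.Windows.Data.Binding)this.Binding).Path.Path);
        b.Converter = new ActiveToImageColumnConverter(); // hypothetical converter
        myImage.SetBinding(Image.SourceProperty, b);
        return myImage;
    }

    This matches the MSDN wording quoted above ("a new, read-only element that is bound to the column's Binding property value") and would also explain why DataGridCheckBoxColumn, whose generated element does bind, behaves correctly under scrolling.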

    Read the article

  • SubSonic Stored Procedure Issue - Data Generated at Stored Procedure is different from Data Received

    - by ShaShaIn
    Hi All, I am facing an unknown problem while using a stored procedure with SubSonic. I have written a stored procedure & application code that takes first name & last name as input parameters and returns the last login id as an output parameter. It creates the login id from the first character of the first name & the complete last name when no existing login id matches; otherwise it adds 1 to the last login id, e.g. First Name - Mark, Last Name - Waugh, First Login Id - MWaugh, Second Login Id - MWaugh1, Third Login Id - MWaugh2 etc. Stored Procedure SET ANSI_NULLS OFF GO SET QUOTED_IDENTIFIER ON GO CREATE PROCEDURE [dbo].[Users_FetchLoginId] ( @FirstName nvarchar(64), @LastName nvarchar(64), @LoginId nvarchar(256) OUTPUT ) AS DECLARE @UserId nvarchar(256); SET @UserId = NULL; SET @LoginId = NULL; SELECT @UserId = LoweredUserName FROM aspnet_Users WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName)) IF @@rowcount = 0 OR @UserId IS NULL BEGIN SET @LoginId = (SUBSTRING(@FirstName, 1, 1) + @LastName); print @LoginId RETURN 1; END ELSE BEGIN SELECT TOP 1 LoweredUserName FROM aspnet_Users WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName + '%')) ORDER BY LoweredUserName DESC RETURN 2; END Application Code public string FetchLoginId(string firstName, string lastName) { SubSonic.StoredProcedure sp = SPs.UsersFetchLoginId( firstName, lastName, null ); sp.Command.AddReturnParameter(); sp.Execute(); if (sp.Command.Parameters.Find(delegate(QueryParameter queryParameter) { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue != System.DBNull.Value) { int returnCode = Convert.ToInt32(sp.Command.Parameters.Find(delegate(QueryParameter queryParameter) { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue, CultureInfo.InvariantCulture); if (returnCode == 1) { // UserName as First Character of First Name & Full Last Name return sp.Command.Parameters[2].ParameterValue.ToString(); } if (returnCode == 2) { DataSet ds = sp.GetDataSet(); if (null == ds || null == ds.Tables[0] || 0 == ds.Tables[0].Rows.Count) return ""; string maxLoginId = ds.Tables[0].Rows[0]["LoweredUserName"].ToString(); string initialLoginId = firstName.Substring(0, 1) + lastName; int maxLoginIdIndex = 0; int initialLoginIdLength = initialLoginId.Length; if (maxLoginId.Substring(initialLoginIdLength).Length == 0) { maxLoginIdIndex++; // UserName as Max Lowered User Name Found & Incrementer as Suffix (Here, First Incrementer i.e. 1) return (initialLoginId + maxLoginIdIndex); } if (int.TryParse(maxLoginId.Substring(initialLoginIdLength), out maxLoginIdIndex)) { if (maxLoginIdIndex > 0) { maxLoginIdIndex++; // UserName as Max Lowered User Name Found & Incrementer as Suffix return (initialLoginId + maxLoginIdIndex); } } } } return ""; // fallback so that all code paths return a value } Now the problem is that for some input (see test data below), the login id is created correctly at the SQL Server end, but on the application/SubSonic DAL side some characters are truncated. First Name - Jenelia and Last Name - Kanupatikenalaalayampentyalavelugoplansubhramanayam [dbo].[Users_FetchLoginId] - Execute Stored Procedure Separately - Login Id Is Correct JKanupatikenalaalayampentyalavelugoplansubhramanayam public string FetchLoginId(string firstName, string lastName) - Application Code DAL Side - LoginId Is Wrongly Received From Stored Procedure JKanupatikenalaalayampentyalavelugoplansubhramanay You can easily see that 2 characters are removed.
If the data is correctly generated by the stored procedure, why are characters removed when it is received in the stored procedure's output parameter? Is it due to some internal known or unknown bug in SubSonic? Your help would be greatly appreciated. Thanks in advance...
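    For what it's worth, losing exactly 2 characters from a 52-character value leaves 50, which is the classic symptom of an output parameter whose declared Size is smaller than the value the server assigns: ADO.NET silently truncates to the parameter's Size. Whether SubSonic sizes the @LoginId parameter at 50 by default is an assumption here, but the mechanism is easy to demonstrate with plain ADO.NET (this is not SubSonic's API, just an illustration):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class LoginIdDemo
    {
        static string FetchLoginId(string connStr, string firstName, string lastName)
        {
            using (SqlConnection cn = new SqlConnection(connStr))
            using (SqlCommand cmd = new SqlCommand("Users_FetchLoginId", cn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@FirstName", firstName);
                cmd.Parameters.AddWithValue("@LastName", lastName);
                // Size matters: declare 256 to match the proc's nvarchar(256).
                // If this were sized at 50, the value would come back as
                // JKanupati...subhramanay -- exactly 2 characters short.
                SqlParameter loginId = cmd.Parameters.Add("@LoginId", SqlDbType.NVarChar, 256);
                loginId.Direction = ParameterDirection.Output;
                cn.Open();
                cmd.ExecuteNonQuery();
                return (string)loginId.Value;
            }
        }
    }

    If SubSonic exposes the generated parameter collection, inspecting the Size it assigned to @LoginId would confirm or kill this theory.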

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front-end will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM bitcode format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing and test-state information, and calls from the testing engine into methods of particular C++ objects whose code was supplied not by C++ but by the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using the process in 1) above in such a way that the result acts as if it was all written in C++?

    Read the article

  • Web service using Data Dynamics ActiveReports occasionally slows down

    - by Swoop
    My company is running into a problem with a web service that is written in C#/ASP.Net. The service receives an identity key for data in SQL Server and a path to generate and save a PDF report for this data. In most cases, this web service returns results to the calling web pages very quickly, usually within a few seconds max. However, it seems to occasionally hit a significant slowdown. The web application calling the web service will generate a timeout error when this slowdown occurs. We have checked and the PDF does get created and saved to the server, so it looks like the web service eventually finishes executing. It seems to take about 1 to 2 minutes for processing to have completed. The PDF is generated using ActiveReports from Data Dynamics. When this problem occurs, making a small change to the web service's config file (i.e., adding a blank space to a connection string line) seems to restart the web service and everything is perfectly ok for a period of time afterwards. Other web applications that are running on the same web server do not seem to experience this type of behavior, only this particular web service. I have added the code for the web service below. It consists of basic calls to 3rd-party libraries. We are not able to recreate this problem in test. I am wondering what might be causing this issue? [WebMethod] public string Publish(int identity, string transactionType, string directory, string filename) { try { AdpConnection Conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]); AdpCommand Cmd = new AdpCommand("storedproc_GetData", Conn); AdpParameter Param; Cmd.CommandType = CommandType.StoredProcedure; Param = Cmd.CreateParameter("@Identity", DbType.Int32); Param.Value = identity; Cmd.Parameters.Add(Param); Conn.Open(); string aResponse = Cmd.ExecuteScalar().ToString(); Conn.Close(); if (transactionType == "typeA") { //Parse response DataSet dsResponse = ParseDataResponse(aResponse); //dsResponse.WriteXml(@ConfigurationManager.AppSettings["DocsDir"] + identity.ToString() + ".xml"); DataDynamics.ActiveReports.ActiveReport3 rpt = new DataDynamics.ActiveReports.ActiveReport3(); rpt.LoadLayout(@ConfigurationManager.AppSettings["myReportPath"] + "TypeA.rpx"); rpt.AddNamedItem("ReportPath", @ConfigurationManager.AppSettings["myReportPath"]); rpt.AddNamedItem("XMLSTRING", FormatXML(dsResponse.GetXml())); DataDynamics.ActiveReports.DataSources.XMLDataSource xmlds = new DataDynamics.ActiveReports.DataSources.XMLDataSource(); xmlds.FileURL = null; xmlds.RecordsetPattern = "//DataPatternA"; xmlds.LoadXML(FormatXML(dsResponse.GetXml())); if (!System.IO.Directory.Exists(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\")) { System.IO.Directory.CreateDirectory(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\"); } string sXML = FormatXML(dsResponse.GetXml()); StreamWriter sw = new StreamWriter(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".xml", false); sw.Write(sXML); sw.Close(); rpt.DataSource = xmlds; rpt.Run(true); DataDynamics.ActiveReports.Export.Pdf.PdfExport xPdf = new DataDynamics.ActiveReports.Export.Pdf.PdfExport(); xPdf.Export(rpt.Document, @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"); } } catch(Exception ex) { return "Error: " + ex.ToString(); } return @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"; }
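    One thing that is cheap to rule out, since "fine for a while, then a stall until the config-file touch recycles the app" is consistent with connection-pool exhaustion: the connection above is only closed on the success path. A sketch of the same data access with deterministic cleanup, assuming the Adp* classes implement IDisposable as ADO.NET-style wrappers normally do:

    // Sketch only: if an exception fires between Open() and Close() in the
    // original code, the connection leaks until the GC finalizes it; under
    // enough load the pool empties and callers block, which looks exactly
    // like a periodic slowdown that a service restart cures.
    string aResponse;
    using (AdpConnection conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]))
    using (AdpCommand cmd = new AdpCommand("storedproc_GetData", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        AdpParameter param = cmd.CreateParameter("@Identity", DbType.Int32);
        param.Value = identity;
        cmd.Parameters.Add(param);
        conn.Open();
        aResponse = cmd.ExecuteScalar().ToString();
    } // connection returned to the pool even on failure

    The report, StreamWriter, and export objects would benefit from the same treatment if they are disposable.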

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system, as I should rather say, comes in two or three parts. Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces. Part 2: An ear deployed to the same cluster as the first ear. This ear contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed of our applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email: The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, google has given me some oooooold websphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM: * -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util * -Dcom.ibm.CORBA.iiop.noLocalCopies=true CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM.
One side effect of this is that Java object-derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas? Recommendations? Thanks

    Read the article

  • Web service occasionally slows down significantly

    - by Swoop
    My company is running into a problem with a web service that is written in C#/ASP.Net. The service receives an identity key for data in SQL Server and a path to generate and save a PDF report for this data. In most cases, this web service returns results to the calling web pages very quickly, usually within a few seconds max. However, it seems to occasionally hit a significant slowdown. The web application calling the web service will generate a timeout error when this slowdown occurs. We have checked and the PDF does get created and saved to the server, so it looks like the web service eventually finishes executing. It seems to take about 1 to 2 minutes for processing to have completed. The PDF is generated using ActiveReports from Data Dynamics. When this problem occurs, making a small change to the web service's config file (i.e., adding a blank space to a connection string line) seems to restart the web service and everything is perfectly ok for a period of time afterwards. Other web applications that are running on the same web server do not seem to experience this type of behavior, only this particular web service. I have added the code for the web service below. It consists of basic calls to 3rd-party libraries. We are not able to recreate this problem in test. I am wondering what might be causing this issue? [WebMethod] public string Publish(int identity, string transactionType, string directory, string filename) { try { AdpConnection Conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]); AdpCommand Cmd = new AdpCommand("storedproc_GetData", Conn); AdpParameter Param; Cmd.CommandType = CommandType.StoredProcedure; Param = Cmd.CreateParameter("@Identity", DbType.Int32); Param.Value = identity; Cmd.Parameters.Add(Param); Conn.Open(); string aResponse = Cmd.ExecuteScalar().ToString(); Conn.Close(); if (transactionType == "typeA") { //Parse response DataSet dsResponse = ParseDataResponse(aResponse); //dsResponse.WriteXml(@ConfigurationManager.AppSettings["DocsDir"] + identity.ToString() + ".xml"); DataDynamics.ActiveReports.ActiveReport3 rpt = new DataDynamics.ActiveReports.ActiveReport3(); rpt.LoadLayout(@ConfigurationManager.AppSettings["myReportPath"] + "TypeA.rpx"); rpt.AddNamedItem("ReportPath", @ConfigurationManager.AppSettings["myReportPath"]); rpt.AddNamedItem("XMLSTRING", FormatXML(dsResponse.GetXml())); DataDynamics.ActiveReports.DataSources.XMLDataSource xmlds = new DataDynamics.ActiveReports.DataSources.XMLDataSource(); xmlds.FileURL = null; xmlds.RecordsetPattern = "//DataPatternA"; xmlds.LoadXML(FormatXML(dsResponse.GetXml())); if (!System.IO.Directory.Exists(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\")) { System.IO.Directory.CreateDirectory(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\"); } string sXML = FormatXML(dsResponse.GetXml()); StreamWriter sw = new StreamWriter(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".xml", false); sw.Write(sXML); sw.Close(); rpt.DataSource = xmlds; rpt.Run(true); DataDynamics.ActiveReports.Export.Pdf.PdfExport xPdf = new DataDynamics.ActiveReports.Export.Pdf.PdfExport(); xPdf.Export(rpt.Document, @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"); } } catch(Exception ex) { return "Error: " + ex.ToString(); } return @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"; }

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (ie: five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue StackOverflow :) public static IObservable<T> Buffered<T>(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex approach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the System.Reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
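    For the record, here is a minimal sketch of the pacing idea that needs no timer and no queue. It assumes (true for Rx on .NET) that Delay time-shifts OnCompleted as well as OnNext, and it propagates errors, completion, and disposal through the normal Rx plumbing:

    using System;
    using System.Reactive.Linq; // namespace in current Rx releases

    public static class BufferedObservableExtensions
    {
        // Each raw item becomes a tiny stream that yields the item
        // immediately and completes minDelay later; Concat subscribes to
        // them one at a time, so consecutive items are at least minDelay
        // apart, while an idle source adds no extra latency.
        public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
        {
            return source
                .Select(x => Observable.Empty<T>().Delay(minDelay).StartWith(x))
                .Concat();
        }
    }

    Unlike the queue version, nothing wakes up when there is no work, and disposing the outer subscription tears down the whole chain.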

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1TB data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and the sessions abort. The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features: WAIT TYPE = ASYNC_NETWORK_IO The SQL being run is “(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid)”. This is relatively innocuous SQL that should only return one or two records, not a large dataset. NO OTHER SQL statements have been implicated in the blocking, only this SQL statement. This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans. This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked. HOST ID varies. Different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2. When we trace back to the web server implicated in the blocking, we see the following: There is always some sort of application related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL Session. The error messages vary, usually some sort of SystemOutofMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before, but it didn't lead to blocking. Why now?) No known problems with the network adapters on either the web servers or the SQL server. (In any event the record set returned by the offending query would be small.) Things ruled out: Indexes are regularly defragmented. Statistics regularly updated. Increased sample size of statistics on claim.primaryclaimid. Forced recompilation of the cached execution plan. Created a compound index with primaryclaimid, claimid. No networking problems. No known issues on the web server. No changes to application software on web servers. We hypothesize that the chain of events goes something like this: Web server process submits SQL above. SQL server executes the SQL, during which it acquires a lock on the claim table. Web server process gets an error and dies. SQL server session is hung waiting for the web server process to read the data set. SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout. Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?

    Read the article

  • Why does jquery leak memory so badly?

    - by Thomas Lane
    This is kind of a follow-up to a question I posted last week: http://stackoverflow.com/questions/2429056/simple-jquery-ajax-call-leaks-memory-in-ie I love the jquery syntax and all of its nice features, but I've been having trouble with a page that automatically updates table cells via ajax calls leaking memory. So I created two simple test pages for experimenting. Both pages do an ajax call every .1 seconds. After each successful ajax call, a counter is incremented and the DOM is updated. The script stops after 1000 cycles. One uses jquery for both the ajax call and to update the DOM. The other uses the Yahoo API for the ajax and does a document.getElementById(...).innerHTML to update the DOM. The jquery version leaks memory badly. Running in drip (on XP Home with IE7), it starts at 9MB and finishes at about 48MB, with memory growing linearly the whole time. If I comment out the line that updates the DOM, it still finishes at 32MB, suggesting that even simple DOM updates leak a significant amount of memory. The non-jquery version starts and finishes at about 9MB, regardless of whether it updates the DOM. Does anyone have a good explanation of what is causing jquery to leak so badly? Am I missing something obvious? Is there a circular reference that I'm not aware of? Or does jquery just have some serious memory issues? Here is the source for the leaky (jquery) version: <html> <head> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load('jquery', '1.4.2'); </script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { $.ajax({ url: '/html/delme.x', type: 'GET', success: incrementCounter }); } function incrementCounter(data) { if (counter<1000) { counter++; $('#counter').text(counter); setTimeout(leakTest,100); } else $('#counter').text('finished.'); } </script> </head> <body> <div>Why is memory usage going up?</div> <div id="counter"></div> </body> </html> And here is the non-leaky version: <html> <head> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/yahoo/yahoo-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/event/event-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/connection/connection_core-min.js"></script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { YAHOO.util.Connect.asyncRequest('GET', '/html/delme.x', {success:incrementCounter}); } function incrementCounter(o) { if (counter<1000) { counter++; document.getElementById('counter').innerHTML = counter; setTimeout(leakTest,100); } else document.getElementById('counter').innerHTML = 'finished.' } </script> </head> <body> <div>Memory usage is stable, right?</div> <div id="counter"></div> </body> </html>

    Read the article

  • Best way to get back to using the power of lxml after having to use a regex to find something in an

    - by PyNEwbie
    I am trying to rip some text out of a large number of html documents (numbers in the hundreds of thousands). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the document. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of the chapter. Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, H tags, div tags or any other block level element. After trying repeatedly to use lxml to help me identify the beginning and end of each chapter, I have determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing html element is, the chapter label is always in the form of >Chapter # It is a little more complicated in that there might be some white space or non-breaking space represented in different ways (&nbsp;, &#160;, or just spaces). Nonetheless it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.) But now I want to use lxml to get the text out. My thought is that I have really no choice but to walk along my string to find the close tag for the element that encloses the text I am using to find the relevant section. That is, here is one example where the element holding the chapter name is a div: <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left"><font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">Chapter 1.&#160;&#160;&#160;Our Beginnings.</font></div> So I am imagining that I would begin at the location where I found the match for chapter 1 and set up a regular expression to find the next </div|</td|</p|</h1 . . . So at this point I have identified the type of element holding my chapter heading, and I can use the same logic to find all of the text that is within that element: that is, set up a regular expression to help me mark from >Chapter 1.&#160;&#160;&#160;Our Beginnings.< So I have identified where my Chapter 1 begins, and I can do the same for Chapter 2 (which is where Chapter 1 ends). Now I am imagining that I am going to snip the document beginning at the opening of the element that I identified as the element that indicates where Chapter 1 begins, and ending just before the opening of the element that I identified as the element that indicates where Chapter 2 begins. The string that I have identified will then be fed to lxml to use its power to get the content. I am going to all of this trouble because I have read over and over: never use a regular expression to extract content from html documents, and I have not hit on a way to be as accurate with lxml to identify the starting and ending locations for the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is Our Beginnings; it could be Our Red Canary. Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate less than 60% of the time, but a very short regular expression has given me better than 95% success.
I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved a similar problem and, if so, what approach (not the details, mind you) they would offer.

    Read the article

  • Rendering a Long Document on iPad

    - by benjismith
    I'm implementing a document viewer with highlighting/annotation capabilities for a custom document format on iPad. The documents are kind of long (100 to 200 pages, if printed on paper) and I've had a hard time finding the right approach. Here are the requirements: 1) Basic rich-text styling: control of left/right margins. Control of font name, size, foreground/background color, and line spacing. Bold, italics, underline, etc. 2) Selection and highlighting of arbitrary text regions (not limited to paragraph boundaries, like in Safari/UIWebView). 3) Customization of the Cut/Copy/Paste popup (what is that thing called anyhow? UIActionBar?) This is one of the essential requirements of the app. My first implementation was based on UIWebView. I just rendered the document as HTML with CSS for text styling. But I couldn't get the kind of text selection behavior I wanted (across paragraph boundaries) and the UIActionBar can't be customized from within UIWebView. So I started working on a javascript approach, faking the device text-selection behavior using jQuery to trap touch events and dynamically modifying the DOM to change the background color of selected regions of text. I built a fake UIActionBar control as a hidden DIV, positioning it and unhiding it whenever there was an active selection region. Not too shabby. The main problem is that it's SLOOOOOOOW. Scrolling through the document is nice and quick, but dynamically changing the DOM is not very snappy. Plus, I couldn't figure out how to recreate the magnifier loupe, so my fake text-selection GUI doesn't look quite the same as the native implementation. Also, I haven't yet implemented the communication bridge between the javascript layer and the objective-c layer (where the rest of the app lives), but it was shaping up to be a huge hassle. So I've been looking at CoreText, but there are precious few examples on the web. I spent a little time with this simple little demo: http://github.com/jonasschnelli/I7CoreTextExample/ It shows how to use CoreText to draw an NSAttributedString into a UIView. But it has its own problems: It doesn't implement text-selection behavior, and it doesn't present a UIActionBar, so I don't have any idea how to make that happen. And, more importantly, it tries to draw the entire document all at once, with significant performance degradations for long documents. My documents can have thousands of paragraphs, and less than 1% of the document is ever on screen at a time. On the plus side, these documents already contain precise formatting information. I know the exact page-position of every line of text, so I don't need a layout engine. Does anyone know how to implement this sort of view using CoreText? I understand that a full-fledged implementation is overkill for a question like this, but I'm looking for a good CoreText example with a few basic requirements: 1) Precise layout & formatting control (using the formatting metrics and text styles I've already calculated). 2) Arbitrary selection of text. 3) Customization of the UIActionBar. 4) Efficient recycling of resources for off-screen objects. I'd be happy to implement my own recycling when text elements scroll off-screen, but wouldn't that require re-implementing UIScrollView?
I'm brand-new to iPhone development, and still getting used to Objective-C, but I've been working in other languages (Java, C#, flex/actionscript, etc) for more than ten years, so I feel confident in my ability to get the work done, if only I had a better feel for the iPhone SDK and the common coding patterns for stuff like this. Is it just me, or does the SDK documentation really suck? Anyhow, thanks for your help!

    Read the article

  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0): public override unsafe int GetHashCode() { fixed (char* str = ((char*) this)) { char* chPtr = str; int num = 0x15051505; int num2 = num; int* numPtr = (int*) chPtr; for (int i = this.Length; i > 0; i -= 4) { num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0]; if (i <= 2) { break; } num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1]; numPtr += 2; } return (num + (num2 * 0x5d588b65)); } } Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things: If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code? Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing? UPDATE: In response to Andras Zoltan's closing remark: There's also the point made in Tim's answer(+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong. Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.
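    To make the trade-off concrete, here is roughly what the caching under discussion looks like when applied to an immutable wrapper type (illustration only; this is not how String is implemented, and the sentinel handling shown is just one common way to avoid a separate bool field):

    // One extra int per instance -- the cost the question weighs.
    // 0 doubles as "not computed yet"; a genuine hash of 0 is remapped
    // to 1 so it is still cached after the first computation.
    public sealed class CachedKey
    {
        private readonly string _value;
        private int _hash;

        public CachedKey(string value) { _value = value; }

        public override int GetHashCode()
        {
            if (_hash == 0)
            {
                int h = _value.GetHashCode();
                _hash = (h == 0) ? 1 : h;
            }
            return _hash;
        }

        public override bool Equals(object obj)
        {
            CachedKey other = obj as CachedKey;
            return other != null && string.Equals(_value, other._value);
        }
    }

    Note that even this toy version has to spend thought on the sentinel, and that for the BCL every string instance in every process would pay the four bytes whether or not its hash is ever requested.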

    Read the article

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code: UTF8Encoding encoding = new System.Text.UTF8Encoding(); byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters)); httpRequest = WebRequest.Create((domain)) as HttpWebRequest; httpRequest.Timeout = 10000000; httpRequest.ReadWriteTimeout = 10000000; httpRequest.Credentials = CredentialCache.DefaultCredentials; httpRequest.Method = "POST"; httpRequest.ContentType = "application/x-www-form-urlencoded"; httpRequest.ContentLength = Data.Length; Stream SendReq = httpRequest.GetRequestStream(); SendReq.Write(Data, 0, Data.Length); SendReq.Close(); HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse(); return httpResponse.GetResponseStream(); The web service functions by receiving a JSON-encoded document as part of the URL (e.g. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw), and receives another JSON document as response data. This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones (and only on the Sprint network). We have tested the code on an identical device on the AT&T network (via DeviceAnywhere) and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us. When this code executes, the user's device throws the following exception: System.Net.WebException Could not establish secure channel for SSL/TLS Stack trace: at System.Net.HttpWebRequest.finishGetRequestStream() at System.Net.HttpWebRequest.GetRequestStream() at OurApp.GetResponseStream(String domain, Hashtable parameters) inner exception: System.IO.IOException Authentication failed because the remote party has closed the transport stream. Stack trace: at System.Net.SslConnectionState.ClientSideHandshake() at System.Net.SslConnectionState.PerformClientHandShake() at System.Net.Connection.connect(Object ignored) at System.Threading.ThreadPool.WorkItem.doWork(Object o) at System.Threading.Timer.ring() Examining the server Apache logs shows no hits from the user's IP - I don't think the device is even attempting to send a packet before failing. If relevant, the server is running Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver where this error was copied from was not code signed; however, the application (which produces the same error, without the detailed messages) is signed with a GeoTrust certificate, so we don't believe this is a code signing issue. The application installs and launches without issue on all phones - it's just establishing this SSL connection that is breaking for these users. One significant issue in troubleshooting is that there is a substantial inconvenience each time we try out a solution (need to find a "volunteer" customer), so we're really looking for a silver bullet or a better understanding of the handshaking process so we can be reasonably confident we only need to ask the user to test it one or two more times. One final mention: we have tried the sync both over ActiveSync and also over GPRS with identical results.
Any thoughts would be greatly appreciated!
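    One diagnostic worth trying on an affected device, sketched below: if the Treo's root-certificate store is missing (or has an outdated copy of) the CA chain for the server certificate, the handshake fails with exactly this exception before any application data is sent. Temporarily accepting all certificates isolates that cause. This assumes ServicePointManager.CertificatePolicy is available on the device's Compact Framework build, and it must never ship in production:

    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    // Diagnosis only: accepts every server certificate, so a handshake
    // failure that disappears under this policy points at the device's
    // certificate store rather than at the network or the app code.
    public class AcceptAllCertificatePolicy : ICertificatePolicy
    {
        public bool CheckValidationResult(ServicePoint srvPoint,
            X509Certificate certificate, WebRequest request, int certificateProblem)
        {
            return true;
        }
    }

    // Before the first HttpWebRequest:
    // ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();

    If the sync then succeeds on the Sprint Treos, the real fix is to install the CA's root certificate on those devices rather than to keep the permissive policy.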

    Read the article

  • Null pointer exception comparing two strings in Java

    - by David
    I got this error message and I'm not quite sure what's wrong: Exception in thread "main" java.lang.NullPointerException at Risk.runTeams(Risk.java:384) at Risk.blobRunner(Risk.java:220) at Risk.genRunner(Risk.java:207) at Risk.main(Risk.java:176) Here are the relevant bits of code (I will draw attention to the line numbers within the error message via comments in the code, as well as inputs I put into the program while it's running, where relevant): public class Risk { ... public static void main (String[]arg) { String CPUcolor = CPUcolor () ; genRunner (CPUcolor) ; //line 176 ... } ... public static void genRunner (String CPUcolor) // when this method runs I select 0 and run blob since it's my only option. There's nothing wrong with this method as far as I know; it is only significant because it takes me to blobRunner and because another one of our relevant line numbers appears. { String[] strats = new String[1] ; strats[0] = "0 - Blob" ; int s = chooseStrat (strats) ; if (s == 0) blobRunner (CPUcolor) ; // this is line 207 } ... public static void blobRunner (String CPUcolor) { System.out.println ("blob Runner") ; int turn = 0 ; boolean gameOver = false ; Dice other = new Dice ("other") ; Dice a1 = new Dice ("a1") ; Dice a2 = new Dice ("a2") ; Dice a3 = new Dice ("a3") ; Dice d1 = new Dice ("d1") ; Dice d2 = new Dice ("d2") ; space (5) ; Territory[] board = makeBoard() ; IdiceRoll (other) ; String[] colors = runTeams(CPUcolor) ; //this is line 220 Card[] deck = Card.createDeck () ; System.out.println (StratUtil.canTurnIn (deck)) ; while (gameOver == false) { idler (deck) ; board = assignTerri (board, colors) ; checkBoard (board, colors) ; } } ... public static String[] runTeams (String CPUcolor) { boolean z = false ; String[] a = new String[6] ; while (z == false) { a = assignTeams () ; printOrder (a) ; boolean CPU = false ; for (int i = 0; i<a.length; i++) { if (a[i].equals(CPUcolor)) CPU = true ; //this is line 384 } if (CPU==false) { System.out.println ("ERROR YOU NEED TO INCLUDE THE COLOR OF THE CPU IN THE TURN ORDER") ; runTeams (CPUcolor) ; } System.out.println ("is this turn order correct? (Y/N)") ; String s = getIns () ; while (!((s.equals ("y")) || (s.equals ("Y")) || (s.equals ("n")) || (s.equals ("N")))) { System.out.println ("try again") ; s = getIns () ; } if (s.equals ("y") || s.equals ("Y") ) z = true ; } return a ; } ... } // This } closes the class The reason I don't think I should be getting a NullPointerException is because in this line, a[i].equals(CPUcolor), a at index i holds a String and CPUcolor is a String. Both definitely have a value at this point; neither is null. Can anyone please tell me what's going wrong?

    Read the article

  • Problem with the output of Jquery function .offset in IE

    - by vandalk
    Hello! I'm new to jquery and javascript, and to website development overall, and I'm having a problem with the .offset function. I have the following code working fine on chrome and FF but not working on IE: $(document).keydown(function(k){ var keycode=k.which; var posk=$('html').offset(); var centeryk=screen.availHeight*0.4; var centerxk=screen.availWidth*0.4; $("span").text(k.which+","+posk.top+","+posk.left); if (keycode==37){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left-centerxk}) }; if (keycode==38){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top-centeryk}) }; if (keycode==39){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left+centerxk}) }; if (keycode==40){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top+centeryk}) }; }); What I want it to do is scroll the window a set percentage using the arrow keys, so my thought was to find the current coordinates of the top left corner of the document, add a percentage relative to the user's screen to it, and animate the scroll so that the content doesn't jump and the user doesn't lose focus from where he was. The $("span").text calls are just so I know what's happening and will be turned into comments when the code is complete. So here is what happens: on Chrome and Firefox the output of the $("span").text for the position variables is correct, starting at 0,0 and always showing how much of the content was scrolled in coordinates, but on IE it starts at -2,-2 and never gets out of it; even if I manually scroll the window until the end of it and try using the right arrow key, it will still return the initial value of -2,-2 and scroll back to the beginning. I tried substituting the offset for document.body.scrollLeft and scrollTop, but the result is the same, only this time the coordinates are 0,0. Am I doing something wrong? Or is this some IE bug? Is there a way around it or some other function I can use and achieve the same results? On another note, I implemented two other navigation options for the user in this section of the site; one is to click and drag anywhere on the screen to move it: $("html").mousedown(function(e) { var initx=e.pageX var inity=e.pageY $(document).mousemove(function(n) { var x_inc= initx-n.pageX; var y_inc= inity-n.pageY; window.scrollBy(x_inc*0.7,y_inc*0.7); initx=n.pageX; inity=n.pageY //$("span").text(initx+ "," +inity+ "," +x_inc+ "," +y_inc+ "," +e.pageX+ "," +e.pageY+ "," +n.pageX+ "," +n.pageY); // cancel out any text selections document.body.focus(); // prevent text selection in IE document.onselectstart = function () { return false; }; // prevent IE from trying to drag an image document.ondragstart = function() { return false; }; // prevent text selection (except IE) return false; }); }); $("html").mouseup(function() { $(document).unbind('mousemove'); });

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >