Search Results

Search found 8687 results on 348 pages for 'per ersson'.


  • Detection of negative integers using bit operations

    - by Nawaz
    One approach to check whether a given integer is negative uses bit operations:

        int num_bits = sizeof(int) * 8; // assuming 8 bits per byte!
        int sign_bit = given_int & (1 << (num_bits - 1)); // sign_bit is either non-zero or 0
        if (sign_bit) {
            cout << "given integer is negative" << endl;
        } else {
            cout << "given integer is positive" << endl;
        }

    The problem with this solution is that the number of bits per byte need not be 8; it could be 9, 10, 11, even 16 or 40 bits per byte. A byte doesn't necessarily mean 8 bits! Anyway, that part is easily fixed by writing:

        // CHAR_BIT is defined in limits.h
        int num_bits = sizeof(int) * CHAR_BIT; // no assumption

    It seems fine now. But is it really? Is it Standard conformant? What if the negative integer is not represented as 2's complement? What if its representation is a binary numeration system that doesn't require only negative integers to have a 1 in the most significant bit? Can we write such code so that it is both portable and Standard conformant? Related topics: Size of Primitive data types; Why is a boolean 1 byte and not 1 bit of size?
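
    A minimal, hedged sketch of a portable alternative that assumes nothing about the representation: the language defines the result of comparing a signed integer with 0 for every representation the Standard allows, so no bit twiddling is needed (the helper name is made up for illustration).

        #include <iostream>

        // Portable and Standard conformant: relational comparison with 0 is
        // well defined whether the machine uses 2's complement, 1's complement
        // or sign-magnitude.
        bool is_negative(int value)
        {
            return value < 0;
        }

        int main()
        {
            std::cout << std::boolalpha
                      << is_negative(-5) << '\n'   // true
                      << is_negative(42) << '\n';  // false
        }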

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey Guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with their own database (all residing on the same mysql instance). For the new site I'd rather all these shop databases be merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shop's database probably won't get so busy over the next few years that it exceeds the capacity of a mysql database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too large table files to have a fast loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc
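
    For what the merged, single-database layout could look like, here is a minimal sketch under the shop_id approach described above; all table, column and index names are hypothetical:

        -- One shared schema; every shop-scoped table carries a shop_id.
        CREATE TABLE shops (
            shop_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            country_code CHAR(2)      NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE orders (
            order_id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            shop_id     INT UNSIGNED    NOT NULL,
            customer_id BIGINT UNSIGNED NOT NULL,
            created_at  DATETIME        NOT NULL,
            KEY idx_shop_created (shop_id, created_at),
            CONSTRAINT fk_orders_shop FOREIGN KEY (shop_id) REFERENCES shops (shop_id)
        ) ENGINE=InnoDB;

        -- Per-shop queries stay selective because shop_id leads the index:
        -- SELECT ... FROM orders WHERE shop_id = 3 AND created_at >= '2010-01-01';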

    Read the article

  • Google Maps - custom icons with infoWindows

    - by hfidgen
    Hiya, As far as I can tell, this code is fine, and should display some custom icons with popup HTML windows. But the popups aren't working! Can anyone point out what I'm doing wrong? I can't seem to debug it myself. Thanks!

        function initialize() {
            if (GBrowserIsCompatible()) {
                var map = new GMap2(document.getElementById("map"));
                map.setCenter(new GLatLng(51.410416, -0.293884), 15);
                map.addControl(new GSmallMapControl());
                map.addControl(new GMapTypeControl());

                var i_parking = new GIcon();
                i_parking.image = "http://google-maps-icons.googlecode.com/files/parking.png";
                i_parking.iconSize = new GSize(32, 37);
                i_parking.iconAnchor = new GPoint(16, 37);
                icon_parking = { icon: i_parking };

                var marker_office = new GMarker(new GLatLng(51.410416, -0.293884));
                var marker_parking1 = new GMarker((new GLatLng(51.410178, -0.292000)), icon_parking);
                var marker_parking2 = new GMarker((new GLatLng(51.410152, -0.298948)), icon_parking);

                marker_parking1.openInfoWindowHtml('<strong>On Street Parking</strong><br>Church Road - 40p per hour');
                marker_parking2.openInfoWindowHtml('<strong>Multi Storey - Fairfield</strong><br>Upper Car Park - 90p per half hour<br>Lower Car Park - £1.20 per hour');

                map.addOverlay(marker_office);
                map.addOverlay(marker_parking1);
                map.addOverlay(marker_parking2);
            }
        }
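
    One likely culprit is that openInfoWindowHtml() is called once during initialize(), before the markers have been added to the map, and only one info window can be open at a time anyway. A hedged sketch of wiring each window to a marker click instead, using the same v2 API as above (the helper name is made up):

        // Helper: add the marker to the map and open its HTML window on click.
        function addMarkerWithInfo(map, marker, html) {
            GEvent.addListener(marker, "click", function () {
                marker.openInfoWindowHtml(html);
            });
            map.addOverlay(marker);
        }

        addMarkerWithInfo(map, marker_parking1,
            '<strong>On Street Parking</strong><br>Church Road - 40p per hour');
        addMarkerWithInfo(map, marker_parking2,
            '<strong>Multi Storey - Fairfield</strong><br>Upper Car Park - 90p per half hour' +
            '<br>Lower Car Park - £1.20 per hour');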

    Read the article

  • Avoiding seasonality assumption for stl() or decompose() in R

    - by user303922
    Hello everybody, I have high frequency commodity price data that I need to analyze. My objective is to not assume any seasonal component and just identify a trend. Here is where I run into problems with R. There are two main functions that I know of to analyze this time series: decompose() and stl(). The problem is that they both take a ts object type with a frequency parameter greater than or equal to 2. Is there some way I can assume a frequency of 1 per unit time and still analyze this time series using R? I'm afraid that if I assume frequency greater than 1 per unit time, and seasonality is calculated using the frequency parameter, then my forecasts are going to depend on that assumption.

        names(crude.data) = c('Date', 'Time', 'Price')
        names(crude.data)
        freq = 2
        win.graph()
        plot(crude.data$Time, crude.data$Price, type="l")
        crude.data$Price = ts(crude.data$Price, frequency=freq)
        # I want frequency to be 1 per unit time, but then decompose() and stl() don't work!
        dim(crude.data$Price)
        decom = decompose(crude.data$Price)
        win.graph()
        plot(decom$random[2:200], type="line")
        acf(decom$random[freq:length(decom$random - freq)])

    Thank you.
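
    A hedged sketch of one way to extract a trend without declaring any seasonality at all: fit a loess smoother against the observation index instead of going through decompose()/stl() (the span value is an arbitrary assumption to tune):

        # Trend only, no seasonal component: smooth price against observation index.
        idx   <- seq_along(crude.data$Price)
        fit   <- loess(crude.data$Price ~ idx, span = 0.1)  # span controls smoothness
        trend <- fitted(fit)
        rem   <- residuals(fit)                             # "random" part, no seasonality assumed

        plot(idx, crude.data$Price, type = "l")
        lines(idx, trend, col = "red")
        acf(rem)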

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL, -- 8 b
            b SMALLINT,           -- 2 b
            c SMALLINT,           -- 2 b
            d REAL,               -- 4 b
            e REAL,               -- 4 b
            f REAL,               -- 4 b
            g INTEGER,            -- 4 b
            h REAL,               -- 4 b
            i REAL,               -- 4 b
            j SMALLINT,           -- 2 b
            k INTEGER,            -- 4 b
            l INTEGER,            -- 4 b
            m REAL,               -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below.

    Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.

    Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like:

        SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
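
    A minimal sketch of the array idea, assuming every one of b .. m can be coerced to a single element type (REAL here); whether it really saves space depends on the per-row array header, so it is worth comparing with pg_column_size() before committing (names are illustrative):

        -- Columns b .. m collapsed into a single REAL[] accessed by position.
        CREATE TABLE t_arr (
            a   BIGSERIAL NOT NULL,
            arr REAL[],                          -- b .. m, 12 elements per row
            CONSTRAINT t_arr_pkey PRIMARY KEY (a)
        );

        -- Fetch one logical column by its position in the array:
        SELECT a, arr[5] FROM t_arr;

        -- Compare the on-disk size of a populated row against the original layout:
        SELECT pg_column_size(t_arr.*) AS bytes_per_row FROM t_arr LIMIT 1;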

    Read the article

  • Entity Framework DateTime update extremely slow

    - by Phyxion
    I have this situation currently with Entity Framework:

        using (TestEntities dataContext = DataContext)
        {
            UserSession session = dataContext.UserSessions.FirstOrDefault(userSession => userSession.Id == SessionId);
            if (session != null)
            {
                session.LastAvailableDate = DateTime.Now;
                dataContext.SaveChanges();
            }
        }

    This is all working perfectly, except for the fact that it is terribly slow compared to what I expect (14 calls per second, tested with 100 iterations). When I update this record manually through this command:

        dataContext.Database.ExecuteSqlCommand(String.Format("update UserSession set LastAvailableDate = '{0}' where Id = '{1}'", DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fffffff"), SessionId));

    I get 55 calls per second, which is more than fast enough. However, when I don't update session.LastAvailableDate but instead update an integer (e.g. session.UserId) or a string with Entity Framework, I get 50 calls per second, which is also more than fast enough. Only the datetime field is terribly slow. The difference of a factor of 4 is unacceptable, and I was wondering how I can improve this, as I'd rather not use direct SQL when I can also use Entity Framework. I'm using Entity Framework 4.3.1 (also tried 4.1).
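
    If the raw-SQL fallback stays, a hedged sketch of a parameterized version is below; it avoids formatting the timestamp into the SQL text and lets the provider send a typed datetime parameter instead (column and variable names follow the snippet above):

        // Parameterized update: the DateTime travels as a typed parameter,
        // not as a formatted string embedded in the SQL text.
        dataContext.Database.ExecuteSqlCommand(
            "update UserSession set LastAvailableDate = {0} where Id = {1}",
            DateTime.Now,
            SessionId);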

    Read the article

  • Why is short project lifetime and other situation-specific reasons used to excuse crappy code? [clos

    - by sharptooth
    Every now and then (including on SO) people say things implying that "if the project is short-lived you can leave obvious defects there" or "that memory leak only accounts for 100 bytes per whole program lifetime and could be left". Now in my practice I always reuse company-owned code to the greatest extent I can. Like if I need something and I can find it in the company codebase, I take it from there and reuse or adapt it. This means that any crappy code will be reused as well, and I might or might not notice defects therein. So the defect in some "test we only need for a month" can slip into a program we ship to customers. And a leak that "only accounted for 100 bytes per lifetime" could now account for 100 bytes 10 times per second in a server application intended to run for months. That's why I don't understand why excuses like that are offered. Is our company the only one with source control? Or are we the only company that requires writing human-readable code? Could anyone shed some light on why people seriously offer such excuses?

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW, trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64FPS. With the code I made to look up which colors should be placed where, I get 1FPS. I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm calling at least 360K getters per frame, which I think is a lot of overhead. Each class contains 1D or 2D arrays of custom enumerations, int, Color, Vector2D, or bytes. What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them. As for casting, I stated that I'm using custom enumerations, int, Color, Vector2D, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, and division per frame. :)
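
    A hedged sketch of the flattened approach, assuming the per-pixel state can live in plain arrays of packed 32-bit colours indexed by y * width + x; public fields and direct indexing keep getter/setter calls and casts out of the hot loop (all names here are made up):

        // Flattened frame state: plain arrays, no getters/setters on the hot path.
        public sealed class FrameState
        {
            public const int Width = 600;
            public const int Height = 450;

            // Public field, indexed directly: no property call per pixel.
            public readonly uint[] Pixels = new uint[Width * Height];
        }

        public static class Renderer
        {
            // Per-frame copy into the texture's backing array: straight index math,
            // no property calls and no casts inside the loop.
            public static void Blit(FrameState src, uint[] textureData)
            {
                for (int i = 0; i < src.Pixels.Length; i++)
                {
                    textureData[i] = src.Pixels[i];
                }
            }
        }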

    Read the article

  • How can I pivot these key+values rows into a table of complete entries?

    - by CodexArcanum
    Maybe I demand too much from SQL, but I feel like this should be possible. I start with a list of key-value pairs, like this:

        '0:First, 1:Second, 2:Third, 3:Fourth'

    etc. I can split this up pretty easily with a two-step parse that gets me a table like:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            1           1
        3            1           Second

    etc. Now, in the simple case of splitting the pairs into a pair of columns, it's fairly easy. I'm interested in the more advanced case where I might have multiple values per entry, like:

        '0:First:Fishing, 1:Second:Camping, 2:Third:Hiking'

    and such. In that generic case, I'd like to find a way to take my 3-column result table and somehow pivot it to have one row per entry and one column per value-part. So I want to turn this:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            0           Fishing
        3            1           1
        4            1           Second
        5            1           Camping

    Into this:

        Entry  [1]  [2]     [3]
        0      0    First   Fishing
        1      1    Second  Camping

    Is that just too much for SQL to handle, or is there a way? Pivots (even tricky dynamic pivots) seem like an answer, but I can't figure out how to get that to work.
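
    A hedged sketch of one way to do it with plain conditional aggregation, assuming the position of each value inside its entry can be recovered with ROW_NUMBER(); the table name Pairs and the fixed three columns are assumptions:

        -- Number the items within each entry, then fold them into columns.
        WITH numbered AS (
            SELECT PairNumber,
                   Item,
                   ROW_NUMBER() OVER (PARTITION BY PairNumber ORDER BY EntryNumber) AS pos
            FROM   Pairs
        )
        SELECT PairNumber                            AS Entry,
               MAX(CASE WHEN pos = 1 THEN Item END)  AS [1],
               MAX(CASE WHEN pos = 2 THEN Item END)  AS [2],
               MAX(CASE WHEN pos = 3 THEN Item END)  AS [3]
        FROM   numbered
        GROUP BY PairNumber;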

    Read the article

  • How to delete ProgIDs from other user accounts when uninstalling from Windows?

    - by Mordachai
    I've been investigating "how should a modern windows c++ application register its file types" with Windows (see http://stackoverflow.com/questions/2828637/c-how-do-i-correctly-register-and-unregister-file-type-associations-for-our-ap). And having combed through the various MSDN articles on the subject, the summary appears to be as follows: The installer (elevated) should register the global ProgID HKLM\Software\Classes\my-app.my-doc[.version] (e.g. HKLM\Software\Classes\TextPad.text) The installer also configures default associations for its document types (e.g. .myext) and points this to the aforementioned global ProgID in HKLM. NOTE: a user interface should be provided here to allow the user to either accept all default associations, or to customize which associations should be set. The application, running standard (unelevated), should provide a UI for allowing the current user to set their personal associations as is available in the installer, except that these associations are stored in HKCU\Software\Classes (per user, not per machine). The UN-installer is then responsible for deleting all registered ProgIDs (but should leave the actual file associations alone, as Windows is smart enough to handle associations pointing to missing ProgIDs, and this is the specified desired behavior by MSDN). So that schema sounds reasonable to me, except when I consider #4: How does an uninstaller, running elevated for a given user account, delete any per-user ProgIDs created in step #3 for other users? As I understand things, even in elevated mode, an uninstaller cannot go into another user's registry hive and delete items? Or can it? Does it have to load each given user hive first? What are the rules here? Thanks for any insight you might have to offer! EDIT: See below for the solution (My question was founded in confusion)

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (it starts at 5 and drops to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinity draws per second if I turn the water off. It may be related to memory bandwidth though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw, I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because Glut won't let me call anything in parallel. So... is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use less vertices")?
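
    A hedged sketch of the "build in parallel, submit once" idea: fill a plain vertex buffer with OpenMP, then hand it to OpenGL in a single glDrawArrays call instead of a million glVertex3f calls (grid layout and the surface-height input are placeholders):

        #include <vector>
        #include <GL/gl.h>

        // One quad (4 vertices * xyz) per water cell, built in parallel, drawn in one call.
        void drawWater(int width, int height, const std::vector<float>& surface)
        {
            static std::vector<GLfloat> verts;
            verts.resize(static_cast<size_t>(width) * height * 4 * 3);

            #pragma omp parallel for
            for (int y = 0; y < height; ++y) {
                for (int x = 0; x < width; ++x) {
                    size_t base = (static_cast<size_t>(y) * width + x) * 12;
                    float h = surface[static_cast<size_t>(y) * width + x];
                    const GLfloat quad[12] = {
                        (GLfloat)x,     h, (GLfloat)y,
                        (GLfloat)x + 1, h, (GLfloat)y,
                        (GLfloat)x + 1, h, (GLfloat)y + 1,
                        (GLfloat)x,     h, (GLfloat)y + 1,
                    };
                    for (int i = 0; i < 12; ++i) verts[base + i] = quad[i];
                }
            }

            // Single submission from the GLUT thread: no per-vertex GL calls.
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, &verts[0]);
            glDrawArrays(GL_QUADS, 0, static_cast<GLsizei>(verts.size() / 3));
            glDisableClientState(GL_VERTEX_ARRAY);
        }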

    Read the article

  • How do I choose what and when to cache data with ob_start rather than query the database?

    - by Tim Santeford
    I have a home page that has several independent dynamic parts. The parts consist of a list of recent news from the company, a site statistics panel, and the online status of certain employees. The recent news changes monthly, site statistics change daily, and online statuses change on a per-minute basis. I would like to cache these panels so that the db is not hit on every page load. Is using ob_start() then ob_get_contents() to cache these parts to a file the correct way to do this, or is there a better method in PHP5 for doing this? In asking this question I'm trying to answer these additional questions: How can I determine the correct approach for caching this data without doing extensive benchmarking? Does it make sense to cache these parts in different files and then join them together per request, or should I re-query the data and cache once per minute? I'm looking for a rule of thumb for planning pages and for situations where doing testing is not cost effective (the client is not paying enough for it, I mean). Thanks!
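
    A minimal sketch of the output-buffering approach with a per-panel lifetime; the cache path, TTL value and the render closure are assumptions to adapt:

        <?php
        // Cache one panel's HTML to a file; regenerate only when the cached
        // copy is older than $ttl seconds.
        function cached_panel($cacheFile, $ttl, $render)
        {
            if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
                return file_get_contents($cacheFile);   // serve the cached copy
            }

            ob_start();
            $render();                                  // runs the DB queries, echoes HTML
            $html = ob_get_contents();
            ob_end_clean();

            file_put_contents($cacheFile, $html, LOCK_EX);
            return $html;
        }

        // Example: news panel cached for a day; use 60 for the online-status panel, etc.
        echo cached_panel('/tmp/cache_news.html', 86400, function () {
            echo '<ul><li>Example news item</li></ul>';  // placeholder for the real query/render
        });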

    Read the article

  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many sub-directories, and in every sub-directory there are .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually.

    Step One: If the directory has more than 1 .mpg file, I create a separate directory for each and move each file into its own directory, naming the directory as per the name of the file.

    Step Two: I rar each video file in its directory as per one of my profiles, which splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually.

    Step Three: After having all the rars in the sub-directories, I create a checksum for every directory, leaving checksum.sfv in every directory.

    Step Four: I copy the parent folder and its sub-directories to my external drives.

    I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories as per the name of the file, and moving the file. However, I never succeeded in automating Step Two. I am using the below software: WinRAR from rarlabs, exf from exactfile. Appreciate your assistance.
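
    A hedged sketch of Steps One and Two in plain batch, under the assumption that your 50MB profile corresponds to WinRAR's -v50m, -t and -df switches; double-check the switches against your own profile, and treat D:\Movies and the rar.exe path as placeholders:

        @echo off
        setlocal
        rem For every .mpg under the parent directory: make a folder named after
        rem the file, move the file into it, then rar it into 50 MB volumes.
        for /f "delims=" %%F in ('dir /b /s "D:\Movies\*.mpg"') do (
            if not exist "%%~dpnF" md "%%~dpnF"
            move "%%F" "%%~dpnF\"
            rem a = add, -v50m = 50 MB volumes, -t = test archive, -df = delete source
            "C:\Program Files\WinRAR\rar.exe" a -v50m -t -df "%%~dpnF\%%~nF.rar" "%%~dpnF\%%~nF.mpg"
        )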

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it only has the 2-connection RDP limit. The servers are all stand-alone; no Active Directory has been set up. Both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added Windows Server 2003 - TS Per Device CAL. The temporary pool held 2 licenses (standard RDP licenses, I think) and the other 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone. Long story short, here is what it says now:

        Existing Windows 2000 Server, type: built-in [no licenses used, I add for the sake of being complete]
        Windows Server 2003 - Terminal Server Per Device CAL Token, type: open [none of 10 used]
        Windows Server 2003 - TS Per Device CAL, type: open [3 of 10 used]

    As I tried to explain, this is the end result after a few changes, most of which I can't directly connect to any action on my part. Only going through the activation procedure by phone seemed to directly affect the TS, resulting in the above configuration. Still, it is impossible to connect with more than 3 people, which is 1 up from the 2 that could connect yesterday. TS does say 7 licenses are available. Yet it won't give them out.

    Read the article

  • Enable VT-x in HP 8300 elite

    - by lang2
    I have an HP 8300 Elite (Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz). I'm trying to run a virtual machine via VirtualBox, but every time I start the VM it says: VT-x is disabled in the BIOS. (VERR_VMX_MSR_VMXON_DISABLED). My lscpu output is like this:

        Architecture:          x86_64
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                8
        On-line CPU(s) list:   0-7
        Thread(s) per core:    2
        Core(s) per socket:    4
        Socket(s):             1
        NUMA node(s):          1
        Vendor ID:             GenuineIntel
        CPU family:            6
        Model:                 58
        Stepping:              9
        CPU MHz:               1600.000
        BogoMIPS:              6784.74
        Virtualization:        VT-x
        L1d cache:             32K
        L1i cache:             32K
        L2 cache:              256K
        L3 cache:              8192K
        NUMA node0 CPU(s):     0-7

    I went into the BIOS, but the things that can be tweaked are very limited and I couldn't find the VT-x setting. Does anybody know how to do this on this setup?

    Read the article

  • GET /wpad.dat entries flooding my access_log

    - by Aas
    I have a small LAN of some 30 users with proxy auto-configuration enabled and working. Two of them are requesting the wpad.dat file too rapidly, at a pace of 30 times per second:

        10.1.14.246 - - [02/Jun/2014:09:07:18 +0200] "GET /wpad.dat HTTP/1.1" 302 302
        10.1.14.141 - - [02/Jun/2014:09:07:18 +0200] "GET /wpad.dat HTTP/1.1" 302 302

    I don't know whether this is a problem from a performance perspective (the server is powerful enough to handle it), but the problem is that it clogs up the access_log file, which grows by about 1GB per week. Clients run Win7 Pro. What could cause this behavior? What can be done to stop it? I have shortened the log-rotate window as a temporary workaround to prevent /var from filling up. Thanks beforehand for your support.
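
    As a stop-gap for the log growth (it does not fix the clients themselves), a hedged sketch of keeping those requests out of an Apache access log, assuming the log above comes from Apache and the vhost config is editable:

        # Mark wpad.dat requests and keep them out of the access log.
        SetEnvIf Request_URI "^/wpad\.dat$" dontlog
        CustomLog /var/log/apache2/access.log combined env=!dontlog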

    Read the article

  • many partitions on a single filegroup?¿ does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each having 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, which makes a total of 48 LUNs (this is following the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on before, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning... if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitions, like partition switching... Is this scenario not recommended? What would you suggest?
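
    A hedged sketch of what the many-partitions-on-one-filegroup layout looks like in T-SQL; the weekly boundary dates, object names and the PRIMARY filegroup are placeholders:

        -- Weekly partitions, all mapped to the same filegroup.
        CREATE PARTITION FUNCTION pf_weekly (DATETIME)
        AS RANGE RIGHT FOR VALUES ('2010-01-04', '2010-01-11', '2010-01-18');

        CREATE PARTITION SCHEME ps_weekly
        AS PARTITION pf_weekly
        ALL TO ([PRIMARY]);   -- every partition lands on one filegroup

        CREATE TABLE dbo.FactSales (
            SaleDate DATETIME NOT NULL,
            Amount   MONEY    NOT NULL
        ) ON ps_weekly (SaleDate);

        -- Partition switching still works; only the storage is shared.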

    Read the article

  • Can Windows log CryptoAPI CRL timouts?

    - by makerofthings7
    We have several .NET applications that occasionally "act slow" with no CPU or disk access. I suspect that they are hung up on authentication when trying to validate the certificate, since the timeout is almost 20 seconds. As per this MSFT article:

        Most applications do not specify to CryptoAPI to use a cumulative time-out. If the cumulative time-out option is not enabled, CryptoAPI uses the CryptoAPI default setting, which is a time-out of 15 seconds per URL. If the cumulative time-out option is specified by the application, then CryptoAPI will use a default setting of 20 seconds as the cumulative timeout. The first URL receives a maximum timeout of 10 seconds. Each subsequent URL timeout is half of the remaining balance in the cumulative timeout value.

    Since this is a service, how can I detect and log CryptoAPI hangs, both for applications I have source code to and for 3rd-party ones?
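
    One hedged option for a service: Windows ships a CAPI2 operational event log that records certificate chain building and revocation retrievals, and it can be switched on from an elevated prompt (verify the channel name on your OS version):

        wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true
        wevtutil qe Microsoft-Windows-CAPI2/Operational /c:20 /rd:true /f:text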

    Read the article

  • Can I speed up cygwin's fork?

    - by Andrew Aylett
    I came across a post discussing the speed of forking in Cygwin, giving an expected 'fork rate' in Windows XP of around 30-50 per second (link). I've got a Core 2 Duo (1.79GHz) which I would expect to get comparable results, but it's only managing around 8 forks per second (and sometimes a lot fewer):

        $ while (true); do date --utc; done | uniq -c
              5 Wed Apr 21 12:38:10 UTC 2010
              6 Wed Apr 21 12:38:11 UTC 2010
              1 Wed Apr 21 12:38:12 UTC 2010
              1 Wed Apr 21 12:38:13 UTC 2010
              8 Wed Apr 21 12:38:14 UTC 2010
              8 Wed Apr 21 12:38:15 UTC 2010
              6 Wed Apr 21 12:38:16 UTC 2010
              1 Wed Apr 21 12:38:18 UTC 2010
              9 Wed Apr 21 12:38:19 UTC 2010

    Can you suggest anything I might be able to do to speed things up? This machine acts a lot slower in Cygwin than others I've used before which actually were a lot slower.

    Read the article

  • Gauge network traffic for each Citrix session

    - by molecule
    Hi all, We are currently reviewing the bandwidth of our WAN links. How much bandwidth does a "typical" Citrix session utilize over a WAN link? JFYI - we are using Citrix Program Neighborhood V10 and each session is configured to use 256 colors. I have set up PRTG and it appears that for a server hosting 4 users, the traffic is approximately 100k to 300k per session. Is that about right? If you had to set a benchmark on a per-user basis, how much bandwidth would you assign to each user? Thanks in advance.

    Read the article

  • What does the "Max Memory Size" on the new Intel Core i3 / i5 / i7 CPU's mean?

    - by Josh
    I just noticed in the specs of the new Intel Core i-series processors that there is a "Max Memory Size" that is usually pretty small -- anywhere from 8GB to 24GB. See here: http://ark.intel.com/Product.aspx?id=41316 Core 2-based motherboards were just starting to roll out support for 32GB and greater memory sizes. Anyone have any idea what the Max Memory Size indicates? Is this the total limitation of the on-chip memory controller? Limitation per channel? Limitation per stick (e.g. density??)? Thinking of building a decent machine that needs lots of RAM, so I'm looking at the i7 860.

    Read the article

  • long access times and errors in iis application

    - by user55862
    I am having an issue with an IIS application (details of the environment at the end of the message). The web site works great most of the time and I cannot reproduce any error in our test system. On the live system, however, with on average 5-15 requests per second, I have a problem in that some requests (about 0.05%) take over 300 seconds to complete. The other requests complete within 5-10 seconds. It seems like all the erroneous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user, but I guess that they will receive some kind of error message. I am trying to find out what can be causing this erroneous behaviour. Any ideas are welcome. I have read something about there being an MTU issue if the ICMP and MTU protocols are blocked in the firewall. Does that sound reasonable? I have also read that updating to IIS 7 should do the trick. Does that sound reasonable? I think that the problem has another cause, but I have no idea what. I have tried running the performance monitor, monitoring for database locks and active transaction counts. I can see some of these in the perfmon log for the MSSQL server (another machine), for example: Active transactions sometimes peaks, sometimes for long periods; Lock waits per second sometimes peaks; Transactions per second sometimes peaks; Page IO Latch wait sometimes peaks; Lock wait time (ms) sometimes peaks. But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine I can also see with perfmon that some values peak a few times during a day: Request execution time and Avg disk queue length. I can't see that these correlate to the errors in the IIS error log either. In the log excerpts below I have anonymized some parts by replacing them with HIDDEN. The following can be seen in the access log:

        2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799

    At the same time the following can be seen in the error log:

        2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool

    I can tell the following about the environment: Server: Windows Server 2003 x64 SP2 running on VMware; HTTP Server: IIS v6.0 with ASP.NET 2.0.50727; Antivirus: Trend Micro OfficeScan (is it a good idea to have this on a server?)

    Read the article

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:

    - First, usage of one CPU jumps to 100% for a few minutes
    - Disk operations drop to ~0 during this time
    - Database operations drop to 0 (blocks and tuples per sec)
    - Logs show, during this time:
          WARNING: worker took too long to start; cancelled
          WARNING: worker took too long to start; cancelled
    - No queries in logs (only those over 200ms are logged)
    - No unusually long-running queries logged before or during
    - Then the second CPU jumps to 100%
    - The number of postgres processes jumps from the usual 8-10 to ~20
    - Matched by a spike in Postgres blocks per second (about twice normal)
    - Logs show: LOG: could not accept SSL connection: EOF detected
    - Queries are running, but slow
    - Restarting Postgres returns everything to normal

    Setup: Amazon EC2 Large server, Ubuntu 10.04.2 LTS, Postgres 9.0.3, dedicated DB server.

    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (~2-3k requests per second, running on the gigenet cloud). The database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)).

    Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl) - a light and fast webserver. PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Read the article

  • How to monitor current output/receive queue length in Linux

    - by IZhen
    I want to check the capacity and performance of my network. Besides checking txkB/s and rxkB/s via sar, I'd also like to see the average queue length of the network interface (so that the average queueing time in the interface can be calculated). It seems that netstat can give a per-socket queue length; is it possible to get per-interface statistics (a bit like Network Interface\Output Queue Length in Windows)? A related and kind of reverse question is How do I view the TCP Send and Receive Queue sizes on Windows? Thanks
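
    A hedged sketch of per-interface (rather than per-socket) queue inspection on Linux; the qdisc backlog counters reported by tc are the closest analogue to Windows' Output Queue Length (the interface name is a placeholder):

        # Current qdisc backlog (bytes/packets waiting to be sent) plus drop counters:
        tc -s qdisc show dev eth0

        # Cumulative per-interface byte/packet/drop counters, sampled twice 1s apart:
        cat /proc/net/dev; sleep 1; cat /proc/net/dev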

    Read the article
