Search Results

Search found 7960 results on 319 pages for 'distributed cache'.

  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    I want to define a DataSource for an Oracle database on my Tomcat 6.0 server. In conf/server.xml (yes, I know this DataSource will be visible to every webapp in Tomcat, but that's not a problem here), I've declared this Resource:

        <GlobalNamingResources>
          <Resource name="hibernate/HibernateDS" auth="Container" type="javax.sql.DataSource"
                    url="jdbc:oracle:thin:@myserver:1542:foo" username="foo" password="bar"
                    driverClassName="oracle.jdbc.OracleDriver" maxActive="50" maxIdle="10"
                    validationQuery="select 1 from dual"/>

    Then, in my application's web.xml, I declare a resource-ref element:

        <resource-ref>
          <description>Hibernate Datasource</description>
          <res-ref-name>hibernate/HibernateDS</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    Finally, since Hibernate manages the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session factory from the JNDI DataSource:

        <hibernate-configuration>
          <session-factory>
            <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
            ...

    However, when I start Tomcat, Hibernate fails to build the session factory and logs the following:

        INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
        INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
        INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
        INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
        WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
        org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
            at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
            at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
            at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
            ...
        Caused by: java.lang.NullPointerException
            at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
            at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
            at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
            at java.sql.DriverManager.getDriver(DriverManager.java:253)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
            ... 11 more

    Do you have any idea why Hibernate cannot construct the session factory? What is wrong in my configuration?
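
    A frequent cause of "Cannot create JDBC driver of class '' for connect URL 'null'" is that the web.xml resource-ref makes Tomcat hand the webapp an empty, unconfigured DBCP DataSource because the global Resource was never linked into the application's JNDI namespace. A minimal sketch of the missing link, reusing the resource name from the question (untested against this exact setup):

        <!-- conf/context.xml, or META-INF/context.xml inside the webapp -->
        <Context>
          <ResourceLink name="hibernate/HibernateDS"
                        global="hibernate/HibernateDS"
                        type="javax.sql.DataSource"/>
        </Context>

    With the link in place, java:comp/env/hibernate/HibernateDS should resolve to the configured Oracle pool instead of a default, empty one.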

  • High load on X3220 Quad Core Linux Apache server

    - by John Templar
    I'm seriously in need of help. My sites are now nearly impossible to use because of massive load on my server. I'm already a month late on my mortgage, and this really isn't helping my situation. I've been working on this intermittent load problem for months (it's never been this bad). I suspect some kind of attack, since I'm under DDoS attack a lot. I've been trying to figure out what is causing the load, but I'm afraid I just don't have the experience or knowledge to understand all the data I've been looking at, and I don't even know where to begin or how to test for the large array of attacks out there. Here's some data you might find useful.

    Server: Xeon X3220 Quad Core 2.4 GHz - Linux, FreeBSD; 500 GB HD and 8 GB of RAM; runs CentOS release 5.7. Server version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_qos/9.74. Warning: all sites are softcore adult sites - mostly fantasy art like elves and amazons.

    1) Sites may run fine for weeks or just days at a load under 10, then start jumping to a load of 40-80 - no idea why. Same sites, same mods, same amount of traffic - just WHAM!
    2) I get an email almost every day that says "Large Number of Failed Login Attempts from IP (different each time)". My web host (who almost never helps me) told me it was a UDP flood or something.
    3) I've changed the MySQL port from the default. If I ever put it back to the default, I get loads of over 100 from what must be a constant flood of the MySQL port.
    4) I've reconfigured MySQL. Link: http://www.deadlyamazons.com/logs/mycnf.txt
    5) I have three Joomla JomSocial networks. I've spent a couple of weeks turning all the mods/plugins off, waiting a day, and then turning them back on the next day or later if there isn't any change (there hasn't been). For example, on Thursday I'll turn off videos, on Friday I'll turn off chat, etc., and nothing changes the load appreciably.
    6) Joomla info: all SEF turned off - sh404sef completely disabled and removed. Components: Joomla 1.5.22, JomSocial 2.0.5, Kunena 1/31/2011, HWDMediaShare 11/22/2010, and JBolo Chat 2.7.3, Comet Chat or Envolve Chat. Page compression is on; cache is on, 15 minutes.

    Please click through to this forum thread to see links to all my reports: http://forum.joomla.org/viewtopic.php?f=433&t=706035&p=2777500#p2777500 Any help would be highly appreciated.
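
    For the failed-login and MySQL-flood symptoms in points 2 and 3, one low-risk first step is to firewall those ports rather than relying on a moved port number. A sketch - it assumes MySQL needs only local connections (substitute whatever port it listens on now) and SSH on port 22:

        # Drop remote connections to MySQL entirely (local apps keep using the socket/127.0.0.1)
        iptables -A INPUT -p tcp --dport 3306 ! -s 127.0.0.1 -j DROP
        # Rate-limit new SSH connections to blunt brute-force login attempts
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent \
                 --update --seconds 60 --hitcount 5 -j DROP

    Watching whether the load during a spike is CPU-bound or I/O-wait (top, vmstat) would also help separate a genuine flood from a misbehaving component.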

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft's Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Clustering, and then installing SQL Server in some fashion. I have two physical machines - one high-end and rather beefy "heavy lifter" to contain the majority of the VMs, and another "backup" (a repurposed desktop) to hold the essential "secondary" (or failover) AD-DC, SQL, and FS VMs.

    The main reason I find failover clustering at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole - if one machine (physical or virtual) goes down, you might lose a few pings and the connections get reset, but the network applications (a Microsoft RMS connection to the backend SQL, for example) can still reach a viable DB without having to change their settings at all.

    My first question concerns SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a failover cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs - but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes two or more physical machines to hold not only the actual SQL VMs but also the quorum and witness stores. I will not be able to muster more than two physical machines, so I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine).

    Another issue involves MSDTC - the Distributed Transaction Coordinator. When attempting to install the SQL failover cluster (I never completed it for this reason), it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don't align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions I have found for SQL Server failover cluster installation suggest that a third "network device" - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this and won't be getting it. Most of my storage exists on the "heavy lifter" that was designed for all of the "primary" VMs; if that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, a SQL server, and a file server, so it will handle the "secondary" failover versions of those VMs (clustered or not).

    My final question involves file servers. If I cluster file servers between two VMs (one on my "heavy lifter" and another on my "backup"), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn't actually replicate data between the two - that is left to the services that serve up the data. I am unsure how I can ensure that data between two clustered file-server VMs is properly mirrored. Remember, I only have two devices to be used here - my primary machine and a backup secondary. There is no chance of me obtaining a SAN or any other type of network-attached storage; what exists on the machines must act as the storage. Thanks in advance for any suggestions.
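
    On the mirroring option: database mirroring between two standalone instances needs only a mirroring endpoint on each VM plus a SET PARTNER on each side - no shared storage, and no clustered MSDTC. A minimal sketch, assuming a database named MyDb, the conventional port 5022, and hypothetical host names sqlvm1/sqlvm2:

        -- Run on each instance (principal and mirror alike)
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- On the mirror (after restoring MyDb there WITH NORECOVERY)
        ALTER DATABASE MyDb SET PARTNER = 'TCP://sqlvm1.example.local:5022';
        -- On the principal
        ALTER DATABASE MyDb SET PARTNER = 'TCP://sqlvm2.example.local:5022';

    Clients would then use the Failover Partner= connection-string attribute rather than a clustered IP, which sidesteps the quorum, witness, and shared-storage requirements entirely.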

  • resolve access violation exception (0xC0000005) crashing IIS app pool

    - by Joseph
    IIS 7.5, Server 2008 R2; classic ASP and ASP.NET 2.0/3.5 websites on the same server, in the same app pool. Over the past 4 weeks, thousands of these 'C0000005' errors have been occurring. I know from the IIS Debug Diagnostics tool that 'C0000005' is an access violation error. Below is the top line from my Debug Diag report:

        In w3wp__PID__6656__Date__01_08_2011__Time_01_42_46AM__281__First Chance Access Violation.dmp the assembly instruction at asp!CActiveScriptEngine::GetApplication+27 in \\?\C:\Windows\System32\inetsrv\asp.dll from Microsoft Corporation has caused an access violation exception (0xC0000005) when trying to read from memory location 0x00000000 on thread 29

        Thread 29 - System ID 6736
        Entry point 0x00000000
        Create time 1/8/2011 12:46:26 AM
        Time spent in user mode 0 Days 00:00:00.140
        Time spent in kernel mode 0 Days 00:00:00.078

        Function Source
        asp!CActiveScriptEngine::GetApplication+27
        vbscript!COleScript::GetDebugApplicationCoreNoRef+2b
        vbscript!COleScript::FDebuggerEnabled+30
        vbscript!COleScript::SetScriptSite+cd
        asp!CActiveScriptEngine::Init+125
        asp!CScriptManager::GetEngine+252
        asp!AllocAndLoadEngines+28f
        asp!ExecuteGlobal+17a
        asp!Execute+b5
        asp!CHitObj::ViperAsyncCallback+3fc
        asp!CViperAsyncRequest::OnCall+6a
        comsvcs!CSTAActivityWork::STAActivityWorkHelper+32
        ole32!EnterForCallback+f4
        ole32!SwitchForCallback+1a8
        ole32!PerformCallback+a3
        ole32!CObjectContext::InternalContextCallback+15b
        ole32!CObjectContext::DoCallback+1c
        comsvcs!CSTAActivityWork::DoWork+12f
        comsvcs!CSTAThread::DoWork+18
        comsvcs!CSTAThread::ProcessQueueWork+37
        comsvcs!CSTAThread::WorkerLoop+135
        msvcrt!_endthreadex+44
        msvcrt!_endthreadex+ce
        kernel32!BaseThreadInitThunk+e
        ntdll!__RtlUserThreadStart+70
        ntdll!_RtlUserThreadStart+1b

    Below is the faulting module's ASP report:

        Executing ASP requests 0 Request(s)
        ASP templates cached 0 Template(s)
        ASP template cache size 0.00 Bytes
        Loaded ASP applications 1 Application(s)
        ASP.DLL Version 7.5.7600.16620

        ASP application report:
        ASP application metabase key Physical Path Virtual Root
        Session Count 0 Session(s)
        Request Count 0 Request(s)
        Session Timeout 0 minutes(s)
        Path to Global.asa
        Server side script debugging enabled False
        Client side script debugging enabled False
        Out of process COM servers allowed False
        Session state turned on False
        Write buffering turned on False
        Application restart enabled False
        Parent paths enabled False
        ASP Script error messages will be sent to browser False

    Recent events: the server was being brute-forced by hackers all of December and probably earlier. They weren't able to gain access, but they did get a virus on and blasted out spam. I installed AVG and about the 17 or 22 latest patches. After that, the app pool started crashing, and the server has crashed a couple of times since then. I'm in no man's land, as I'm a developer and not a sysadmin, but I have to assume many roles - so I'm reaching out for help. Sometimes I will see hundreds of these 'C0000005' script-engine errors in the event log in a matter of seconds, and other times just a few per hour. I googled the line 'ASP!CACTIVESCRIPTENGINE::GETAPPLICATION' and got nothing; it's as if the function doesn't exist. I have spent many hours googling to no avail and am now turning to the experts on the forums. Thank you for your help.
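
    Since classic ASP and ASP.NET currently share one application pool, a cheap way to narrow down which runtime triggers the access violation is to split the classic ASP site into its own pool, so only one side crashes at a time. A sketch using appcmd - the pool and site names here are placeholders:

        %windir%\system32\inetsrv\appcmd add apppool /name:ClassicAspPool /managedRuntimeVersion:"" /managedPipelineMode:Classic
        %windir%\system32\inetsrv\appcmd set app "Default Web Site/" /applicationPool:ClassicAspPool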

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

    - PHP 5.4.6 (php54w)
    - CentOS 6.2
    - Joomla 2.5.6
    - php54w-fpm.i386 (FastCGI process manager)
    - php -m shows the mysql and mysqli modules loaded

    nginx seems to have installed fine via yum, and it can process a PHP info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead, and visit my site, I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error - only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try to force the issue (despite php -m showing they were both successfully loaded anyway) - no difference.

    I have a pretty standard nginx default.conf:

        server {
            listen 80;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;
            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;
            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;
            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }
            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API: FPM/FastCGI
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API: Apache 2.0 Handler
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP is loading substantially more additional .ini files - including ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini) - than under nginx. Any ideas how I get nginx to also pick up these additional .ini files? Thanks in advance, Steve
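
    The phpinfo comparison is the strongest clue: the FPM pool never parses any of the MySQL .ini files, and under the php54w packaging those ship in a sub-package separate from the core. A sketch of the likely fix, assuming the Webtatic php54w packages the question mentions:

        yum install php54w-mysql     # supplies mysql.ini, mysqli.ini, pdo_mysql.ini in /etc/php.d
        service php-fpm restart

    Then re-check the "Additional .ini files parsed" line in phpinfo() under FPM; php -m only reports on the CLI build, which can differ from the FPM one.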

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was treated as if I were "shopping or seeking product recommendations", even though I was not - and my comments, which were not offensive in nature, were deleted too. Anyway, I have re-phrased some parts of my question and I hope the SF admins do not modify or edit this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others! Just to clarify, per SF rules: I am not asking someone to design this solution. I am simply seeking real-world examples, experiences, expert technical opinions and suggestions, any tips or tricks, and any problems people may have faced while doing something similar with these products. I am also not asking for storage capacity planning - we have done some research, and I am seeking expert assurance and suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of the embedded database)
    1 x dedicated SQL 2008 R2 server for Symantec Desktop Recovery 2011 (instead of the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we already have the Windows firewall disabled).

    Server hardware: per SQL server, 16 GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB - or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM server: 16 GB RAM + SAS disks + dual Xeon. System Recovery management server: 16 GB RAM + SAS disks + dual Xeon. The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If these users were distributed between two sites with an AD DC/GC in each site, how would I restrict the SEPM and desktop-management solutions to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup - or I could use System Recovery 2011 Server Edition on all four of them as well (licensing is not an issue, as we have the complete Symantec portfolio included in our license).
    d) Now - saving desktop backups. What strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to each user's hard drive locally and then copying it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before we dive into the actual implementation phase. I will be most grateful for your suggestions, recommendations, and corrections on the above - many thanks!

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    Hi, a couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
            at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
            at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
            --- End of inner exception stack trace ---
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
            at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
            at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
            at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
            at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
            at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
            at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, it seems the suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found, either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client:

        "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem."

    Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my questions, finally:

    1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store?
    2. Alternatively, if there is no such database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.

    Many thanks, Mike
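
    One way to dig the corrupt item out - assuming the Exchange 2007 SP1 Management Shell - is to filter on the subject string reported in event 2006 and move any match into a scratch mailbox for inspection (the mailbox and folder names here are placeholders):

        Export-Mailbox -Identity "affected.user" `
            -SubjectKeywords "<subject from the event>" `
            -TargetMailbox "QuarantineBox" -TargetFolder "CorruptIMAP" -DeleteContent

    If even that cannot see the item, the store itself is the place to keep looking rather than any IMAP-side cache, since the IMAP4 service converts items from MAPI on the fly.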

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDtool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based), and we often make changes where feedback in near real time would be valuable. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/Carbon/Whisper server and had about 15 nodes pipe to it, but it only offers "average" as the retention function when promoting to older buckets. This is almost useless - I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples", or perhaps 95th percentile, to be available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part - has anyone used it for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard and would probably perform very well, letting the kernel figure out the balance between flush frequency and process operations.

    Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
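
    To make the "promote with statistics" idea concrete, here is a minimal sketch (in Python, mine rather than any of the tools above) of collapsing raw samples into multi-statistic buckets of the kind described:

        import math
        from collections import defaultdict

        def promote(samples, bucket_seconds):
            """Collapse (timestamp, value) samples into per-bucket statistics."""
            buckets = defaultdict(list)
            for ts, value in samples:
                buckets[ts - ts % bucket_seconds].append(value)
            out = {}
            for start, vals in buckets.items():
                n = len(vals)
                mean = sum(vals) / n
                var = sum((v - mean) ** 2 for v in vals) / n
                out[start] = {"min": min(vals), "max": max(vals), "avg": mean,
                              "stddev": math.sqrt(var), "count": n, "sum": sum(vals)}
            return out

    A real backend would compute these incrementally per bucket instead of keeping raw lists, but the promotion step really is this simple, which supports the idea that a thin layer over mmap() plus these statistics is feasible.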

  • XCP Project Kronos syslog error: "irq ... : nobody cared" on Dom0 host

    - by Vlad Fedin
    One of our production clusters, driven by XCP, suddenly went unresponsive. After a restart and some investigation, we found entries like this in the dom0 machine's syslog:

        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659040] irq 339: nobody cared (try booting with the "irqpoll" option)
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659058] Pid: 0, comm: swapper/3 Tainted: G C O 3.2.0-24-generic #37-Ubuntu
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659060] Call Trace:
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659062] <IRQ> [<ffffffff810db37d>] __report_bad_irq+0x3d/0xe0
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659071] [<ffffffff810db605>] note_interrupt+0x135/0x190
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659074] [<ffffffff810d8e69>] handle_irq_event_percpu+0xa9/0x220
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659078] [<ffffffff8130ff3b>] ? radix_tree_lookup+0xb/0x10
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659081] [<ffffffff810d9031>] handle_irq_event+0x51/0x80
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659084] [<ffffffff810dc187>] handle_edge_irq+0x87/0x140
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659089] [<ffffffff813a8829>] __xen_evtchn_do_upcall+0x199/0x250
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659092] [<ffffffff813aa96f>] xen_evtchn_do_upcall+0x2f/0x50
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659096] [<ffffffff81666d3e>] xen_do_hypervisor_callback+0x1e/0x30
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659097] <EOI> [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659104] [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659107] [<ffffffff8100a1d0>] ? xen_safe_halt+0x10/0x20
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659110] [<fff

    IRQ 339 in cat /proc/interrupts:

        339: ... xen-pirq-msi-x eth0

    where eth0 is the hardware NIC. While the host machine seems to hang, the guest machines continue to work, so our tiny internal monitoring on one of the virtual hosts logged something like this:

        [2012-10-26 20:31:51] [OK......] 200 OK : 113159149 ns
        [2012-10-26 20:32:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 47763284432 ns
        ...
        [2012-10-26 20:34:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 46894835070 ns
        [2012-10-26 20:34:57] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 16821741955 ns
        ...
        [2012-10-26 20:38:18] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 20103298289 ns
        [2012-10-26 20:38:37] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 17895754943 ns

    Host and guest OS: Ubuntu 12.04 LTS. The NIC:

        05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
            Subsystem: ASUSTeK Computer Inc. Device 8369
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 17
            Region 0: Memory at fe500000 (32-bit, non-prefetchable) [size=128K]
            Region 2: I/O ports at e000 [size=32]
            Region 3: Memory at fe520000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: e1000e
            Kernel modules: e1000e

    Any hints how to debug this?
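
    "irq NNN: nobody cared" on a xen-pirq-msi-x line usually means interrupts kept arriving that no handler claimed, and MSI-X delivery under Xen is a common culprit. One workaround sketch - forcing the NIC driver down from MSI-X to plain MSI, assuming the stock e1000e IntMode module option (0 = legacy, 1 = MSI, 2 = MSI-X):

        echo "options e1000e IntMode=1" > /etc/modprobe.d/e1000e.conf
        # then reload the module (or reboot dom0); booting dom0 with "irqpoll",
        # as the log itself suggests, is the blunter fallback

    Comparing the /proc/interrupts counters for IRQ 339 before and after should show whether the interrupt storm is gone.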

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings. I have a machine with a 120 GB ATA drive holding what I thought was non-essential data, and a 320 GB SATA hard drive with the OS, applications, and files (the good data I want to keep). The 120 GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive in the BIOS, my computer will not start; it says "GRUB Hard Disk Error". I know that my Fedora system uses an LVM setup. I am looking to just remove the 120 GB drive from the mix and have one hard drive. How do I recover? Thank you. I have access to a Linux live CD right now and can make any changes; however, it won't boot into my OS - it fails.

    UPDATE: here's my grub.conf:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        # all kernel and initrd paths are relative to /boot/, eg.
        # root (hd1,0)
        # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
        # initrd /initrd-version.img
        #boot=/dev/sda1
        default=0
        timeout=5
        splashimage=(hd1,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
        title Fedora (2.6.30.9-102.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
        title Fedora (2.6.27.21-170.2.56.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
        title Fedora (2.6.27.19-170.2.35.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
        title Upgrade to Fedora 10 (Cambridge)
            kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
            initrd /upgrade/initrd.img
        title Win
            rootnoverify (hd0,0)
            chainloader +1
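
    Since every boot stanza above points at (hd1,0), removing the failed 120 GB drive shifts the surviving disk to (hd0), and GRUB's stage1 may well live in the dead drive's MBR anyway. A sketch of reinstalling GRUB from the live CD - the device names are assumptions, so confirm with fdisk -l first:

        # With only the 320 GB SATA disk attached
        sudo grub
        grub> find /grub/stage1      # confirms which (hdX,Y) now holds /boot
        grub> root (hd0,0)
        grub> setup (hd0)
        grub> quit

    Afterwards, edit grub.conf on that /boot partition and change every (hd1,0) reference, including the splashimage line, to (hd0,0).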

  • Computer freezes +/- 30 seconds, suspicion on SSD

    - by Robert vE
    My computer occasionally freezes for about 30 seconds. When it happens I can still move the mouse, and can sometimes alternate between tabs in Google Chrome. If I try to open Windows Explorer, nothing happens, and Chrome reports "waiting for cache". It also happens in StarCraft II, during which the sound loops. I have made a trace as this topic describes: How do I troubleshoot a Windows 7 freeze or slowness? Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc I have looked at it, but I couldn't figure it out. My system specs: AMD Athlon X4 651, Asus ATI HD6670, ADATA SSD SP900, Asus F1A55 mainboard, 4 GB Crucial 1333 RAM, 500 W ATX PSU. I'm running Windows 7, fully updated. Any help is much appreciated.

    Update: I tried something before your reply that may have helped the problem; I don't know for sure yet - it's too soon to tell. A bit of history first. I had problems installing Win7 on my SSD from the start. In IDE mode it worked, but I had the same problems as above. AHCI was a total failure, both with it turned on before the install and when turning it on afterwards (including tweaking the registry). I didn't bother installing the AMD chipset/AHCI driver, as it was reported to have no TRIM support and would thus make the problems worse. Eventually I did install the AMD SMBus driver, as the stability issues were driving me crazy. It worked - no more issues - until I installed some extra drivers and software: audio, LAN, the ASUS suite. I don't see the relation, but somehow that screwed up my system again.

    As a last effort I posted here on this site, after which it occurred to me to turn AHCI back on, since by now I had all the necessary drivers installed anyway (plus all Windows updates downloaded and installed in the meantime). I did, and stability didn't seem great for the first few reboots, but eventually everything seemed to work. I tried to play StarCraft II - an almost guaranteed freeze before - and had no problems. I'm basically crossing my fingers and hoping the problem is gone for good.

    I still think it has something to do with my SSD. In my research into the problem, I noticed a lot of these issues with the SandForce 2281 firmware - the exact same firmware I have. People reported the same freezes I had, and additionally that during a freeze the HDD light stayed on; after reading this, I noticed it happened on my computer as well. None of this is conclusive evidence that my SSD is really at fault, but it is suspicious. And why turning on AHCI would fix it, I don't know. Thank you, Tom, for taking a look; if the problem returns I will certainly do what you advised.
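
    If the SandForce/AHCI theory holds, it's worth confirming that Windows is actually issuing TRIM now that AHCI is enabled - a quick check from an elevated command prompt:

        fsutil behavior query DisableDeleteNotify
        rem 0 means TRIM commands are issued, 1 means they are suppressed; enable with:
        fsutil behavior set DisableDeleteNotify 0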

  • Why Photoshop CS5's photomerge's result immediately disappear?

    - by koiyu
    I have a bunch of JPG files that I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing; I see the progress bar filling, and different phases are named on it. Nothing weird there. When the merging is done (and if I don't blink), I can see the Layers palette populated with the chosen files and - judging quickly from the layer thumbnails - they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was before starting the Photomerge; no file has been changed, and I can continue to use Photoshop normally.

    This is what I've tried so far:

    - Loaded a folder of 38 JPG images, 4272 x 2848 and ~5 MB per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19, 10, 5, and 3 of the same JPG images
    - Scaled the images to 2256 x 1504 and under ~1 MB per file, and loaded sets of 38, 19, 10, 5, and 3

    The following steps were tested with these smaller files and with a set of 5 images:

    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80% to 50% (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images to 800 x 533 and ~100 KB per file, and loaded a set of 5
    - Read more unanswered threads around the internet

    In between each test I closed and reopened Photoshop. This is the first time I've even tried using Photomerge. Am I doing something wrong? How can I locate the problem, and how do I fix it? Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.

    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack... and then using Edit → Auto-Align Layers... - which is effectively the same as Photomerge (even the dialog looks similar) - works! Even with the original JPGs, without any issues. This doesn't fix Photomerge itself, though.

  • Set up Linux box for hosting a-z [apache mysql php ssl]

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded).

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and a CA), secure data storage, virtual hosts/multiple subdomains; local email would be nice, but not critical.

    The steps so far:

    1. Download the latest CentOS DVD ISO (torrent worked great for me) and install CentOS. While going through the install, I checked the Server Components option, thinking I was going to use another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.
    2. Basic config: set up users, networking/IP address, etc. Run yum update/upgrade.
    3. Upgrade PHP. To upgrade PHP to the latest version, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it! The installed packages need to be removed before you can install the IUS packages, otherwise there will be conflicts:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.      # lists all packages available in the IUS repo
        rpm -qa | grep php              # lists installed packages that need to be removed
        yum shell
        > remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
        > install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
        > transaction solve
        > transaction run
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

       This process removes the old version of PHP and installs the latest.
    4. Upgrade MySQL - pretty much the same process as with PHP:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql            # installed mysql packages
        yum shell
        > remove mysql mysql-server
        > install mysql51 mysql51-server mysql51-devel
        > transaction solve
        > transaction run
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual hosts for SSL, set up a CA, set up SFTP with OpenSSH, or anything else would be appreciated.
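
    On the "virtual hosts for SSL and setting up a CA" item at the end, a minimal sketch of a private CA signing one server certificate, plus the matching Apache vhost - file paths and the host name are placeholders, and a private CA is reasonable here only because all access is via VPN:

        # Create a private CA, then sign a server certificate with it
        openssl genrsa -out ca.key 2048
        openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
                -CAcreateserial -days 825 -out server.crt

        # /etc/httpd/conf.d/ssl-app.conf
        <VirtualHost *:443>
            ServerName app.example.internal
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/server.crt
            SSLCertificateKeyFile /etc/pki/tls/private/server.key
        </VirtualHost>

    Clients that import ca.crt as a trusted root will then accept any vhost certificate signed by it.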

  • Getting error while installing mod_wsgi in centos

    - by user825904
    I have reinstalled Python with --enable-shared. Here is the full build output:

        [root@master mod_wsgi-3.4]# make clean
        rm -rf .libs
        rm -f mod_wsgi.o mod_wsgi.la mod_wsgi.lo mod_wsgi.slo mod_wsgi.loT
        rm -f config.log config.status
        rm -rf autom4te.cache
        [root@master mod_wsgi-3.4]# LD_RUN_PATH=/usr/local/lib make
        apxs -c -I/usr/local/include/python2.7 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm
        /usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wformat-security -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.7 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo
        In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142:
        /usr/local/include/python2.7/pyconfig.h:1161:1: warning: "_POSIX_C_SOURCE" redefined
        In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34:
        /usr/include/features.h:162:1: warning: this is the location of the previous definition
        In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142:
        /usr/local/include/python2.7/pyconfig.h:1183:1: warning: "_XOPEN_SOURCE" redefined
        In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34:
        /usr/include/features.h:164:1: warning: this is the location of the previous definition
        mod_wsgi.c: In function ‘wsgi_server_group’:
        mod_wsgi.c:991: warning: unused variable ‘value’
        mod_wsgi.c: In function ‘Log_isatty’:
        mod_wsgi.c:1665: warning: unused variable ‘result’
        mod_wsgi.c: In function ‘Log_writelines’:
        mod_wsgi.c:1802: warning: unused variable ‘msg’
        mod_wsgi.c: In function ‘Adapter_output’:
        mod_wsgi.c:3087: warning: unused variable ‘n’
        mod_wsgi.c: In function ‘Adapter_file_wrapper’:
        mod_wsgi.c:4138: warning: unused variable ‘result’
        mod_wsgi.c: In function ‘wsgi_python_term’:
        mod_wsgi.c:5850: warning: unused variable ‘tstate’
        mod_wsgi.c:5849: warning: unused variable ‘interp’
        mod_wsgi.c: In function ‘wsgi_python_child_init’:
        mod_wsgi.c:7050: warning: unused variable ‘l’
        mod_wsgi.c:6948: warning: unused variable ‘interp’
        mod_wsgi.c: In function ‘wsgi_add_import_script’:
        mod_wsgi.c:7701: warning: unused variable ‘error’
        mod_wsgi.c: In function ‘wsgi_add_handler_script’:
        mod_wsgi.c:8179: warning: unused variable ‘dconfig’
        mod_wsgi.c:8178: warning: unused variable ‘sconfig’
        mod_wsgi.c: In function ‘wsgi_hook_handler’:
        mod_wsgi.c:9375: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9377: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9379: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9383: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9403: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9405: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9408: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c: In function ‘wsgi_daemon_worker’:
        mod_wsgi.c:10819: warning: unused variable ‘duration’
        mod_wsgi.c:10818: warning: unused variable ‘start’
        mod_wsgi.c: In function ‘wsgi_hook_daemon_handler’:
        mod_wsgi.c:13172: warning: unused variable ‘i’
        mod_wsgi.c:13170: warning: unused variable ‘elts’
        mod_wsgi.c:13169: warning: unused variable ‘head’
        mod_wsgi.c: At top level:
        mod_wsgi.c:8142: warning: ‘wsgi_set_user_authoritative’ defined but not used
        mod_wsgi.c:15251: warning: ‘wsgi_hook_check_user_id’ defined but not used
        /usr/lib64/apr-1/build/libtool --silent --mode=link gcc -o mod_wsgi.la -rpath /usr/lib64/httpd/modules -module -avoid-version mod_wsgi.lo -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm
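
    Given the LD_RUN_PATH workaround in the build command, it's worth verifying that the module really resolves the shared libpython at runtime - the classic failure mode after rebuilding Python with --enable-shared. A sketch of the check and the usual fix:

        # Verify the freshly built module links against the shared libpython
        ldd /usr/lib64/httpd/modules/mod_wsgi.so | grep libpython
        # If libpython2.7.so.1.0 shows "not found", register /usr/local/lib with the loader:
        echo "/usr/local/lib" > /etc/ld.so.conf.d/python27.conf
        ldconfig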

  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You will get a low-memory warning when you get close to the limit, and even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left...

    NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that last quarter of the RAM.

    Does Windows 8 change this behavior? Is there now some fixed minimum free-RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit?

    EDIT: Here is an older post that discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7, and I'd like to know if it's still in Windows 8: Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory"

    EDIT 2: Here is another link discussing the low-memory warning and how to disable it. Note that the author claims the limit for RAM usage is 80%, not 75%, and that seems correct, since you can in fact allocate 6.4 GB of RAM on an 8 GB machine; anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/

    EDIT 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 touches the RAM beyond 6.4 GB only as the very last resort. I have the low-memory warning switched off here, so programs crashed at the allocation shown in the last screenshot; with the warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB. The question is not "is it OK that Windows does not want to allocate the last 20% of RAM because X" - it's "does Windows 8 still behave this way". With 16 GB this really becomes dumb.
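
    To pin down where the ceiling sits on a given machine, one can watch available physical memory against the commit charge while loading it up; a sketch using the built-in performance counters from PowerShell:

        # Sample available RAM and commit charge once per second
        Get-Counter '\Memory\Available MBytes','\Memory\Committed Bytes','\Memory\Commit Limit' -Continuous

    If allocations start failing while Available MBytes is still around 20-25% of RAM but Committed Bytes has reached the Commit Limit, the ceiling is the commit limit (RAM plus page file) rather than a hard percentage of RAM.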

  • Google Chrome and Firefox Rendering

    - by user13503
    When attempting to visit sites running JavaScript, Google Chrome and Firefox often fail to render them properly. For example, when hitting google.com/codesearch results, I just get a blank page: http://img340.imageshack.us/img340/8527/200910071518.png This rendering problem happens in both browsers on a machine running Windows XP Professional x64. Internet Explorer is able to render the page just fine. Also, controls on other pages sometimes fail to respond, for example in Google Docs. Has anyone encountered this issue before in Chrome/Firefox? Do they share a common installation of the V8 JavaScript engine? I'm baffled by this issue. Here's the source code of the page shown in the image above, as Chrome loads it:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head>
        <script>function a(c){this.t={};this.tick=function(d,e,b){var f=b?b:(new Date).getTime();this.t[d]=[f,e]};this.tick("start",null,c)}var g=new a;window.jstiming={Timer:a,load:g};try{window.jstiming.pt=window.external.pageT}catch(h){};</script>
        <title></title>
        <script language="javascript" src="/codesearch/js/CachedFile/F1A2CB189D0FCB1FF201C42BF6A5447C.cache.js"></script>
        </head><body>
        <iframe src="javascript:''" id="__gwt_historyFrame" style="width:0;height:0;border:0"></iframe>
        <script>if(window.jstiming){window.jstiming.a={};window.jstiming.c=1;function k(a,d,f){var b=a.t[d];if(!b)return undefined;b=a.t[d][0];if(f!=undefined)var h=f;else h=a.t.start[0];return b-h}window.jstiming.report=function(a,d,f){var b="";if(window.jstiming.pt){b+="&srt="+window.jstiming.pt;delete window.jstiming.pt}try{if(window.external&&window.external.tran)b+="&tran="+window.external.tran}catch(h){}if(a.b)b+="&"+a.b;var e=a.t,p=e.start,l=[],i=[];for(var c in e)if(!(c=="start"))if(!(c.indexOf("_")==0)){var j=e[c][1];if(j)e[j]&&i.push(c+"."+k(a,c,e[j][0]));else p&&l.push(c+"."+k(a,c))}delete e.start;if(d)for(var m in d)b+="&"+m+"="+d[m];var n=[f?f:"http://csi.gstatic.com/csi","?v=3","&s="+(window.jstiming.sn?window.jstiming.sn:"codesearch")+"&action=",a.name,i.length?"&it="+i.join(",")+b:b,"&rt=",l.join(",")].join(""),g=new Image,o=window.jstiming.c++;window.jstiming.a[o]=g;g.onload=g.onerror=function(){delete window.jstiming.a[o]};g.src=n;g=null;return n}};</script>
        </body></html>

    Thanks, Steve

  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution to this, but can't find any documentation or tips for my specific setup.

    My setup:
    - Backend server: nginx hosting two different websites (two top-level domains) in a virtualenv, running gunicorn/Python/Django; hardware (VPS): 2 GB RAM, 8 CPUs
    - Database server: PostgreSQL with pgbouncer; hardware (VPS): 1 GB RAM, 8 CPUs
    - Varnish server: only running Varnish; hardware (VPS): 1 GB RAM, 8 CPUs

    I'm trying to set up the Varnish server to handle rare spikes in traffic (20,000 unique requests/s). The spikes happen when a TV program mentions one of the sites. What do I need to do to make the Varnish server cache both sites/domains on my backend server?

    My /etc/varnish/default.vcl:

        backend django_backend {
            .host = "local.backendserver.com";
            .port = "8080";
        }

    My /usr/local/nginx/site-avaible/domain1.com:

        upstream gunicorn_domain1 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            listen 8080;
            server_name domain1.com;
            rewrite ^ http://www.domains.com$request_uri? permanent;
        }
        server {
            listen 80 default_server;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain1.com;
            keepalive_timeout 5;
            # path for static files
            root /home/<USER>/<APP>-media/;
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain1;
                    break;
                }
            }
        }

    My /usr/local/nginx/site-avaible/domain2.com:

        upstream gunicorn_domain2 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            listen 8080;
            server_name domain2.com;
            rewrite ^ http://www.domains.com$request_uri? permanent;
        }
        server {
            listen 80;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain2.com;
            keepalive_timeout 5;
            # path for static files
            root /home/<USER>/<APP>-media/;
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain2;
                    break;
                }
            }
        }

    Right now, if I hit the IP of the Varnish server, I only get served domain1.com. Would everything be correct if I changed the DNS of the two domains to point to the Varnish server, or is there extra setup needed before it would work? Question 2: Do I need a dedicated server for Varnish, or could I just install Varnish on my backend server - or would the server run out of memory quickly?
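
    To serve both domains through one Varnish instance, the cache mainly needs the Host header passed through intact (and ideally normalized, so www and bare variants share cache objects); nginx then picks the right server block on port 8080 exactly as it does today. A sketch in the style of the existing default.vcl, assuming Varnish 3.x syntax:

        sub vcl_recv {
            if (req.http.host ~ "(?i)^(www\.)?domain1\.com$") {
                set req.http.host = "www.domain1.com";
            } elsif (req.http.host ~ "(?i)^(www\.)?domain2\.com$") {
                set req.http.host = "www.domain2.com";
            }
        }

    With that in place, pointing both domains' DNS at the Varnish server should be the only remaining step; hitting the bare IP sends no matching Host header and falls through to nginx's default_server, which is why only domain1.com appears now.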

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ I've done all the steps, but it's still showing PHP 5.2.6 - any ideas? I have also tried -cgi instead of -cli; neither has any effect.

    Update: I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.

    Update: output of dpkg -l *php*:

        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
        ||/ Name                        Version                   Description
        un  libapache2-mod-php4         <none>                    (no description available)
        ii  libapache2-mod-php5         5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (Apache 2 module)
        un  libapache2-mod-php5filter   <none>                    (no description available)
        ii  php-pear                    5.2.6.dfsg.1-3ubuntu4.6   PEAR - PHP Extension and Application Repository
        un  php4-cli                    <none>                    (no description available)
        un  php4-dev                    <none>                    (no description available)
        un  php4-mysql                  <none>                    (no description available)
        un  php4-pear                   <none>                    (no description available)
        ii  php5                        5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi                    5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli                    5.2.6.dfsg.1-3ubuntu4.6   command-line interpreter for the php5 scripting language
        ii  php5-common                 5.2.6.dfsg.1-3ubuntu4.6   Common files for packages built from the php5 source
        ii  php5-curl                   5.2.6.dfsg.1-3ubuntu4.6   CURL module for php5
        un  php5-dev                    <none>                    (no description available)
        ii  php5-gd                     5.2.6.dfsg.1-3ubuntu4.6   GD module for php5
        ii  php5-imap                   5.2.6-0ubuntu5.1          IMAP module for php5
        un  php5-json                   <none>                    (no description available)
        ii  php5-mcrypt                 5.2.6-0ubuntu2            MCrypt module for php5
        ii  php5-mysql                  5.2.6.dfsg.1-3ubuntu4.6   MySQL module for php5
        un  php5-mysqli                 <none>                    (no description available)
        ii  php5-xsl                    5.2.6.dfsg.1-3ubuntu4.6   XSL module for php5
        un  phpapi-20060613+lfs         <none>                    (no description available)
        ii  phpmyadmin                  4:3.1.2-1ubuntu0.2        MySQL web administration tool

    Update: the following commands and their outputs:

        grep php53 /etc/apt/sources.list
        deb http://php53.dotdeb.org stable all
        deb-src http://php53.dotdeb.org stable all

        apt-cache search -f "libapache2-mod-php5"
        http://pastebin.com/XNXdsXYC
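
    One thing worth ruling out: the dotdeb lines are present in sources.list, but if apt never fetched them - or the Ubuntu 5.2.6 packages still win as candidates - every install is a no-op. A quick check-and-upgrade sketch:

        apt-get update
        apt-cache policy php5-cli     # does a 5.3.x candidate from dotdeb.org appear?
        apt-get install php5 php5-cli php5-cgi php5-common
        php -v                        # should report 5.3.x if the dotdeb candidate won

    If the 5.3 candidate is listed but not chosen, pinning dotdeb higher via /etc/apt/preferences is the usual next step.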

    Read the article

  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording, I can get a frame rate of maybe 15 frames per second for the full screen on my 1080p monitor using the XVID codec. I can increase the speed a bit by recording a region, changing screen modes, and tweaking other settings, but I'm curious which hardware upgrades might give me the biggest bang for my buck.

    My PC is budget, but modern: an Athlon II X4 645 (3.1 GHz, quad core, limited cache) processor, 4 GB of single-channel DDR3-1066 RAM, an ASRock motherboard with the NVIDIA GeForce 7025/nForce 630a chipset, and an ATI Radeon HD 5450 graphics card (512 MB on board, not configured to steal system RAM). I dual-boot Windows XP and Windows 7. For the moment, XP is my bigger performance concern, as it's still my getting-things-done OS as opposed to my browser-host OS.

    My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording - I can make up some slides, record audio with the PC switched off, yada yada. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt. Occasionally, I may be recording some kind of graphics or diagram program and using my pre-Bamboo cheap Wacom tablet - I have the CS2 versions of Photoshop and Illustrator, but I'd be much more likely to use Microsoft Paint. Basically, what I'll be recording won't make huge demands on the machine - but recording a fair number of pixels (720p preferred) will be useful.

    What's particularly weird: not so long ago I still had a five-year-old Pentium 4 based PC, and (with the same 1080p monitor) it could record at not far from the same frame rate. So clearly the performance issues are more subtle than just throw-money-at-it.

    My first guess would be that the main bottleneck is the bandwidth for transferring data to/from the graphics card. Is that likely to be correct? In support of that, see this Radeon HD 5450 review - the memory bandwidth is only 12.8 GB/s. If you can't get data out of graphics memory quickly, you can't transfer it back to system memory quickly. Apparently, that's slower than some top-end cards in 2002.
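
    For scale, a back-of-the-envelope estimate of the raw capture traffic (assuming uncompressed 32-bit RGBA frames; these numbers are illustrative, not measurements from the original post):

        1920 x 1080 pixels x 4 bytes ≈ 8.3 MB per frame
        8.3 MB/frame x 15 fps        ≈ 124 MB/s
        8.3 MB/frame x 30 fps        ≈ 249 MB/s

    Even 249 MB/s is small next to the card's 12.8 GB/s of local memory bandwidth, so the VRAM figure by itself is unlikely to be the limit. The more plausible choke points are the screen-readback path from GPU to system RAM (historically far slower than local bandwidth on budget cards) and the CPU cost of XVID-encoding on the order of 124 MB/s in real time.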

    Read the article

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer the Thunderbird / Postbox clients. One of the users is generating errors in the Application log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
           at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
           at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           --- End of inner exception stack trace ---
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
           at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, the usual suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either by searching or by browsing the folder named in the log entry. The server returns the following error to the client:

        "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem."

    Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP.

    So my questions, finally:

    1. Is there any way to get the Exchange IMAP4 connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store?
    2. Alternatively, if there is no such cache, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.

    Many thanks, Mike
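
    If the immediate goal is just to stop the conversion errors, one option is to pull the corrupt item out of the mailbox from the Exchange Management Shell. A sketch, assuming Exchange 2007 SP1's Export-Mailbox cmdlet; the identity, subject and target values are placeholders:

        # Quarantine the offending message (matched by the subject from the
        # event log) in an admin mailbox, removing it from the user's mailbox:
        Export-Mailbox -Identity "affected.user@example.com" `
            -SubjectKeywords "subject string from event 2006" `
            -TargetMailbox "it-quarantine" `
            -TargetFolder "CorruptIMAPItems" `
            -DeleteContent

        # Review the exported copy afterwards; if the Content-Type header really
        # is malformed, resending the original from the source system is cleaner.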

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain: requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (one we don't know) a couple of seconds later. The user agent repeating our requests is:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

    Does anyone have an idea?

    Update: I've got some more information now. The referrer of the repeated request is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not, and image extensions seem to be ignored.

    While doing my tests I discovered that this behaviour seems to be common, as I found other clients visiting just after the first requests:

        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll try later whether this also happens if I query some other server where I have access to the access logs, and will update here then.
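
    To isolate (or shut out) that subnet while investigating, a small Apache 2.2-style sketch - the subnet comes from the observations above, the log path and directive placement are illustrative:

        # Tag requests from the suspected repeater subnet
        SetEnvIf Remote_Addr "^80\.40\.134\." suspected_repeater

        # Log them separately so they are easy to study...
        CustomLog logs/repeater_log combined env=suspected_repeater

        # ...or block them inside the relevant <Directory> / <Location> block:
        # Order Allow,Deny
        # Allow from all
        # Deny from 80.40.134.0/24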

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB.

    I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top Reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2,504,188 pages, at 8 KB per page) is 20,033,504 KB, which is roughly 20,000 MB, i.e. about 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL Server can no longer 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding?

    I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM - or maybe it would just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.).

    Currently, our only option is to restart SQL Server once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet...) is much appreciated. Leo.
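
    As a cross-check on the Profiler numbers, the plan cache keeps the same counters without a day of tracing. A sketch using the standard sys.dm_exec_query_stats DMV (the TOP 20 cut-off is arbitrary):

        -- Top cached statements by cumulative logical reads (counted in 8 KB pages)
        SELECT TOP 20
            qs.total_logical_reads,
            qs.execution_count,
            qs.total_logical_reads / qs.execution_count AS avg_reads_per_exec,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                ((CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;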

    Read the article

  • On ESXi, guest machines hang for significant intervals compared to real machines. How can I fix this?

    - by Tarbox
    This is ESXi version 5.0.0; we plan on upgrading to 5.5 eventually.

    I have four code profiles: two taken on a real, unvirtualized machine, two taken on a virtual machine. Ordering the list of subroutines by time spent in each one, the two real profiles are practically identical. The two virtual profiles are different from each other and from the real profiles: a subset of subroutines take a lot more time on the virtual machines, and the subset is different for each run. The two virtual profiles take a similar total amount of time, which is 3 times the amount of time the real profiles take. This gross "how long does it take?" result is consistent after hundreds of tests across three different virtual machines on two different host machines - the virtual machine is just slower. I've only run the code profiler on those four, however.

    Here is the most guilty set of lines.

    The real machine:

        8µs     $text = '' unless defined $text;
        1.48ms  foreach ( split( "\n", $text ) ) {

    The first run on the virtual machine:

        20.1ms  $text = '' unless defined $text;
        1.49ms  foreach ( split( "\n", $text ) ) {

    The second run on the virtual machine:

        6µs     $text = '' unless defined $text;
        21.9ms  foreach ( split( "\n", $text ) ) {

    My WAG is that the VM is swapping out the thread and then swapping it back in, destroying some level of cache in the process, but these code profiles were taken when the VM in question was the only active VM on the host, so... what? What does that mean?

    The guest itself is under light load; this is a latency problem for my users rather than throughput. The host is also under a light load, and if I knew what resources to assign where, I could do it without worrying about the cost. I've attempted to lock memory, reserve CPU, assign a restrictive affinity, and disable hyperthread sharing. They don't help; it still takes the VM 2-4x the amount of time to do the same thing as the real machine.

    The host the tests were run on is 6x 2.50 GHz (Intel Xeon E5-2640) with 16 GB of RAM. The guest exhibits the same performance under a wide combination of settings. The real machine is 4x 2.13 GHz (Xeon E5506) with 2 GB of RAM.

    Thank you for all advice.
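
    One thing worth watching while the benchmark runs is the hypervisor's own CPU scheduling counters; a sketch of the standard esxtop workflow (the percentage thresholds are common rules of thumb, not figures from this post):

        # On the ESXi host (SSH or local shell):
        esxtop          # 'c' switches to the CPU view, 'e' expands the VM's world group
        #
        # %RDY  - time a vCPU was runnable but not scheduled on a physical core;
        #         sustained values above roughly 5-10% per vCPU show up as
        #         exactly this kind of intermittent stall inside the guest
        # %CSTP - co-stop time; high values suggest the VM has more vCPUs than
        #         the host can co-schedule at once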

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load-balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad-tracking site, which records each hit in a local database and returns a few lines of HTML. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also be acting as DNS servers for redundancy.

    Now comes my proposed solution: I will use BIND and have both servers as masters for the zone. On each server there will be a script running which periodically tests the availability (HTTP) of both servers and removes an IP from DNS in case of failure.

    Now the questions :)

    1. Is BIND suitable for this solution? I think BIND's performance is good and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent.

    2. I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s, but from distinct clients (each IP makes only 1-3 requests), so I think the DNS load won't be too high. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?

    3. Is my master-master approach good? Or should I make one of the servers a master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.

    4. Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS server, so that at any time at least 2 of them would accept queries...

    5. Should I rewrite the zone file via the script, or just use dynamic DNS updates (for example via the nsupdate utility)?
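
    On question 5, dynamic updates are usually safer than rewriting the zone file, because BIND keeps its own journal and a hand-edited file can drift out of sync with it. A sketch of what the failover script could send, using standard nsupdate syntax (the key file, zone and addresses are placeholders):

        # Remove the failed server's A record; requires allow-update or a
        # TSIG key in the zone's named.conf stanza
        nsupdate -k /etc/bind/ddns-update.key <<EOF
        server 127.0.0.1
        zone example.com.
        update delete www.example.com. A 192.0.2.1
        send
        EOF

        # Re-adding it once the health check passes again is symmetric:
        #   update add www.example.com. 300 A 192.0.2.1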

    Read the article
