Search Results

Search found 340 results on 14 pages for 'hai minh nguyen'.

  • Ubuntu: One or more of the mounts listed in fstab cannot yet be mounted

    - by Phuong Nguyen
    I was enjoying a movie when my Ubuntu machine suddenly hung. At the next reboot, I got this message: "One or more of the mounts listed in /etc/fstab cannot yet be mounted: /home: waiting for /dev/disk/by-uuid/.... Press ESC to enter a recovery shell." The problem: when I enter the recovery shell, I don't know what to do, and if I press Ctrl+D the message above just reappears. What should I do? I checked with an Ubuntu Live CD and my partition looks OK.

    Read the article

  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    I have an issue with very slow web browser response (especially in IE): requests sometimes time out, and a single 301 redirect can hang for up to 20 seconds. Testing with the F12 developer tools in IE, the reported wait/start time is very long, but once the connection is made the page elements download and render quickly (tested at xaluan.com). It mostly happens when there are more than 2100 active users on the site (according to Google Analytics real-time). The server runs CentOS 5 with nginx and Apache, a 32-core CPU, 96 GB of RAM, and RAID 10 SAS disks. Following is my nginx config:
      user nobody;
      # no need for more workers in the proxy mode
      worker_processes 28; #old 32 #good at 24
      error_log /var/log/nginx/error.log; #old add in end: info
      worker_rlimit_nofile 22528;
      events {
          worker_connections 22528;
          use epoll; # you should use epoll here for Linux kernels 2.6.x
      }
      http {
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks off;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 25; #old 5
          gzip on; #old on
          gzip_vary on;
          gzip_disable "MSIE [1-6]\.";
          gzip_proxied any;
          gzip_http_version 1.1;
          gzip_min_length 1000;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          ignore_invalid_headers on;
          client_header_timeout 1m; #3m
          client_body_timeout 1m; #3m
          send_timeout 1m; #3m
          reset_timedout_connection on;
          connection_pool_size 256;
          client_header_buffer_size 256k;
          large_client_header_buffers 4 256k;
          client_max_body_size 100M;
          client_body_buffer_size 256k;
          request_pool_size 32k;
          output_buffers 4 32k;
          postpone_output 1460;
          proxy_temp_path /tmp/nginx_proxy/;
          client_body_in_file_only on;
          log_format bytes_log "$msec $bytes_sent .";
          limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
          limit_conn limit_per_ip 20;
          limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
          limit_req zone=allips burst=200 nodelay;
          include "/etc/nginx/vhosts/*";
      }
    I have played around with the worker settings:
    1. Tried increasing them, as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2. Tried setting worker_processes = 28 with the other worker values at 22582, plus other combinations, but that did not work either, because it sometimes drove the server load up very quickly.
    3. Tried commenting out worker_rlimit_nofile so that it is unlimited. That seemed to improve the response-time issue a bit, but it also pushed the server load up quickly at peak time.
    Please help, thanks. PS: the Apache config is below in case you want to look at it as well:
      Listen 0.0.0.0:8081
      User nobody
      Group nobody
      ExtendedStatus On
      ServerAdmin [email protected]
      ServerName server.xaluan.com
      LogLevel warn
      # These can be set in WHM under 'Apache Global Configuration'
      Timeout 100
      TraceEnable Off
      ServerSignature Off
      ServerTokens ProductOnly
      FileETag None
      StartServers 15
      <IfModule prefork.c>
          MinSpareServers 20
          MaxSpareServers 50
          #MaxSpareServers 40
      </IfModule>
      ServerLimit 1572
      MaxClients 1572
      MaxRequestsPerChild 4000
      # MaxRequestsPerChild 3000
      KeepAlive On
      KeepAliveTimeout 3
      MaxKeepAliveRequests 300
      #MaxKeepAliveRequests 130

    Read the article

  • Ubuntu nasty error: The panel encountered a problem while loading xxxApplet

    - by Phuong Nguyen
    I have installed Ubuntu and GNOME (with the minimum possible number of packages). I can log in and do everything I want. However, there is a nasty thing: whenever I log in, I see this message: Error: The panel encountered a problem while loading "OAFIID:GNOME_FastUserSwitchApplet". Do you want to delete the applet from your configuration? [Don't Delete] [Delete] If I press [Delete], the error is no longer shown. However, for every newly created account (created using sudo adduser user_name), the message appears again. Since I clone this OS into several virtual instances and create new accounts on those instances, I wonder if there is a way to configure Ubuntu so that newly created users don't see this annoying message. Thanks.

    Read the article

  • Alt-tab icon list in Gnome and metacity?

    - by Vinh Nguyen
    Can anyone provide a reference or explain how the icons in the alt-tab list are populated? I would like to specify icons for some programs that do not have them, e.g. xterm. I'm using Ubuntu 11.04 with GNOME 2 (Ubuntu Classic) and Metacity as the window manager. I did see a thread that mentions /usr/share/pixmap/, but if I run cp gnome-terminal.xpm xterm.xpm there, the icon is not picked up in the alt-tab icon list (even after a logout/login). I do see that the icon is picked up when I add the xterm command to the program menu.

    Read the article

  • Ubuntu hangs and cannot be soft-reset after resuming from standby mode

    - by Phuong Nguyen
    I have downgraded my Xorg drivers so that I can hibernate and suspend my Ubuntu machine smoothly. However, in some cases there is a problem: Ubuntu hangs. When I try to switch to console mode (Ctrl+Alt+F1), I cannot log in; the system replies with an error whenever I press a key. When I press Ctrl+Alt+Del to perform a soft reset, here is what it says:
      [80141.320122] end_request: I/O error, dev sda, sector 193687181
      init: control-alt-delete main process (5660) terminated with status 2
    The error is not even recorded in syslog. I guess this is a problem with my hard disk, since it says something about a bad sector. What kind of error is this, exactly?

    Read the article

  • Which virtual machine software should I install on a Dell PowerEdge SC 1425?

    - by Nguyen Khanh Huy
    I'm setting up our local server and want to install a virtual machine host, but it seems VMware ESXi does not suit our server.
      Server: Dell SC 1424
      CPU: 2x Xeon 3.2 GHz (800 MHz bus, 2 MB L2 cache)
      RAM: 6 GB DDR ECC 266
      Hard disk: 2x Hitachi SATA 1 TB, Dell CERC 2s RAID controller (RAID 0, 1)
      NIC: 2x Broadcom 1 Gb/s
    I'm wondering if you're familiar with this area and have any ideas about VM software for our server. I just want to use the server for a few purposes (web hosting, Subversion, and experimenting with some server OSes). Thank you for helping.

    Read the article

  • MySQL: migrating a huge DB from InnoDB to NDBCLUSTER fails with "The table is full"

    - by Nguyen Trong Nhan
    I'm trying to migrate an old database to MySQL Cluster (4 data nodes) using the command:
      ALTER TABLE sample ENGINE=NDBCLUSTER
    but I'm getting the following error:
      The table '#sql-7ff3_3' is full
    There are approximately 300 million rows in this table. Here are my config files:
    /mysql-cluster/config.ini
      [NDBD DEFAULT]
      NoOfReplicas=2
      DataDir=/data/mysql-cluster/ndb/
      BackupDataDir=/data/mysql-cluster/backup/
      DataMemory=10G
      IndexMemory=5G
      TimeBetweenLocalCheckpoints=6
      FragmentLogFileSize=256MB
      NoOfFragmentLogFiles=50
      MaxNoOfOrderedIndexes=8000
      MaxNoOfConcurrentOperations=100000
      MaxNoOfTables = 10000
      RedoBuffer=128M
      MaxNoOfAttributes=5000
      MaxNoOfUniqueHashIndexes=1024
    /etc/my.cnf
      [mysqld]
      basedir=/usr/local/mysql
      datadir=/data/mysql-cluster/mysqld/
      event_scheduler=on
      default-storage-engine=ndbcluster
      ndbcluster
      ndb-connectstring=192.168.x.x,192.168.x.x
      innodb_file_per_table
      innodb_buffer_pool_size = 512MB
      key_buffer = 512M
      key_buffer_size = 512M
      sort_buffer_size = 512M
      table_cache = 1024
      read_buffer_size = 512M

    Read the article

  • transform a trapezium into a rectangle

    - by Phuong Nguyen
    I used my iPhone to capture a painting. The angle was not perfect, so instead of a straight-on rectangle I got a trapezium. I want to transform this trapezium back into a rectangle (I assumed some affine transformation would do it), but I cannot find a good way. Please advise.
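
    For reference, an affine transformation preserves parallelism, so a general trapezium produced by camera perspective cannot be mapped back to a rectangle affinely; a projective transformation (homography) is needed. A sketch of that mapping, with the eight coefficients h_ij determined by the four corner correspondences between the captured trapezium and the target rectangle:

      x' = \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + 1}, \qquad
      y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + 1}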

    Read the article

  • Audio switch with multiple 3.5mm inputs & outputs

    - by David Nguyen
    I've been searching for a device that simply allows me to pick one input and one output from multiple inputs/outputs. I thought this would be called a switch, but I can only find ones with a single input. Is there such a device? I will be attaching various sources (multiple console, PC, and laptop audio outputs) and outputting to my speakers or headphones. I'm looking for something small and simple. All inputs and outputs are 3.5mm.

    Read the article

  • Hostgator SSH returns Too many authentication failures for username

    - by Tri Nguyen
    I was trying to SSH into my HostGator shared hosting account following this guide: http://support.hostgator.com/articles/getting-started/how-do-i-get-and-use-ssh-access However, it returns this error:
      Received disconnect from 96.125.167.124: 2: Too many authentication failures for tridn
    I searched around for a solution and found this: http://www.ipreferjim.com/2011/07/hostgator-ssh-warns-too-many-authentication-failures/ I tried doing what it suggests, but encountered another error:
      jailshell: .ssh/authorized_keys: No such file or directory
    So I SSHed into my server with the PubkeyAuthentication=no option and created a directory called .ssh and a file called authorized_keys. I then redid what was suggested in the article, which is this:
      cat ~/.ssh/hostgator.pub | ssh -p 2222 -o PubkeyAuthentication=no [email protected] 'cat >> .ssh/authorized_keys'
    (Note: my SSH key is called hostgator.pub; it's DSA.) I verified that authorized_keys now has the contents of this key. However, I still get the same error as before:
      Received disconnect from 96.125.167.124: 2: Too many authentication failures for tridn
    Does anybody know how I should proceed next?

    Read the article

  • SerializationException Occurring Only in Release Mode

    - by Calvin Nguyen
    Hi, I am working on an ASP.NET web app using Visual Studio 2008 and a third-party library. Things are fine in my development environment. Things are also good when the web app is deployed in the Debug configuration. However, when it is deployed in Release mode, SerializationExceptions appear intermittently, breaking other functionality. In the Windows event log, the following error can be seen:
      An unhandled exception occurred and the process was terminated.
      Application ID: DefaultDomain
      Process ID: 3972
      Exception: System.Runtime.Serialization.SerializationException
      Message: Unable to find assembly 'MyThirdPartyLibrary, Version=1.234.5.67, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89'.
      StackTrace:
        at System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly()
        at System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name)
        at System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
        at System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
        at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record)
        at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum)
        at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
        at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
        at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
        at System.Runtime.Remoting.Channels.CrossAppDomainSerializer.DeserializeObject(MemoryStream stm)
        at System.AppDomain.Deserialize(Byte[] blob)
        at System.AppDomain.UnmarshalObject(Byte[] blob)
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Using FUSLOGVW.exe (the Assembly Binding Log Viewer), I can see the problem is that IIS attempts to find MyThirdPartyLibrary in the directory C:\windows\system32\inetsrv. It refuses to look in the bin folder of the web app, where the DLL is actually located. Does anyone know what the problem is? Thanks, Calvin

    Read the article

  • Google App Engine: JDO does the job, JPA does not

    - by Phuong Nguyen de ManCity fan
    I have set up a project using both JDO and JPA. I used JPA annotations to declare my entity, then set up my test cases based on LocalTestHelper (from the Google App Engine documentation). When I run the tests, a call to makePersistent on the JDO PersistenceManager is perfectly OK, but a call to persist on the JPA EntityManager raises an error:
      java.lang.IllegalArgumentException: Type ("org.seamoo.persistence.jpa.model.ExampleModel") is not that of an entity but needs to be for this operation
        at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:888)
        at org.datanucleus.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:385)
      Caused by: org.datanucleus.exceptions.NoPersistenceInformationException: The class "org.seamoo.persistence.jpa.model.ExampleModel" is required to be persistable yet no Meta-Data/Annotations can be found for this class. Please check that the Meta-Data/annotations is defined in a valid file location.
        at org.datanucleus.ObjectManagerImpl.assertClassPersistable(ObjectManagerImpl.java:3894)
        at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:884)
        ... 27 more
    How can that be the case? Below is a link to the source code of the Maven projects that reproduce the problem: http://seamoo.com/jpa-bug-reproduce.tar.gz If you execute the Maven test goal on the parent POM, you will notice that 3 of the 4 tests from org.seamoo.persistence.jdo.JdoGenericDAOImplTest pass, while all tests from org.seamoo.persistence.jpa.JpaGenericDAOImplTest fail.
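
    For comparison, here is a minimal sketch of a JPA-annotated entity that DataNucleus should accept. The field names are hypothetical, and it assumes the class is visible to the persistence unit and has been run through the DataNucleus byte-code enhancer; a class that has not been enhanced, or that the JPA persistence unit cannot see, typically produces this NoPersistenceInformationException even while JDO happily persists its own enhanced classes.

      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.GenerationType;
      import javax.persistence.Id;

      @Entity
      public class ExampleModel {

          // App Engine's datastore can generate Long keys with the IDENTITY strategy.
          @Id
          @GeneratedValue(strategy = GenerationType.IDENTITY)
          private Long id;

          private String name;

          public Long getId() { return id; }

          public String getName() { return name; }

          public void setName(String name) { this.name = name; }
      }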

    Read the article

  • Cannot install xdebug with WAMP SERVER 2.1

    - by Jimmy Nguyen
    Hi all, I use WampServer 2.1 with PHP 5.3.3, so I picked the Xdebug build php_xdebug-2.1.0-5.3-vc6.dll and renamed it to php_xdebug.dll to make it easier to use. Following the instructions, in php.ini (in the Apache folder):
      extension=php_xdebug.dll
      ...
      zend_extension = "C:/wamp/bin/php/php5.3.3/ext/php_xdebug.dll"
      xdebug.remote_enable=on
      xdebug.remote_handler=dbgp
      xdebug.remote_host=localhost
      xdebug.remote_port=9000
      xdebug.idekey="netbeans-xdebug"
    However, nothing happens: there is no information related to Xdebug in phpinfo(), and the Xdebug wizard (http://xdebug.org/find-binary.php) reports that Xdebug is not installed. I have spent far too much time on this configuration and I am close to giving up. Does anyone have ideas on how to solve it? I would really appreciate your help. Thanks

    Read the article

  • How is the Moles isolation framework implemented?

    - by Buu Nguyen
    Moles is an isolation framework created by Microsoft. A cool feature of Moles is that it can "mock" static/non-virtual methods and sealed classes (which is not possible with frameworks like Moq). Below is the quick demonstration of what Moles can do:
      Assert.AreNotEqual(new DateTime(2012, 1, 1), DateTime.Now);
      // MDateTime is part of Moles; the below will "override" DateTime.Now's behavior
      MDateTime.NowGet = () => new DateTime(2012, 1, 1);
      Assert.AreEqual(new DateTime(2012, 1, 1), DateTime.Now);
    Seems like Moles is able to modify the CIL body of things like DateTime.Now at runtime. Since Moles isn't open-source, I'm curious to know which mechanism Moles uses in order to modify methods' CIL at runtime. Can anyone shed any light?

    Read the article

  • MDX - Using the "iif" function in the "Where" clause

    - by Duc Duy Nguyen
    Hi, I'd like to know how to make this "iif" work. Basically, I need to filter out the engineering product codes when the originator is "John Smith". Either CurrentMember is not working or the iif is not working:
      SELECT
        { ( [Time].[Fiscal Hierarchy Time Calculations].[Month to Date], [Measures].[Sell - Bookings] ) } ON COLUMNS,
        [Originators].[Originator One Letter Name].Children ON ROWS
      FROM [Sales]
      WHERE (
        [Time].[Fiscal Month].&[2010-02-01T00:00:00],
        IIF (
          [Originators].[Originator One Letter Name].CurrentMember = "John Smith",
          Except (
            [Product Codes].[Product Primary Subcategory].Children,
            [Product Codes].[Product Primary Subcategory].&[ENGINEERING]
          ),
          [Product Codes].[Product Primary Subcategory].Children
        )
      );
    Any ideas? Thanks in advance. Duy

    Read the article

  • ASP.NET Validator Controls Slowing Down Page

    - by Calvin Nguyen
    Hi all, I have an UpdatePanel that has user controls dynamically added to it. There can be a few dozen user controls at times. The page / UpdatePanel slows down big time on each postback as more user controls are added. After some digging, I was surprised to find the cause is the various CompareValidator, CustomValidator, RegularExpressionValidator and RequiredFieldValidator controls that exist on each user control. Does anyone have suggestions? It strikes me as very peculiar that the inclusion of these ASP.NET controls could have such a horrible effect on performance. Thanks, Calvin

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulate concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelper and threads but haven't had any luck. Every time I try to access memcache from within a thread, I get this error:
      ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found
    I guess the testing library of the GAE SDK tries to mimic the real environment and thus sets up the environment only for one thread (the thread running the test), so it cannot be seen by other threads. Here is a piece of code that reproduces the problem:
      package org.seamoo.cache.memcacheImpl;

      import org.testng.Assert;
      import org.testng.annotations.AfterMethod;
      import org.testng.annotations.BeforeMethod;
      import org.testng.annotations.Test;

      import com.google.appengine.api.memcache.MemcacheService;
      import com.google.appengine.api.memcache.MemcacheServiceFactory;
      import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig;
      import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

      public class MemcacheTest {
          LocalServiceTestHelper helper;

          public MemcacheTest() {
              LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig();
              helper = new LocalServiceTestHelper(memcacheConfig);
          }

          @BeforeMethod
          public void setUp() {
              helper.setUp();
          }

          /**
           * @see LocalServiceTest#tearDown()
           */
          @AfterMethod
          public void tearDown() {
              helper.tearDown();
          }

          @Test
          public void memcacheConcurrentAccess() throws InterruptedException {
              final MemcacheService service = MemcacheServiceFactory.getMemcacheService();
              Runnable runner = new Runnable() {
                  @Override
                  public void run() {
                      service.increment("test-key", 1L, 1L);
                      try {
                          Thread.sleep(200L);
                      } catch (InterruptedException e) {
                          e.printStackTrace();
                      }
                      service.increment("test-key", 1L, 1L);
                  }
              };
              Thread t1 = new Thread(runner);
              Thread t2 = new Thread(runner);
              t1.start();
              t2.start();
              while (t1.isAlive()) {
                  Thread.sleep(100L);
              }
              Assert.assertEquals((Long) (service.get("test-key")), new Long(4L));
          }
      }
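
    A possible workaround sketch, assuming the com.google.apphosting.api.ApiProxy API (getCurrentEnvironment() and setEnvironmentForCurrentThread()): the local test environment is registered per thread, so the environment captured on the test thread can be installed in each worker thread before it touches memcache.

      import com.google.apphosting.api.ApiProxy;

      public final class ApiAwareThreads {

          private ApiAwareThreads() {}

          // Wrap a Runnable so that the new thread shares the ApiProxy environment of the
          // thread that called LocalServiceTestHelper.setUp() (i.e. the test thread).
          public static Thread newThread(final Runnable body) {
              final ApiProxy.Environment environment = ApiProxy.getCurrentEnvironment();
              return new Thread(new Runnable() {
                  @Override
                  public void run() {
                      ApiProxy.setEnvironmentForCurrentThread(environment);
                      body.run();
                  }
              });
          }
      }

    With this helper, the test above would create its workers via ApiAwareThreads.newThread(runner) instead of new Thread(runner).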

    Read the article

  • Excel Macro to concatenate

    - by Harish
    I need help creating an Excel macro. I have an Excel sheet that is not consistent, and I am planning to make it uniform and structured. E.g., before:
         A     B       C       D
      1  test  tester  tester
      2  hai   test
      3  Bye   test    tested
      4  GN    test    tested  Fine
    and after:
         A     B                C       D
      1  test  testertester
      2  hai   test
      3  Bye   testtested
      4  GN    testtestedFine
    Basically, I have to find the last cell in the row that holds a value, so that based on it I can write my CONCATENATE function. In this case it would be column D, and hence my concatenate function would be =CONCATENATE(B1,C1,D1). I would like the result to be in B1, but it's not a problem if I have to hide columns. Can anyone help me with this?

    Read the article

  • Which information (files) of an Eclipse workspace should be tracked by source control?

    - by Phuong Nguyen de ManCity fan
    I want to put my Eclipse workspace under source control so that important settings can be backed up. However, there are a lot of files like *.index inside the workspace's .metadata folder. Some of the information is important, for example the Mylyn repository, but some of it is merely cached files and thus doesn't make sense to track. In short, which files inside an Eclipse workspace should be tracked so that I can restore a working workspace after problems (like a deleted metadata file, etc.)?

    Read the article

  • JPA - Performance when using multiple entity managers

    - by Nguyen Tuan Linh
    My situation: the code is not mine. I have two kinds of database: one is Dad, one is Son. In Dad, I have a table that stores JNDI names. I look up Dad using JNDI, create an entity manager, and retrieve this table. From the retrieved JNDI names, I create multiple entity managers against the multiple Son databases. The problem: Son has thousands of entities, and it takes each Son database around 10 minutes to load all of them; with 4 Son databases, that is 40 minutes. My question: is there any way to load all entities once and use them for every entity manager? Please look at the code below. For each Son JNDI:
      Map<String, String> puSonProperties = new HashMap<String, String>();
      puSonProperties.put("javax.persistence.jtaDataSource", sonJndi);
      EntityManagerFactory emf = Persistence.createEntityManagerFactory("PUSon", puSonProperties);
    (PUSon - all of the Son databases use the same persistence unit.)
      log.info("Verify entity manager for son: {0} - {1}", sonCode, emSon.find(Son_configuration.class, 0) != null ? "ok" : "failed!");
    This last call is where the loading of all entities actually begins, and it takes about 10 minutes.
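
    A minimal sketch of one mitigation, using only the standard javax.persistence API (the class and method names here are hypothetical): since every Son database shares the persistence unit PUSon, the expensive entity-metadata loading can be paid once per JNDI name by caching one EntityManagerFactory per Son and only creating lightweight EntityManagers from it afterwards.

      import java.util.HashMap;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public final class SonEntityManagerFactories {

          // One factory per Son JNDI name; the costly entity scan happens only on first use.
          private static final ConcurrentHashMap<String, EntityManagerFactory> FACTORIES =
                  new ConcurrentHashMap<String, EntityManagerFactory>();

          private SonEntityManagerFactories() {}

          public static EntityManager createEntityManager(String sonJndi) {
              EntityManagerFactory emf = FACTORIES.get(sonJndi);
              if (emf == null) {
                  Map<String, String> props = new HashMap<String, String>();
                  props.put("javax.persistence.jtaDataSource", sonJndi);
                  EntityManagerFactory created = Persistence.createEntityManagerFactory("PUSon", props);
                  EntityManagerFactory existing = FACTORIES.putIfAbsent(sonJndi, created);
                  if (existing != null) {
                      // Another thread registered a factory first; discard ours and reuse theirs.
                      created.close();
                      emf = existing;
                  } else {
                      emf = created;
                  }
              }
              return emf.createEntityManager();
          }
      }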

    Read the article

  • Unbelievable: Cannot cast from class X to its super class

    - by Phuong Nguyen de ManCity fan
    I'm encountering a very weird problem with Spring (3.0.1.RELEASE), TestNG (5.11) and Maven Surefire (2.5). I have a test class that extends a Spring helper class for TestNG so that the test context can be loaded from an XML file (which contains some bean definitions). My project was imported into Eclipse using m2eclipse (Import Maven Project). The class runs fine in the Eclipse TestNG runner. However, it throws this exception under Maven Surefire:
      Caused by: java.lang.ClassCastException: com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory
        at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:123)
        at org.springframework.beans.factory.xml.DefaultDocumentLoader.createDocumentBuilderFactory(DefaultDocumentLoader.java:89)
        at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:70)
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388)
    I have eliminated all the dependencies involved in my POM so that the two classes com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl and javax.xml.parsers.DocumentBuilderFactory should come from the JRE only (rt.jar). So this looks unbelievable to me. I wonder if there is any class-loading mechanism that can explain this behavior? Thanks.
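
    A small diagnostic sketch (the class name is hypothetical; the JAXP calls are standard): a cast failure like this typically means javax.xml.parsers.DocumentBuilderFactory is being loaded from somewhere other than the JRE, for example an xml-apis style jar that ends up on the Surefire test classpath, so printing where the interface actually comes from usually settles it.

      import java.security.CodeSource;

      import javax.xml.parsers.DocumentBuilderFactory;

      public class JaxpClassLoaderCheck {
          public static void main(String[] args) {
              // A null class loader / null code source means the class comes from the JRE (rt.jar).
              // A non-null location pointing at a jar in the local repository means a duplicate
              // copy of javax.xml.parsers is on the classpath and is shadowing the JRE version.
              CodeSource source = DocumentBuilderFactory.class.getProtectionDomain().getCodeSource();
              System.out.println("DocumentBuilderFactory loaded by: " + DocumentBuilderFactory.class.getClassLoader());
              System.out.println("DocumentBuilderFactory location:  " + (source == null ? "JRE (rt.jar)" : source.getLocation()));
          }
      }

    Run it as a plain main or drop the two println calls into the failing test; if the reported location is a jar rather than the JRE, excluding that artifact from the test classpath is the usual next step.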

    Read the article
