Search Results

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting a segmentation fault when trying to access the "Hello World" pyramid app. The error only occurs on a CentOS 5.7 setup; there is no problem whatsoever when tested on OS X and Arch Linux. Could it be a CentOS-specific issue?

        [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi
        [notice] child pid 31212 exit signal Segmentation fault (11)

    I tried to follow the troubleshooting guide posted at http://code.google.com/p/modwsgi/wiki/InstallationIssues, which suggests the crash might be caused by a missing shared library. A quick check reveals that shared libraries are not the issue:

        [centos57@localhost modules]$ ldd mod_wsgi.so
            linux-gate.so.1 => (0x00e6a000)
            libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000)
            libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000)
            libdl.so.2 => /lib/libdl.so.2 (0x00cd6000)
            libutil.so.1 => /lib/libutil.so.1 (0x00110000)
            libm.so.6 => /lib/libm.so.6 (0x0085c000)
            libc.so.6 => /lib/libc.so.6 (0x00682000)
            /lib/ld-linux.so.2 (0x0012b000)

    Then I found another clue at http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary, but unfortunately libexpat is not the source of the problem either:

        [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat
            libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000)
        [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat
        libexpat.so.1
        expat_2.0.1
        [centos57@localhost bin]$ python
        Python 2.7.2 (default, Nov 26 2011, 08:08:44)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import pyexpat
        >>> pyexpat.version_info
        (2, 0, 0)

    I've been pulling my hair out trying to figure out what I'm missing in my setup. Why does the problem occur only on CentOS? Here is the detailed setup: Apache 2.2.19, Python 2.7.2, mod_wsgi 3.3.

    /home/httpd/conf/extra/pyramid.wsgi:

        from pyramid.paster import get_app
        application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main')

    /home/httpd/conf/extra/modwsgi.conf:

        LoadModule wsgi_module modules/mod_wsgi.so
        WSGIScriptAlias /myapp /home/root/test.wsgi
        <Directory /home/root>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

        # Use only 1 Python sub-interpreter. Multiple sub-interpreters
        # play badly with C extensions.
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
        WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \
            threads=4 \
            python-path=/home/python/lib/python2.7/site-packages
        WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi
        <Directory /home/httpd/conf/extra>
            WSGIProcessGroup pyramid
            Order allow,deny
            Allow from all
        </Directory>

    Again, this same setup works on OS X and Arch Linux but not on CentOS 5.7. Could someone out there point me in the right direction before I run out of hair?

    When Apache is started under gdb, I get a couple of warnings:

        Reading symbols from /home/httpd/bin/httpd...done.
        Attaching to program: /home/httpd/bin/httpd, process 1821
        warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations

    gdb output after hitting the refresh button to load pyramid:

        (gdb) cont
        Continuing.
        warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations
        warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address
        warning: difference appears to be caused by prelink, adjusting expectations

        Program received signal SIGSEGV, Segmentation fault.
        [Switching to Thread 0x8edbb90 (LWP 1824)]
        0x0814c120 in EVP_PKEY_CTX_dup ()

    apache error_log:

        [info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1.
        [info] mod_wsgi (pid=1821): Initializing Python.
        [info] mod_wsgi (pid=1821): Attach interpreter ''.
        [info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'.
        [info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'.
        [error] hello 1
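
    One way to narrow a crash like this down is to point WSGIScriptAlias at a bare WSGI script with no framework imports at all; if that also segfaults, the fault is in the mod_wsgi/Python build on this box rather than in Pyramid. A minimal sketch (the file path is illustrative, not from the original post):

        # /home/httpd/conf/extra/bare.wsgi: hypothetical test script, no Pyramid
        def application(environ, start_response):
            # If even this plain callable crashes the daemon process, the
            # problem lies in mod_wsgi or the interpreter build, not the app.
            body = b"bare wsgi ok"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]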

  • Application stopped unexpectedly at launch

    - by Chris Stryker
    I've run this on a device and on the emulator, and the app stops unexpectedly on both. I currently have no clue what is wrong. It uses the Google Maps API; I compiled against Google APIs level 7. I followed this tutorial (with some alterations, clearly): http://developer.android.com/guide/tutorials/views/hello-mapview.html. I did use the correct API key, which the final apk is signed with. This is the source (if you compile it yourself it shouldn't work, as it is unsigned). This is the compiled, signed apk. Log:

        03-21 00:30:38.912: INFO/ActivityManager(54): Starting activity: Intent { act=android.intent.action.MAIN flg=0x10000000 cmp=com.chris.stryker.worldly/.com.poppoob.WorldlyMap }
        03-21 00:30:39.173: INFO/ActivityManager(54): Start proc com.chris.stryker.worldly for activity com.chris.stryker.worldly/.com.poppoob.WorldlyMap: pid=287 uid=10031 gids={3003, 1015}
        03-21 00:30:39.532: DEBUG/ddm-heap(287): Got feature list request
        03-21 00:30:40.185: WARN/dalvikvm(287): Unable to resolve superclass of Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap; (17)
        03-21 00:30:40.193: WARN/dalvikvm(287): Link of class 'Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap;' failed
        03-21 00:30:40.205: DEBUG/AndroidRuntime(287): Shutting down VM
        03-21 00:30:40.223: WARN/dalvikvm(287): threadid=3: thread exiting with uncaught exception (group=0x4001b188)
        03-21 00:30:40.223: ERROR/AndroidRuntime(287): Uncaught handler: thread main exiting due to uncaught exception
        03-21 00:30:40.252: ERROR/AndroidRuntime(287): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.chris.stryker.worldly/com.chris.stryker.worldly.com.poppoob.WorldlyMap}: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2417)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.access$2200(ActivityThread.java:119)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.os.Handler.dispatchMessage(Handler.java:99)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.os.Looper.loop(Looper.java:123)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.main(ActivityThread.java:4363)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.reflect.Method.invokeNative(Native Method)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.reflect.Method.invoke(Method.java:521)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at dalvik.system.NativeStart.main(Native Method)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287): Caused by: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:243)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.Instrumentation.newActivity(Instrumentation.java:1021)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2409)
        03-21 00:30:40.252: ERROR/AndroidRuntime(287):     ... 11 more
        03-21 00:30:40.300: INFO/Process(54): Sending signal. PID: 287 SIG: 3
        03-21 00:30:40.312: INFO/dalvikvm(287): threadid=7: reacting to signal 3
        03-21 00:30:40.396: INFO/dalvikvm(287): Wrote stack trace to '/data/anr/traces.txt'
        03-21 00:30:49.002: WARN/ActivityManager(54): Launch timeout has expired, giving up wake lock!
        03-21 00:30:49.685: WARN/ActivityManager(54): Activity idle timeout for HistoryRecord{458ab6d0 com.chris.stryker.worldly/.com.poppoob.WorldlyMap}

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it. :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a kind of simulation and report the results back to the database; they do selects and inserts. The load is not very high for each script, but it can be happening across 20-50 systems at the same time. We then have multiple data centers, and users all hitting the same database with this same approach.

    Our main problem is that the database is getting overloaded with connections and has to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid rewriting a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection. Very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system; the others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to open; they do not need to act immediately. Some sort of queuing system?

    It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool, and how good is this approach? Would it work for what we need? We could have one per data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Is there any other suggestion you can make? Any ideas?

    Any help would be greatly appreciated. Sadly, I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
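
    A minimal sketch of the queuing idea, assuming one small broker process per data center that owns a handful of persistent database sessions and serializes the short queries coming from the Perl scripts. The oracle_connect() helper is a hypothetical placeholder for whatever driver call would open a session; nothing here comes from the original post:

        import queue
        import threading

        def oracle_connect():
            # Hypothetical placeholder: open one long-lived Oracle session here.
            raise NotImplementedError

        jobs = queue.Queue()   # (sql, params, reply_queue) tuples from client scripts

        def worker():
            conn = oracle_connect()          # opened once, reused for many queries
            while True:
                sql, params, reply = jobs.get()
                cur = conn.cursor()
                cur.execute(sql, params)
                reply.put(cur.fetchall())    # hand the rows back to the requester

        # A handful of persistent sessions replaces hundreds of short-lived
        # connections; callers simply block on reply.get() until served.
        for _ in range(5):
            threading.Thread(target=worker, daemon=True).start()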

  • Using ASP.NET MVC 2 with Ninject 2 from scratch

    - by Rune Jacobsen
    I just did File - New Project last night on a new project. Ah, the smell of green fields. I am using the just-released ASP.NET MVC 2 (i.e. no preview or release candidate, the real thing), and thought I'd get off to a good start by using Ninject 2 (also the released version) with the MVC extensions.

    I downloaded the MVC extensions project, opened it in VS2008 SP1, built it in release mode, and then went into the mvc2\build\release folder and copied Ninject.dll and Ninject.Web.Mvc.dll from there to the Libraries folder of my project (so that I can lug them around in source control and always have the right version everywhere). I didn't include the corresponding .xml files; should I? Do they just provide IntelliSense, or some other function? Not a big deal, I believe.

    Anyhow, I followed the most up-to-date advice I could find: I referenced the DLLs in my MVC 2 project, then went to work on Global.asax.cs. First I made it inherit from NinjectHttpApplication. I removed the Application_Start() method and overrode OnApplicationStarted() instead. Here is that method:

        protected override void OnApplicationStarted()
        {
            base.OnApplicationStarted();
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            // RegisterAllControllersIn(Assembly.GetExecutingAssembly());
        }

    And I also followed the advice of VS and implemented the CreateKernel method:

        protected override Ninject.IKernel CreateKernel()
        {
            // RegisterAllControllersIn(Assembly.GetExecutingAssembly());
            return new StandardKernel();
        }

    That is all; no other modifications to the project. You'll notice that the RegisterAllControllersIn() call is commented out in two places above. I've figured I can run it in three different combinations, all with their funky side effects:

    1. Running it like above. I am then presented with the standard "Welcome to ASP.NET MVC" page in all its glory. However, after this page is displayed correctly in the browser, VS shows me an exception that was thrown. It throws in NinjectControllerFactory.GetControllerInstance(), which was called with a null value in the controllerType parameter. Notice that this happens after the /Home page is rendered; I have no idea why it is called again, and by using breakpoints I've already determined that GetControllerInstance() was successfully called for the HomeController. Why this new call with controllerType as null? I really have no idea. Pressing F5 at this point takes me back to the browser, no complaints there.

    2. Uncommenting the RegisterAllControllersIn() call in CreateKernel(). This is where stuff really starts to get funky: now I get a 404 error. Sometimes I have also gotten an ArgumentNullException on the RegisterAllControllersIn() line, but that is pretty rare, and I have not been able to reproduce it.

    3. Uncommenting the RegisterAllControllersIn() call in OnApplicationStarted() (and putting the comment back on the one in CreateKernel()). This results in behavior that seems exactly like that in point 1.

    So, to keep from going on forever: is there an exact step-by-step guide on how to set up an MVC 2 project with Ninject 2 (both non-beta release versions) to get the controllers provided by Ninject? Of course I will then start providing some actual stuff for injection (like ISession objects and repositories, loggers, etc.), but I thought I'd get this working first. Any help will be highly appreciated! (Also posted to the Ninject Google Group.)

  • What would cause ANY .NET application to crash immediately... except a project I create and Debug in Visual Studio

    - by blak3r
    My software recently got deployed to a customer who said that the application was crashing immediately after it started. After some initial debugging, the customer provided me remote access to one of the computers that was unable to run the application. I found that the crash wasn't specific to my application: any application that depended on the .NET Framework crashed immediately. Conveniently, Visual Studio 2008 was installed, so I created a quick hello-world application on it and clicked Debug. The application worked fine. But when I then tried to execute the generated binary (bin/Debug/HelloWorld.exe) outside of Visual Studio, it crashed.

    List of things I've tried (UPDATED):

    1. I checked that "Everyone" has Read & Execute permissions for C:\Windows.
    2. To verify that the problem was with the .NET Framework (and not my application), I downloaded Paint.NET onto the computer. Its setup front-end crashed in the same manner.
    3. Performed a repair of the .NET Framework as outlined in http://support.microsoft.com/kb/908077 (boy, was this fun and time-consuming). No luck.
    4. Installed .NET 3.5 SP1 (before, it just had .NET 3.5). Note: my application targets 2.0, so I did this more as a long shot... but I learned in the process that .NET 3.5 SP1 also updates the underlying frameworks.
    5. Ran Aaron Stebner's .NET Setup Verification Tool. The tool indicated that .NET was successfully installed. (I forget if I checked all the versions, but at least 2.0 worked.)
    6. Tested some mini hello-world applications targeted at .NET 2.0 and .NET 3.5; both crashed in the same way.
    7. Tried launching .NET apps via the WinDbg command line. Doing this did allow me to invoke my simple hello-world applications.

    So a simple .NET hello world works when invoked by WinDbg or launched via Debug in Visual Studio, but not when executed standalone. I created a dump file using WinDbg; it wasn't all that revealing to me:

        FAULTING_IP:
        mscorwks!PEImage::GetEntryPointToken+21
        79f4ff9d f6401010  test byte ptr [eax+10h],10h

        EXCEPTION_RECORD: 0012f710 -- (.exr 0x12f710)
        ExceptionAddress: 79f4ff9d (mscorwks!PEImage::GetEntryPointToken+0x00000021)
        ExceptionCode: c0000005 (Access violation)
        ExceptionFlags: 00000000
        NumberParameters: 2
        Parameter[0]: 00000000
        Parameter[1]: 00000010
        Attempt to read from address 00000010

        FAULTING_THREAD: 00000b44
        PROCESS_NAME: MyProcess.exe
        ERROR_CODE: (NTSTATUS) 0x80000003 - {EXCEPTION} Breakpoint A breakpoint has been reached.
        EXCEPTION_CODE: (HRESULT) 0x80000003 (2147483651) - One or more arguments are invalid
        DETOURED_IMAGE: 1
        NTGLOBALFLAG: 0
        APPLICATION_VERIFIER_FLAGS: 0

        MANAGED_STACK: !dumpstack -EE
        OS Thread Id: 0xb44 (0)
        Current frame:
        ChildEBP RetAddr Caller,Callee

        EXCEPTION_OBJECT: !pe cb10b4
        Exception object: 00cb10b4
        Exception type: System.ExecutionEngineException
        Message: <none>
        InnerException: <none>
        StackTrace (generated): <none>
        StackTraceString: <none>
        HResult: 80131506
        MANAGED_OBJECT_NAME: System.ExecutionEngineException

        CONTEXT: 0012f72c -- (.cxr 0x12f72c)
        eax=00000000 ebx=00000000 ecx=00000000 edx=0000000e esi=001a1490 edi=00000001
        eip=79f4ff9d esp=0012f9f8 ebp=0012fa1c iopl=0 nv up ei pl zr na pe nc
        cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
        mscorwks!PEImage::GetEntryPointToken+0x21:
        79f4ff9d f6401010  test byte ptr [eax+10h],10h ds:0023:00000010=??
        Resetting default scope

        READ_ADDRESS: 00000010
        FOLLOWUP_IP:
        mscorwks!PEImage::GetEntryPointToken+21
        79f4ff9d f6401010  test byte ptr [eax+10h],10h
        BUGCHECK_STR: APPLICATION_FAULT_NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        PRIMARY_PROBLEM_CLASS: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        DEFAULT_BUCKET_ID: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        LAST_CONTROL_TRANSFER: from 79ef02b5 to 79f4ff9d

        STACK_TEXT:
        79f4ff9d mscorwks!PEImage::GetEntryPointToken+0x21
        79ef02b5 mscorwks!PEFile::GetEntryPointToken+0xa0
        79eefeaf mscorwks!SystemDomain::ExecuteMainMethod+0xd4
        79fb9793 mscorwks!ExecuteEXE+0x59
        79fb96df mscorwks!_CorExeMain+0x15c
        7900b1b3 mscoree!_CorExeMain+0x2c
        7c817077 kernel32!BaseProcessStart+0x23

        SYMBOL_STACK_INDEX: 0
        SYMBOL_NAME: mscorwks!PEImage::GetEntryPointToken+21
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: mscorwks
        IMAGE_NAME: mscorwks.dll
        DEBUG_FLR_IMAGE_TIMESTAMP: 471ef729
        STACK_COMMAND: .cxr 0012F72C ; kb ; dds 12f9f8 ; kb
        FAILURE_BUCKET_ID: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN_80000003_mscorwks.dll!PEImage::GetEntryPointToken
        BUCKET_ID: APPLICATION_FAULT_NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN_DETOURED_mscorwks!PEImage::GetEntryPointToken+21
        WATSON_STAGEONE_URL: http://watson.microsoft.com/StageOne/MyProcess_exe/2_4_4_39/4a8a192c/unknown/0_0_0_0/bbbbbbb4/80000003/00000000.htm?Retriage=1
        Followup: MachineOwner

    Edit 1: The event log details for this error say it's a .NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (7A097706)(80131506).

    Edit 2 (10-7-09): This issue is still active.

    Edit 3 (3-29-10): This update is to let everyone know that I never did successfully solve the problem. The customer whose machine this was on lost interest in solving it and just reimaged the machine. :( Thanks for all the contributions, though.

  • jQuery DatePicker - 'fake' click on page load

    - by Danny
    Hey! I've got a quick question about the jQuery UI DatePicker. When I load the page, defaultDate: 0 works fine for selecting the current day's date. I would like to create a 'fake' click on that date so it will execute my JavaScript function and retrieve information from the database. I tried calling the function when the page loads, but that doesn't work.

        $(document).ready(function(){
            $("#datepicker").datepicker({
                gotoCurrent: false,
                onSelect: function(date, inst) {
                    ajaxFunction(date);
                },
                dateFormat: 'dd-mm-yy',
                defaultDate: 0,
                changeMonth: true,
                changeYear: true
            });
        });

        // Browser support code
        function ajaxFunction(date){
            var ajaxRequest; // The variable that makes Ajax possible!
            try {
                // Opera 8.0+, Firefox, Safari
                ajaxRequest = new XMLHttpRequest();
            } catch (e) {
                // Internet Explorer browsers
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {
                        // Something went wrong
                        alert("Your browser broke!");
                        return false;
                    }
                }
            }
            // Receive data sent from the server
            ajaxRequest.onreadystatechange = function(){
                if (ajaxRequest.readyState == 4){
                    var ajaxDisplay = document.getElementById('ajaxDiv');
                    ajaxDisplay.innerHTML = ajaxRequest.responseText;
                }
            }
            var queryString = "?date=" + date;
            ajaxRequest.open("GET", "getDiary.php" + queryString, true);
            ajaxRequest.send(null);
        }

        function ajaxAdd(){
            var ajaxRequest; // The variable that makes Ajax possible!
            try {
                // Opera 8.0+, Firefox, Safari
                ajaxRequest = new XMLHttpRequest();
            } catch (e) {
                // Internet Explorer browsers
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {
                        // Something went wrong
                        alert("Your browser broke!");
                        return false;
                    }
                }
            }
            var day1 = $("#datepicker").datepicker('getDate').getDate();
            var day2 = (day1 < 10) ? '0' + day1 : day1;
            var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
            var month2 = (month1 < 10) ? '0' + month1 : month1;
            var year1 = $("#datepicker").datepicker('getDate').getFullYear();
            var year2 = (year1 < 1000) ? year1 + 1900 : year1;
            var fullDate = day2 + "-" + month2 + "-" + year2;
            var queryString = "?breakfast=" + diary1.breakfast.value;
            queryString = queryString + "&lunch=" + diary1.lunch.value;
            queryString = queryString + "&dinner=" + diary1.dinner.value;
            queryString = queryString + "&date=" + fullDate;
            ajaxRequest.open("GET", "addDiary.php" + queryString, true);
            ajaxRequest.send(null);
            alert("Added value to database!");
            diary1.breakfast.value = "";
            diary1.lunch.value = "";
            diary1.dinner.value = "";
            ajaxFunction(fullDate);
        }

    I have pasted my DatePicker setup and the two functions that are used (one to retrieve information from the database, and one to store it). Basically I want to mirror the onSelect function of the DatePicker when the page first loads. Thanks!

  • Lots of mysql Sleep processes

    - by user259284
    Hello, I am still having trouble with my MySQL server. It seems that since I optimized it, the tables have kept growing and now it is sometimes very slow again. I have no idea how to optimize further. The MySQL server has 48 GB of RAM and mysqld is using about 8; most of the tables are InnoDB. The site has about 2000 users online. I also run EXPLAIN on every query, and every one of them is indexed.

    MySQL processes: http://www.pik.ba/mysqlStanje.php

    my.cnf:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # especially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address    = 10.100.27.30
        #
        # * Fine Tuning
        #
        key_buffer              = 64M
        key_buffer_size         = 512M
        max_allowed_packet      = 16M
        thread_stack            = 128K
        thread_cache_size       = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 1000
        table_cache             = 1000
        join_buffer_size        = 2M
        tmp_table_size          = 2G
        max_heap_table_size     = 2G
        innodb_buffer_pool_size = 3G
        innodb_additional_mem_pool_size = 128M
        innodb_log_file_size    = 100M
        log-slow-queries        = /var/log/mysql/slow.log
        sort_buffer_size        = 5M
        net_buffer_length       = 5M
        read_buffer_size        = 2M
        read_rnd_buffer_size    = 12M
        thread_concurrency      = 10
        ft_min_word_len         = 3
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 512M
        #query_cache_type       = 1
        #query_cache_min_res_unit = 2K
        #join_buffer_size       = 1M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file       = /var/log/mysql/mysql.log
        #general_log            = 1
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries       = /var/log/mysql/mysql-slow.log
        #long_query_time        = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id              = 1
        #log_bin                = /var/log/mysql/mysql-bin.log
        expire_logs_days        = 10
        max_binlog_size         = 100M
        #binlog_do_db           = include_database_name
        #binlog_ignore_db       = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet      = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer              = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
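
    One setting worth a look for the pile-up of Sleep processes, and not present in the file above (so treat it as a suggested addition, not part of the original configuration): connections that are idle show as Sleep until wait_timeout expires, and the MySQL default of 28800 seconds keeps finished clients hanging around for hours. Something like this in the [mysqld] section lets the server reap them sooner:

        # Hypothetical addition: close connections idle for more than 60 seconds
        wait_timeout            = 60
        interactive_timeout     = 60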

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES, and I keep hitting the 24 MB memory limit; my application keeps crashing with error code 101. I tried really hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.

    I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, Object Alloc settles around 7 MB, Leaks shows two or three leaks a few bytes in size, the Gart Object Size is about 5 MB, and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went. When I dig into the object allocations, most of the memory is in the textures, exactly as I would expect; but both my own texture-allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else worth mentioning, and Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could be allocating so much memory, but with no results. What drives me wild is the difference between Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in Object Allocations, shouldn't they? I've already tried the usual suspects, i.e. UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?

    What I have found so far:

    1. I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the consumption really is that high. My fault.

    2. I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but Memory Monitor can't tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated-memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen; sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from "my" memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real - textures < object alloc < real). Go figure.

    3. I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further out, but I could not find any hard numbers. The consensus is that 25-30 MB is the ceiling.

    4. When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in Memory Monitor hits zero, the system starts killing.

    I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it. I

  • Where to Store the Protection Trial Info for Software Protection Purpose

    - by Peter Lee
    It might be a duplicate of other questions, but I swear I googled a lot and searched StackOverflow.com a lot, and I cannot find the answer to my question: in a C#/.NET application, where do you store trial-protection info such as the expiration date or the number of times used?

    I understand that all kinds of software-protection strategies can be cracked by a sophisticated hacker (because they can almost always get around the expiration-checking step). What I'm going to do is just protect it in a reasonable manner, so that a "common" or "advanced" user cannot screw it up. OK, as proof that I have googled a lot and searched a lot at StackOverflow.com, I'm listing all the possible strategies I found:

    1. A Registry entry. First, some users might not have access even to read the Registry. Second, if we put the trial info in a Registry entry, the user can always find out where it is by comparing the differences before and after the software installation; they can then simply change it. OK, you might say we should encrypt the trial info, and yes, we can do that. But what if the user just changes the system date before installing? OK, you might say we should also store a last-used date; if something looks wrong, the last-used date can work as a guard. But what if the user just uninstalls the software, deletes all Registry entries related to it, and then reinstalls it? I have no idea how to deal with this. Please help.

    2. A plain file. There are a couple of places to put it: (a) a simple XML file under the software's installation path, or (b) a configuration file. Again, the user can just uninstall the software, remove these plain files, and reinstall.

    3. The software itself. If we put the trial info (the expiration date; we cannot store the number of times used) in the software itself, it is still susceptible to the cases mentioned above. Furthermore, it's not even cool to do so.

    4. A trial product key. This works like a licensing process: we put the trial info into an RSA-signed string. However, it requires too many steps for a user who just wants to try the software (they might lose patience): (a) the user downloads the software; (b) the user sends an email requesting a trial product key, providing a user name (or email) or hardware info; (c) the server receives the request, RSA-signs it and sends it back to the user; (d) the user can now use the software under the conditions (expiration date and number of times used). Now the server has a record of the user's name or hardware info, so the user will be rejected if requesting a second trial. Is it even legal to collect hardware info? In a word, the user has to do one extra step (requesting a trial product key) just to try the software, which is not cool (thinking of myself as a user).

    NOTE: This question is not about licensing; it's about where to store the TRIAL info. After the trial expires, the user should ask for a license (CD key / product key). For that I'm going to use an RSA signature (bound to the user's hardware).
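
    A minimal sketch of strategy 4, shown in Python with the cryptography package purely for illustration (the question itself is about C#, where RSACryptoServiceProvider would play the same role; the user name, dates and counts below are invented values): the vendor signs the trial terms with a private key, and the application verifies them with an embedded public key, so editing the stored expiry date invalidates the signature.

        import json
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # Vendor side: sign the trial terms once per trial request.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        terms = json.dumps({"user": "someone@example.com",   # illustrative values
                            "expires": "2010-12-31",
                            "max_uses": 30}).encode()
        signature = private_key.sign(terms, padding.PKCS1v15(), hashes.SHA256())

        # Application side: verify with the embedded public key before trusting
        # the expiry date; a tampered date no longer matches the signature.
        public_key = private_key.public_key()
        public_key.verify(signature, terms, padding.PKCS1v15(), hashes.SHA256())

    This does not stop a user from rolling the system clock back or reinstalling, which is exactly the weakness the question describes for every local storage location; it only prevents direct editing of the stored terms.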

  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split an image of around 4000x6000 pixels into 4 parts (amongst many other tasks), and I need this to be as quick as possible for multiple users. My current code for doing this is:

        var bitmaps = new RenderTargetBitmap[elements.Length];
        using (var stream = blobService.Stream(key))
        {
            BitmapImage bi = new BitmapImage();
            bi.BeginInit();
            bi.StreamSource = stream;
            bi.EndInit();

            for (var i = 0; i < elements.Length; i++)
            {
                var element = elements[i];
                TransformGroup transformGroup = new TransformGroup();
                TranslateTransform translateTransform = new TranslateTransform();
                translateTransform.X = -element.Left;
                translateTransform.Y = -element.Top;
                transformGroup.Children.Add(translateTransform);

                DrawingVisual vis = new DrawingVisual();
                DrawingContext cont = vis.RenderOpen();
                cont.PushTransform(transformGroup);
                cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight)));
                cont.Close();

                RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default);
                rtb.Render(vis);
                bitmaps[i] = rtb;
            }
        }

        for (var i = 0; i < bitmaps.Length; i++)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                PngBitmapEncoder encoder = new PngBitmapEncoder();
                encoder.Frames.Add(BitmapFrame.Create(bitmaps[i]));
                encoder.Save(ms);
                var regionKey = WebPath.Variant(key, elements[i].Id);
                saveBlobService.Save("image/png", regionKey, ms);
            }
        }

    I am running multiple threads that take jobs off a queue. I am finding that if this part of the code is hit by 4 threads at once, I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj), but that isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split), but I still get the out-of-memory exceptions (this part of the code executes quite quickly). Is this normal, considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available?

    UPDATE: My new code, as per Moozhe's help:

        public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements)
        {
            using (var stream = blobService.Stream(key))
            {
                foreach (var element in elements)
                {
                    stream.Position = 0;
                    BitmapImage bi = new BitmapImage();
                    bi.BeginInit();
                    bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height);
                    bi.StreamSource = stream;
                    bi.EndInit();

                    DrawingVisual vis = new DrawingVisual();
                    DrawingContext cont = vis.RenderOpen();
                    cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height)));
                    cont.Close();

                    RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default);
                    rtb.Render(vis);

                    using (MemoryStream ms = new MemoryStream())
                    {
                        PngBitmapEncoder encoder = new PngBitmapEncoder();
                        encoder.Frames.Add(BitmapFrame.Create(rtb));
                        encoder.Save(ms);
                        var regionKey = WebPath.Variant(key, element.Id);
                        blobService.Save("image/png", regionKey, ms);
                    }
                }
            }
        }
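
    For comparison, here is the shape of the fix in the updated code, encode and release each region inside the loop instead of collecting every rendered bitmap first, as a rough Python/Pillow sketch (function and file names are illustrative, not from the original code):

        from PIL import Image  # Pillow

        def generate_regions(path, elements):
            """elements: (region_id, left, top, width, height) tuples, as in the post."""
            with Image.open(path) as source:
                for region_id, left, top, width, height in elements:
                    tile = source.crop((left, top, left + width, top + height))
                    # Save each tile immediately so only one region is held
                    # alongside the source, mirroring the per-iteration encode above.
                    tile.save("{0}-{1}.png".format(path, region_id))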

  • Memory Troubles with UIImagePicker

    - by Dan Ray
    I'm building an app that has several different sections to it, all of which are pretty image-heavy. It ties in with my client's website, and they're a "high-design" type of outfit. One piece of the app is images uploaded from the camera or the library, and a table view that shows a grid of thumbnails.

    Pretty reliably, when I'm dealing with the camera version of UIImagePickerController, I get hit with a low-memory warning. If I bounce around that part of the app for a while, I occasionally, and non-repeatably, crash with "status:10 (SIGBUS)" in the debugger. On a low-memory warning, my root view controller for that part of the app goes to my data-management singleton, cruises through the arrays of cached data, and kills the biggest piece, the image associated with each entry. Thusly:

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];

            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Low Memory Warning"
                                                            message:@"Cleaning out events data"
                                                           delegate:nil
                                                  cancelButtonTitle:@"All right then."
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];

            NSInteger spaceSaved;
            DataManager *data = [DataManager sharedDataManager];
            for (Event *event in data.eventList) {
                spaceSaved += [(NSData *)UIImagePNGRepresentation(event.image) length];
                event.image = nil;
                spaceSaved -= [(NSData *)UIImagePNGRepresentation(event.image) length];
            }
            NSString *titleString = [NSString stringWithFormat:@"Saved %d on event images", spaceSaved];

            for (WondrMark *mark in data.wondrMarks) {
                spaceSaved += [(NSData *)UIImagePNGRepresentation(mark.image) length];
                mark.image = nil;
                spaceSaved -= [(NSData *)UIImagePNGRepresentation(mark.image) length];
            }
            NSString *messageString = [NSString stringWithFormat:@"And total %d on event and mark images", spaceSaved];
            NSLog(@"%@ - %@", titleString, messageString);

            // Relinquish ownership of any cached data, images, etc. that aren't in use.
        }

    As you can see, I'm making a (poor) attempt to eyeball the memory space I'm freeing up. I know it's not telling me about the actual memory footprint of the UIImages themselves, but it gives me SOME numbers at least, so I can see that SOMETHING is happening. (Sorry for the ham-fisted way I build that NSLog message, too; I was going to fire another UIAlertView, but realized it'd be more useful to log it.)

    Pretty reliably, after toodling around in the image portion of the app for a while, I'll pull up the camera interface and get the low-memory UIAlertView three or four times in quick succession. Here's the NSLog output from the last time I saw it:

        2010-05-27 08:55:02.659 EverWondr[7974:207] Saved 109591 on event images - And total 1419756 on event and mark images
        wait_fences: failed to receive reply: 10004003
        2010-05-27 08:55:08.759 EverWondr[7974:207] Saved 4 on event images - And total 392695 on event and mark images
        2010-05-27 08:55:14.865 EverWondr[7974:207] Saved 4 on event images - And total 873419 on event and mark images
        2010-05-27 08:55:14.969 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images
        2010-05-27 08:55:15.064 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images

    And then pretty soon after that we get our SIGBUS exit. So that's the situation. Now my specific questions:

    1. THE time I see this happening is when the UIImagePickerController's camera iris shuts. I click the button to take the picture, it does the "click" animation, and Instruments shows my memory footprint going from about 10 MB to about 25 MB, where it sits until the image is delivered to my UIViewController, at which point usage drops back to 10 or 11 MB. If we make it through that without a memory warning, we're golden, but most likely we don't. Is there anything I can do to make that not be so expensive?

    2. I have NSZombies enabled. Am I understanding correctly that that actually prevents memory from being freed? Am I subjecting my app to an unfair test environment?

    3. Is there some way to programmatically get my memory usage, or at least the usage for a UIImage object? I've scoured the docs and don't see anything about that.

  • codes to convert from avi to asf

    - by George2
    Hello everyone. No matter what library/SDK it takes, I want to convert from AVI to ASF very quickly (I could even sacrifice some quality of video and audio). I am working on the Windows platform (Vista and 2008 Server); a .NET SDK or sample code is preferred, but C++ code is also fine. :-)

    I learned from the link below that there could be a very quick way to convert from AVI to ASF to support streaming better, as mentioned there: one "could convert the video from AVI to ASF format using a simple copy (i.e. the content is the same, but container changes)". My question is, after some hours of study and trials of various SDKs/tools, as a newbie I do not know how to begin, so I am asking for reference sample code to do this task. :-) (As this is a different issue, we decided to start a new topic.)

    http://stackoverflow.com/questions/743220/streaming-avi-file-issue

    Thanks in advance, George

    EDIT 1: I got the ffmpeg binary from http://ffmpeg.arrozcru.org/autobuilds/ffmpeg-latest-mingw32-static.tar.bz2 and then ran the following command:

        C:\software\ffmpeg-latest-mingw32-static\bin>ffmpeg.exe -i test.avi -acodec copy -vcodec copy test.asf
        FFmpeg version SVN-r18506, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaac --enable-pthreads --enable-libvorbis --enable-libmp3lame --enable-libopenjpeg --enable-libtheora --enable-libspeex --enable-libxvid --enable-libfaad --enable-libschroedinger --enable-libx264
          libavutil     50. 3. 0 / 50. 3. 0
          libavcodec    52.25. 0 / 52.25. 0
          libavformat   52.32. 0 / 52.32. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0. 7. 1 /  0. 7. 1
          built on Apr 14 2009 04:04:47, gcc: 4.2.4
        Input #0, avi, from 'test.avi':
          Duration: 00:00:44.86, start: 0.000000, bitrate: 5291 kb/s
            Stream #0.0: Video: msvideo1, rgb555le, 1280x1024, 5 tbr, 5 tbn, 5 tbc
            Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Output #0, asf, to 'test.asf':
            Stream #0.0: Video: CRAM / 0x4D415243, rgb555le, 1280x1024, q=2-31, 1k tbn, 5 tbc
            Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame=  224 fps=222 q=-1.0 Lsize=   29426kB time=44.80 bitrate=5380.7kbits/s
        video:26910kB audio:1932kB global headers:0kB muxing overhead 2.023317%

    I then get the following error when trying to play the result in Windows Media Player; does anyone have any ideas?

    http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6

  • Should we hire someone who writes C in Perl?

    - by paxdiablo
    One of my colleagues recently interviewed some candidates for a job, and one said they had very good Perl experience. Since my colleague didn't know Perl, he asked me for a critique of some code written (off-site) by that potential hire, so I had a look and told him my concerns (the main one being that it originally had no comments, and it's not like we gave them much time). However, the code works, so I'm loath to say no-go without some more input.

    Another concern is that this code basically looks exactly how I'd code it in C. It's been a while since I did Perl (and I didn't do a lot; I'm more of a Python bod for quick scripts), but I seem to recall that it was a much more expressive language than what this candidate used. I'm looking for input from real Perl coders, and suggestions for how the code could be improved (and why a Perl coder should know that method of improvement). You can also wax lyrical about whether people who write one language in a totally different language should (or shouldn't) be hired. I'm interested in your arguments, but this question is primarily a critique of the code.

    The spec was to successfully process a CSV file as follows and output the individual fields:

        User ID,Name , Level,Numeric ID
        pax, Pax Morgan ,admin,0
        gt," Turner, George" rubbish,user,1
        ms,"Mark \"X-Men\" Spencer","guest user",2
        ab,, "user","3"

    The output was to be something like this (the potential hire's code actually produced it):

        User ID,Name , Level,Numeric ID:
         [User ID]
         [Name]
         [Level]
         [Numeric ID]
        pax, Pax Morgan ,admin,0:
         [pax]
         [Pax Morgan]
         [admin]
         [0]
        gt," Turner, George " rubbish,user,1:
         [gt]
         [ Turner, George ]
         [user]
         [1]
        ms,"Mark \"X-Men\" Spencer","guest user",2:
         [ms]
         [Mark "X-Men" Spencer]
         [guest user]
         [2]
        ab,, "user","3":
         [ab]
         []
         [user]
         [3]

    Here is the code they submitted:

        #!/usr/bin/perl

        # Open file.
        open (IN, "qq.in") || die "Cannot open qq.in";

        # Process every line.
        while (<IN>) {
            chomp;
            $line = $_;
            print "$line:\n";

            # Process every field in line.
            while ($line ne "") {
                # Skip spaces and start with empty field.
                if (substr ($line,0,1) eq " ") {
                    $line = substr ($line,1);
                    next;
                }
                $field = "";
                $minlen = 0;

                # Detect quoted field or otherwise.
                if (substr ($line,0,1) eq "\"") {
                    $line = substr ($line,1);
                    $pastquote = 0;
                    while ($line ne "") {
                        # Special handling for quotes (\\ and \").
                        if (length ($line) >= 2) {
                            if (substr ($line,0,2) eq "\\\"") {
                                $field = $field . "\"";
                                $line = substr ($line,2);
                                next;
                            }
                            if (substr ($line,0,2) eq "\\\\") {
                                $field = $field . "\\";
                                $line = substr ($line,2);
                                next;
                            }
                        }
                        # Detect closing quote.
                        if (($pastquote == 0) && (substr ($line,0,1) eq "\"")) {
                            $pastquote = 1;
                            $line = substr ($line,1);
                            $minlen = length ($field);
                            next;
                        }
                        # Only worry about comma if past closing quote.
                        if (($pastquote == 1) && (substr ($line,0,1) eq ",")) {
                            $line = substr ($line,1);
                            last;
                        }
                        $field = $field . substr ($line,0,1);
                        $line = substr ($line,1);
                    }
                } else {
                    while ($line ne "") {
                        if (substr ($line,0,1) eq ",") {
                            $line = substr ($line,1);
                            last;
                        }
                        if ($pastquote == 0) {
                            $field = $field . substr ($line,0,1);
                        }
                        $line = substr ($line,1);
                    }
                }

                # Strip trailing space.
                while ($field ne "") {
                    if (length ($field) == $minlen) {
                        last;
                    }
                    if (substr ($field,length ($field)-1,1) eq " ") {
                        $field = substr ($field,0, length ($field)-1);
                        next;
                    }
                    last;
                }
                print " [$field]\n";
            }
        }
        close (IN);
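
    For contrast, this is the kind of brevity the question is alluding to when a library is left to handle the quoting rules, shown here in Python's standard csv module purely as an illustration (Text::CSV would be the Perl analogue). It does not replicate every corner of the spec above; trailing padding on unquoted fields and junk after a closing quote would still need extra handling, but the hand-rolled character-by-character state machine disappears:

        import csv

        # The spec's dialect escapes quotes as \" rather than "", so turn off
        # doublequote and set an escape character instead.
        with open("qq.in") as handle:
            for row in csv.reader(handle, escapechar="\\", doublequote=False,
                                  skipinitialspace=True):
                print(" ".join("[%s]" % field for field in row))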

  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes, and I need to overwrite some generated output. I launch an instruction that does an archetype:generate in a directory where the archetype output already exists. Is there a parameter that lets me overwrite the existing files? I have searched the Maven definitive guide, but it states that the only parameters accepted are:

        -DgroupId
        -DartifactId
        -Dversion
        -DpackageName
        -DarchetypeGroupId
        -DarchetypeArtifactId
        -DarchetypeVersion
        -DinteractiveMode

    I could just search the directory and delete the files, but this process is going to be done automatically (so no human involved, no brains involved), and I wouldn't like the machine deleting things at random. Thanks for all!

    Edit: I almost forgot, here is some Maven trace:

        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'archetype'.
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO]    task-segment: [archetype:generate] (aggregator-style)
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing archetype:generate
        [INFO] No goals needed for project - skipping
        [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'.
        [INFO] Setting property: velocimacro.messages.on => 'false'.
        [INFO] Setting property: resource.loader => 'classpath'.
        [INFO] Setting property: resource.manager.logwhenfound => 'false'.
        [INFO] [archetype:generate {execution: default-cli}]
        [INFO] Generating project in Batch mode
        [INFO] Archetype defined by properties
        [INFO] ----------------------------------------------------------------------------
        [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0
        [INFO] ----------------------------------------------------------------------------
        [INFO] Parameter: groupId, Value: foo.tecnologia
        [INFO] Parameter: packageName, Value: foo.tecnologia
        [INFO] Parameter: basedir, Value: C:\temp\Desarrollo
        [INFO] Parameter: package, Value: foo.tecnologia
        [INFO] Parameter: version, Value: 1.0
        [INFO] Parameter: artifactId, Value: Foo-Lib-Test
        [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory
        org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory
            at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143)
            at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286)
            at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69)
            at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at com.foo.model.CSMavenCli.main(CSMavenCli.java:391)
            at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399)
            at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167)
            at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93)
            at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58)
            at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67)
            at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51)
            at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
            at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305)
            at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
            at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
            at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
            at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
            at java.lang.Thread.run(Unknown Source)
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory
        Directory Foo-Lib-Test already exists - please run from a clean directory
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 1 second
        [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010
        [INFO] Final Memory: 15M/28M
        [INFO] ------------------------------------------------------------------------
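
    This version of archetype:generate has no overwrite switch, which is why the trace ends in "please run from a clean directory". A cautious sketch of the "delete, then generate" workaround the question mentions, limited to the one directory the archetype would regenerate so the machine never deletes anything else (the paths and property values are taken from the trace purely for illustration):

        import shutil
        import subprocess
        from pathlib import Path

        base = Path(r"C:\temp\Desarrollo")
        artifact_id = "Foo-Lib-Test"

        target = base / artifact_id
        if target.is_dir():
            # Remove only the single directory the archetype will recreate.
            shutil.rmtree(target)

        # On Windows the Maven launcher is a batch file, so the full path to
        # mvn.bat (or shell=True) may be needed; argument values are illustrative.
        subprocess.check_call(
            ["mvn", "archetype:generate", "-B",
             "-DgroupId=foo.tecnologia",
             "-DartifactId=" + artifact_id,
             "-Dversion=1.0",
             "-DarchetypeArtifactId=archetype-foo-lib",
             "-DarchetypeVersion=1.0"],
            cwd=str(base))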

  • c# SendMessage and Skype woes

    - by Xcelled194
    I'm trying to create an add-on to Skype with C#. I don't want to use Skype4COM, as I'd like the experience with messages and such. Unfortunately, the messages are tripping me up. I've got the pumps and such set up. They all work, and my app successfully sends the "APIDiscover" message to Skype, gets a "PendingAuth" response and then the "AttachSuccess" message. However, when I try to send "ping" to Skype (to which it should reply "pong") nothing happens. The return code from SendMessage is 0 but Marshall.GetLastWin32Error is 1400 (Invalid handle). The handle was returned with the AttachSuccess method. The equivalent C++ code does work, so I'm at a loss. First is the C++ code I'm using as a guide: Here's the (cut down) message pump. You can ignore everything but where I put the //<---- static LRESULT APIENTRY SkypeAPITest_Windows_WindowProc( HWND hWindow, UINT uiMessage, WPARAM uiParam, LPARAM ulParam) { LRESULT lReturnCode; bool fIssueDefProc; lReturnCode=0; fIssueDefProc=false; switch(uiMessage) { case WM_COPYDATA: if( hGlobal_SkypeAPIWindowHandle==(HWND)uiParam ) { PCOPYDATASTRUCT poCopyData=(PCOPYDATASTRUCT)ulParam; printf( "Message from Skype(%u): %.*s\n", poCopyData->dwData, poCopyData->cbData, poCopyData->lpData); lReturnCode=1; } break; default: if( uiMessage==uiGlobal_MsgID_SkypeControlAPIAttach ) { switch(ulParam) { case SKYPECONTROLAPI_ATTACH_SUCCESS: printf("!!! Connected; to terminate issue #disconnect\n"); hGlobal_SkypeAPIWindowHandle=(HWND)uiParam;//<---- Right here is where we receive the handle from Skype. break; } if( fIssueDefProc ) lReturnCode=DefWindowProc( hWindow, uiMessage, uiParam, ulParam); return(lReturnCode); } and this is the (again dumbed down) "sending message" code void __cdecl Global_InputProcessingThread(void *) { static char acInputRow[1024]; bool fProcessed; if( SendMessageTimeout( HWND_BROADCAST, uiGlobal_MsgID_SkypeControlAPIDiscover, (WPARAM)hInit_MainWindowHandle, 0, SMTO_ABORTIFHUNG, 1000, NULL)!=0 ) { while(Global_Console_ReadRow( acInputRow, sizeof(acInputRow)-1)) { if( fProcessed==false && hGlobal_SkypeAPIWindowHandle!=NULL ) { COPYDATASTRUCT oCopyData; // send command to skype oCopyData.dwData=0; oCopyData.lpData=acInputRow; oCopyData.cbData=strlen(acInputRow)+1; if( oCopyData.cbData!=1 ) { if( SendMessage( hGlobal_SkypeAPIWindowHandle, WM_COPYDATA, (WPARAM)hInit_MainWindowHandle, (LPARAM)&oCopyData)==FALSE ) { hGlobal_SkypeAPIWindowHandle=NULL; printf("!!! 
Disconnected\n"); } } } } } SendMessage( hInit_MainWindowHandle, WM_CLOSE, 0, 0); SetEvent(hGlobal_ThreadShutdownEvent); fGlobal_ThreadRunning=false; } And now here's my C# public bool PreFilterMessage(ref Message m) { Console.WriteLine(m.ToString()); if (m.Msg == WM_COPYDATA && SkypeAPIWindowHandle == m.WParam) { SkypeMessage(m); return true; } if (m.Msg == MsgApiAttach) { switch (m.LParam.ToInt32()) { case (int)SkypeControlAPIAttach.SUCCESS: SkypeAPIWindowHandle = m.WParam; //Here's where we set the Skype Handle AttachSuccess(m); return true; } } return false; //Defer all other messages } And here is my DLL import and Sending code [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)] static extern IntPtr SendMessageA(IntPtr hWnd, UInt32 Msg, IntPtr wParam, ref MsgHelper.COPYDATASTRUCT lParam); public static void Command(string c) { if (c.Last() != '\0') c += "\0"; //Make string null terminated Console.WriteLine(); MsgHelper.COPYDATASTRUCT cda = new MsgHelper.COPYDATASTRUCT(); cda.dwData = new IntPtr(0); cda.lpData = c; cda.cbData = c.Length + 1; Marshal.GetLastWin32Error(); //Clear last error Console.WriteLine(SendMessageA(mHelper.SkypeAPIWindowHandle, MsgHelper.WM_COPYDATA, IntPtr.Zero, ref cda)); Console.WriteLine(Marshal.GetLastWin32Error()); } COPYDATASTRUCT is: public struct COPYDATASTRUCT { public IntPtr dwData; public int cbData; [MarshalAs(UnmanagedType.LPStr)] public string lpData; } I think that's everything. Let me know if I forgot something. Any ideas why I'm getting the 1400?

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL system is "freezing" every two days. By "freezing" I mean the following: it doesn't respond to ping, we can't log in with SSH, we get no answer from MySQL, and there is no entry in the error logs, neither from Linux nor from MySQL. We have already switched to completely new hardware and have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (an iptables rule). We can restart the server from another server using rsyslog (www.rsyslog.com) (a software reset). Could someone help me by giving me some pointers on what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.
    Our system parameters and settings:
    System-Memory: 12GB
    Processor: Intel 7-920 Quadcore
    Operating system: Debian 5 (lenny) 64bit
    MySQL 5.1.49
    Databases: (a) a small phpbb forum, (b) a 6GB database with 3 tables and about 15 million rows
    my.cnf
    # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = our-ip-address # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 256K thread_cache_size = 32 max_connections = 300 table_cache = 2048 #thread_concurrency = 4 # Used for InnoDB tables recommended to 50%-80% available memory innodb_buffer_pool_size = 6G # 20MB sometimes larger innodb_additional_mem_pool_size = 20M # 8M-16M is good for most situations innodb_log_buffer_size = 8M # Disable XA support because we do not use it innodb-support-xa = 0 # 1 is default wich is 100% secure but 2 offers better performance innodb_flush_log_at_trx_commit = 1 innodb_flush_method = O_DIRECT #innodb_thread_concurency = 8 # Recommended 64M - 512M depending on server size innodb_log_file_size = 512M # One file per table innodb_file_per_table # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M #query_cache_type = 1 #query_cache_min_res_unit= 2K #join_buffer_size = 1M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. #server-id = 1 log_bin = /var/log/mysql/mysql-bin.log # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian! expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # * InnoDB plugin # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code. # It has many improvements and better performances than the built-in InnoDB storage engine. # Please read http://www.innodb.com/products/innodb_plugin/ for more information. # Uncommenting the two following lines to use the InnoDB plugin. ignore_builtin_innodb plugin-load=innodb=ha_innodb_plugin.so # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # !includedir /etc/mysql/conf.d/
    UPDATE: After installing sysstat and configuring it to collect data every minute, I have the following data, generated with sar. The log file is too big to post here, so I uploaded it to box.net; the link is http://www.box.net/shared/xc6rh7qqob
    SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem is.
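    For anyone repeating the sysstat exercise from the update, the Debian 5 setup is small enough to show in full; the paths below are the stock Debian ones, and the one-minute interval matches what was described above:

        # /etc/default/sysstat -- turn the collector on
        ENABLED="true"
        # /etc/cron.d/sysstat -- sample every minute instead of the default 10
        * * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
        # after a freeze, replay the minutes leading up to it (XX = day of month)
        sar -A -f /var/log/sysstat/saXX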

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. The controller can operate on 2 NSManagedObjectContexts:
    NSManagedObjectContext *mainMoc, an instance variable for the main thread. This context is used only for quick UI access by the main thread or via the dispatch_get_main_queue() global queue.
    NSManagedObjectContext *bgMoc for background tasks (importing and fetching data for the NSFetchedResultsController for tables). These background tasks are fired ONLY on a user-defined queue: dispatch_queue_t bgQueue (an instance variable in the database controller object).
    Fetching data for tables is done in the background so the UI is not blocked while bigger or more complicated predicates are evaluated. Example fetching code for the NSFetchedResultsController in my table view controllers:
    -(void)fetchData{ dispatch_async([CDdb db].bgQueue, ^{ NSError *error = nil; [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate]; if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) { NSSLog(@"Unresolved error in fetchData %@", error); } if (!initial_fetch_attampted)initial_fetch_attampted = YES; fetching = NO; dispatch_async(dispatch_get_main_queue(), ^{ [self.table reloadData]; [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES]; }); }); } // end of fetchData function
    bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification:
    - (void)bgMocDidSave:(NSNotification *)saveNotification { // CDdb - bgMoc didsave - merging changes with main mainMoc dispatch_async(dispatch_get_main_queue(), ^{ [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification]; // Extra notification for some other, potentially interested clients [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification]; }); }
    - (void)mainMocDidSave:(NSNotification *)saveNotification { // CDdb - main mainMoc didSave - merging changes with bgMoc dispatch_async(self.bgQueue, ^{ [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification]; }); }
    The NSFetchedResultsController delegate has only one method implemented (for simplicity):
    - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { dispatch_async(dispatch_get_main_queue(), ^{ [self fetchData]; }); }
    This way I am trying to follow the Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean, for at least 2 reasons: bgQueue does not necessarily fire on the same thread after suspension, but since it is serial, it should not matter much (there are never 2 threads trying to access the bgMoc NSManagedObjectContext dedicated to it). And sometimes table view data source methods will ask the NSFetchedResultsController for info from bgMoc (since the fetch is done on bgQueue), like section counts, fetched object counts per section, etc.
    Even with these flaws, this approach works pretty well for about 95% of the application's running time, until ... AND HERE GOES MY QUESTION:
    Sometimes, very randomly, the application freezes but does not crash. It does not respond to any touch, and the only way to bring it back to life is to restart it completely (switching to the background and back does not help). No exception is thrown and nothing is printed to the console (I have breakpoints set for all exceptions in Xcode).
    I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously only (using dispatch_async everywhere), which makes me think it is not just a standard deadlock. Are there any hints on how I could get more info about what is going on? Some extra debug flags, Instruments tricks, build settings, etc.? Any suggestions on what the cause could be are very much appreciated, as well as (or) pointers on how to implement background fetching for an NSFetchedResultsController and background importing in a better way.
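    Since the hang is reproducible but silent, one technique that tends to beat a profiler here is a whole-process stack snapshot taken at the moment of the freeze: it shows immediately whether the main thread is parked inside a dispatch_sync, a Core Data lock, or something unrelated. A sketch, assuming the process is named MyApp and is reachable from a Mac Terminal (Simulator) or an attached debugger:

        # 5-second sample of every thread in the frozen process (macOS / Simulator)
        sample MyApp 5 -file /tmp/MyApp-freeze.txt
        # or, if a debugger is already attached when the freeze hits:
        #   (gdb)  thread apply all bt
        #   (lldb) bt all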

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks
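    The tuner's scary 404% line is just arithmetic: roughly global buffers + max_connections x per-thread buffers, here 122M + 100 x 8.6M, which is about 978M. Shrinking either factor lowers the ceiling. A sketch of values sized for a 250MB box; every number below is an assumption to verify under load, not a recommendation:

        max_connections         = 25    # the 100-thread ceiling is the main multiplier in the 978M figure
        sort_buffer_size        = 1M    # per-thread; 4M times many threads adds up fast
        read_buffer_size        = 1M    # per-thread
        tmp_table_size          = 32M   # 128M in-memory temp tables are oversized for 250MB RAM
        table_cache             = 128   # the tuner shows a 0% table cache hit rate at 32
        query_cache_size        = 32M   # middle ground between the prunes and the memory ceiling
        innodb_buffer_pool_size = 32M   # comfortably covers the 29M of InnoDB data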

    Read the article

  • RAID1 rebuild fails due to disk errors

    - by overlord_tm
    Quick info: Dell R410 with 2x500GB drives in RAID1 on H700 Adapter Recently one of drives in RAID1 array on server failed, lets call it Drive 0. RAID controller marked it as fault and put it offline. I replaced faulty disk with new one (same series and manufacturer, just bigger) and configured new disk as hot spare. Rebuild from Drive1 started immediately and after 1.5 hour I got message that Drive 1 failed. Server was unresponsive (kernel panic) and required reboot. Given that half hour before this error rebuild was at about 40%, I estimated that new drive is not in sync yet and tried to reboot just with Drive 1. RAID controller complained a bit about missing RAID arrays, but it found foreign RAID array on Drive 1 and I imported it. Server booted and it runs (from degraded RAID). Here is SMART data for disks. Drive 0 (the one that failed first) ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 1 Raw_Read_Error_Rate POSR-K 200 200 051 - 1 3 Spin_Up_Time POS--K 142 142 021 - 3866 4 Start_Stop_Count -O--CK 100 100 000 - 12 5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0 7 Seek_Error_Rate -OSR-K 200 200 000 - 0 9 Power_On_Hours -O--CK 086 086 000 - 10432 10 Spin_Retry_Count -O--CK 100 253 000 - 0 11 Calibration_Retry_Count -O--CK 100 253 000 - 0 12 Power_Cycle_Count -O--CK 100 100 000 - 11 192 Power-Off_Retract_Count -O--CK 200 200 000 - 10 193 Load_Cycle_Count -O--CK 200 200 000 - 1 194 Temperature_Celsius -O---K 112 106 000 - 31 196 Reallocated_Event_Count -O--CK 200 200 000 - 0 197 Current_Pending_Sector -O--CK 200 200 000 - 0 198 Offline_Uncorrectable ----CK 200 200 000 - 0 199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0 200 Multi_Zone_Error_Rate ---R-- 200 198 000 - 3 And Drive 1 (the drive which was reported healthy from controller until rebuild was attempted) ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 1 Raw_Read_Error_Rate POSR-K 200 200 051 - 35 3 Spin_Up_Time POS--K 143 143 021 - 3841 4 Start_Stop_Count -O--CK 100 100 000 - 12 5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0 7 Seek_Error_Rate -OSR-K 200 200 000 - 0 9 Power_On_Hours -O--CK 086 086 000 - 10455 10 Spin_Retry_Count -O--CK 100 253 000 - 0 11 Calibration_Retry_Count -O--CK 100 253 000 - 0 12 Power_Cycle_Count -O--CK 100 100 000 - 11 192 Power-Off_Retract_Count -O--CK 200 200 000 - 10 193 Load_Cycle_Count -O--CK 200 200 000 - 1 194 Temperature_Celsius -O---K 114 105 000 - 29 196 Reallocated_Event_Count -O--CK 200 200 000 - 0 197 Current_Pending_Sector -O--CK 200 200 000 - 3 198 Offline_Uncorrectable ----CK 100 253 000 - 0 199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0 200 Multi_Zone_Error_Rate ---R-- 100 253 000 - 0 In extended error logs from SMART I found: Drive 0 has only one error Error 1 [0] occurred at disk power-on lifetime: 10282 hours (428 days + 10 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 10 -- 51 00 18 00 00 00 6a 24 20 40 00 Error: IDNF at LBA = 0x006a2420 = 6956064 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 61 00 60 00 f8 00 00 00 6a 24 20 40 00 17d+20:25:18.105 WRITE FPDMA QUEUED 61 00 18 00 60 00 00 00 6a 24 00 40 00 17d+20:25:18.105 WRITE FPDMA QUEUED 61 00 80 00 58 00 00 00 6a 23 80 40 00 17d+20:25:18.105 WRITE FPDMA QUEUED 61 00 68 00 50 00 00 00 6a 23 18 40 00 17d+20:25:18.105 WRITE FPDMA QUEUED 61 00 10 00 10 00 00 00 6a 23 00 40 00 17d+20:25:18.104 WRITE FPDMA QUEUED But Drive 1 has 883 errors. I see only few last ones and all errors I can see look like this: Error 883 [18] occurred at disk power-on lifetime: 10454 hours (435 days + 14 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 01 -- 51 00 80 00 00 39 97 19 c2 40 00 Error: AMNF at LBA = 0x399719c2 = 966203842 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 80 00 00 00 00 39 97 19 80 40 00 1d+00:25:57.802 READ FPDMA QUEUED 2f 00 00 00 01 00 00 00 00 00 10 40 00 1d+00:25:57.779 READ LOG EXT 60 00 80 00 00 00 00 39 97 19 80 40 00 1d+00:25:55.704 READ FPDMA QUEUED 2f 00 00 00 01 00 00 00 00 00 10 40 00 1d+00:25:55.681 READ LOG EXT 60 00 80 00 00 00 00 39 97 19 80 40 00 1d+00:25:53.606 READ FPDMA QUEUED Given those errors, is there any way I can rebuild RAID back, or should I make backup, shutdown server, replace disks with new ones and restore it? What about if I dd faulty disk to new one from linux running on USB/CD? Also, if anyone have more experiences, what could be causes for those errors? Crappy controller or disks? Disks are about 1 year old, but it is pretty unbelievable to me that both would die within so short timespan.
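    On the dd idea: with Drive 1 logging AMNF read errors, a plain dd will stall or abort at the first unreadable sector. GNU ddrescue is the usual tool for this copy, because it grabs everything readable first and only then retries the bad spots, keeping a map file so the run can be resumed. A sketch, booted from a live CD with the failing disk as /dev/sda and the replacement as /dev/sdb; the device names are assumptions, so double-check them before writing anything:

        # pass 1: copy all readable areas, skip the bad ones for now
        ddrescue -f -n /dev/sda /dev/sdb /root/rescue.map
        # pass 2: go back and retry the remaining bad sectors a few times
        ddrescue -f -r3 /dev/sda /dev/sdb /root/rescue.map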

    Read the article

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow and am reposting the question, adding my.cnf at the end of the post. So far in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occurred (every lie has a grain of truth in it :) This time the trick doesn't work, so I'm turning to you for help. I'm compiling the MySQL server from scratch with zc.buildout (www . buildout . org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord . org) script, and when used on the deployment server it'll need to be launched with root permissions (so that the nginx server, launched with the same script, has access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The MySQL datadir and all the MySQL server binary files have 777 permissions, the table mysql.plugin does exist and has 777 permissions (so why can't it open the mysql.plugin table?), and "sudo touch mysql_datadir/tmp/file" does create a file (so why can't it create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz?). chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) have no effect: when I launch it as a casual user it starts, when as root it fails. Does the MySQL server, even when started as root, try to operate as another user, let's say the 'mysql' user? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly. I do know that it might be a better idea to simply launch nginx as root and mysql as just a user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I wanted initially, so as to have a proof of concept that it's possible. And this is the generated error: 091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only 091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13) 091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13 091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error. 091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied 091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
091213 20:02:55 [ERROR] Aborting 091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete 091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended My my.cnf (the basedir and datadir(including tempdir) have chmod -R 777 permissions) : [client] socket = /home/toinbis/.../runtime/var/pids/mysql.sock port = 8002 [mysqld_safe] socket = /home/toinbis/.../runtime/var/pids/mysql.sock nice = 0 [mysqld] # # * Basic Settings # socket = /home/toinbis/.../runtime/var/pids/mysql.sock port = 8002 pid-file = /home/toinbis/.../runtime/var/pids/mysql.pid basedir = /home/toinbis/.../parts/mysql datadir = /home/toinbis/.../runtime/mysql_datadir tmpdir = /home/toinbis/.../runtime/mysql_datadir/tmp skip-external-locking bind-address = 127.0.0.1 log-error =/home/toinbis/.../runtime/logs/mysql_errorlog # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 32M thread_stack = 128K thread_cache_size = 8 myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. #server-id = 1 #log_bin = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log #binlog_format = ROW #read_only = 0 #expire_logs_days = 10 #max_binlog_size = 100M #sync_binlog = 1 #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size=64M innodb_log_file_size=16M innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit=1 innodb_file_per_table innodb_locks_unsafe_for_binlog=1 [mysqldump] quick quote-names max_allowed_packet = 32M [mysql] #no-auto-rehash # faster start of mysql but no tab completion [isamchk] key_buffer = 16M Any ideas much appreciated! regards, to P.S. sorry for messy hyperlinks, it's my first post and anti-spam feature of SF doesn't allow to post them properly :)
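    On the "does a root-started mysqld try to operate as another user" question: it can, and that is worth ruling out first. mysqld honours any user= directive it finds, and it reads system-wide config files (typically /etc/my.cnf, and on Debian-style builds /etc/mysql/my.cnf) before a buildout-local one, so a stray system file can silently switch the effective user; at that point the 777 leaf directories do not help if some ancestor such as /home/toinbis lacks the execute bit for that user. A sketch of how to pin both down; the paths are the ones from the question, the rest are assumptions:

        # 1. see exactly which config files this mysqld reads, and in what order
        /home/toinbis/.../parts/mysql/libexec/mysqld --help --verbose | head -n 20

        # 2. start with an explicit defaults file and an explicit user
        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe \
            --defaults-file=/home/toinbis/.../my.cnf --user=mysql &

        # 3. let that user traverse every ancestor directory and own the datadir
        sudo chmod o+x /home/toinbis
        sudo chown -R mysql:mysql /home/toinbis/.../runtime/mysql_datadir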

    Read the article

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on win32_logonsession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, now, I need the users's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#): RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery --- it can take up to several seconds to execute. Here's a quick and dirty command line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine: using System; using System.Collections.Generic; using System.Text; using System.Management; namespace EnumUsersTest { class Program { static void Main( string[] args ) { ManagementScope scope = new ManagementScope( "\\\\.\\root\\cimv2" ); string queryString = "select * from win32_logonsession"; // for all sessions //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only ManagementObjectSearcher sessionQuery = new ManagementObjectSearcher( scope, new SelectQuery( queryString ) ); ManagementObjectCollection logonSessions = sessionQuery.Get(); foreach ( ManagementObject logonSession in logonSessions ) { string logonID = logonSession["LogonId"].ToString(); Console.WriteLine( "=== {0}, type {1} ===", logonID, logonSession["LogonType"].ToString() ); RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); ManagementObjectSearcher userQuery = new ManagementObjectSearcher( scope, relatedQuery ); ManagementObjectCollection users = userQuery.Get(); foreach ( ManagementObject user in users ) { PrintProperties( user.Properties ); } } Console.WriteLine( "\nDone! Press a key to exit..." ); Console.ReadKey( true ); } private static void PrintProperty( PropertyData pd ) { string value = "null"; string valueType = "n/a"; if ( null == pd.Value ) value = "null"; if ( pd.Value != null ) { value = pd.Value.ToString(); valueType = pd.Value.GetType().ToString(); } Console.WriteLine( " \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value ); } private static void PrintProperties( PropertyDataCollection properties ) { foreach ( PropertyData pd in properties ) { PrintProperty( pd ); } } } } So... is there way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
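    A lower-level alternative that sidesteps both problems, offered as a sketch rather than tested code: secur32.dll's LsaGetLogonSessionData takes the LUID you already hold as LogonId and hands back the session's SID directly, with no WMI association query and no domain round-trip, so it also works for cached-credential logons. The struct below is deliberately truncated after the Sid field, which is safe because the returned buffer is only read:

        // requires: using System.Runtime.InteropServices; using System.Security.Principal;
        [StructLayout(LayoutKind.Sequential)]
        struct LUID { public uint LowPart; public int HighPart; }

        [StructLayout(LayoutKind.Sequential)]
        struct LSA_UNICODE_STRING { public ushort Length; public ushort MaximumLength; public IntPtr Buffer; }

        [StructLayout(LayoutKind.Sequential)]
        struct SECURITY_LOGON_SESSION_DATA
        {
            public uint Size;
            public LUID LogonId;
            public LSA_UNICODE_STRING UserName;
            public LSA_UNICODE_STRING LogonDomain;
            public LSA_UNICODE_STRING AuthenticationPackage;
            public uint LogonType;
            public uint Session;
            public IntPtr Sid; // PSID -- the value we are after
            // LogonTime and later fields omitted; nothing past Sid is read here
        }

        [DllImport("secur32.dll")]
        static extern uint LsaGetLogonSessionData(ref LUID logonId, out IntPtr sessionData);

        [DllImport("secur32.dll")]
        static extern uint LsaFreeReturnBuffer(IntPtr buffer);

        // logonId comes from the WMI LogonId property, e.g. ulong.Parse(logonID)
        static string SidForLogonId(ulong logonId)
        {
            var luid = new LUID { LowPart = (uint)(logonId & 0xFFFFFFFF), HighPart = (int)(logonId >> 32) };
            IntPtr buf;
            if (LsaGetLogonSessionData(ref luid, out buf) != 0) return null; // 0 == STATUS_SUCCESS
            try
            {
                var data = (SECURITY_LOGON_SESSION_DATA)Marshal.PtrToStructure(buf, typeof(SECURITY_LOGON_SESSION_DATA));
                return data.Sid == IntPtr.Zero ? null : new SecurityIdentifier(data.Sid).Value;
            }
            finally { LsaFreeReturnBuffer(buf); }
        }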

    Read the article

  • Problem installing Ubuntu 10.04 64 bit side by side with Vista by using a bootable USB drive. What n

    - by Adam Siddhi
    What happened:
    I decided to install Ubuntu 10.04 64 bit side by side with Vista Home Premium (I guess on another partition) with a USB stick. I found instructions on how to do this here: https://help.ubuntu.com/community/Installation/FromUSBStick
    To create the bootable USB drive I had to download a program called Unetbootin. That process was simple enough. All I had to do was choose the disk image option, select the ubuntu-10.04-desktop-amd64.iso image, make sure it recognized my USB drive and then press OK. It takes only a few minutes to create a working bootable USB drive. Then I had to restart my computer, enter the BIOS, select my USB drive as the first boot drive, save the options and continue with booting up. After this Ubuntu actually loads up. I think this is known as the Live version of Ubuntu, so you can try it out before fully installing it. Anyway, on the Ubuntu 10.04 desktop I saw an installer. I clicked it and began the installation process. Just so you know, I tried installing it 2 times. I will explain what happened each time:
    The first time I tried installing Ubuntu 10.04 I got stuck at step 4 of 7. I remember selecting the last option in the window, which was Specify Partitions Manually (Advanced). I made my partition for Ubuntu like 52 gigs. I clicked forward and a little pop-up window appeared saying Please Wait. The installation process stalled on this window, so I closed out of it and quit the installation process. At this point I was worried, because I had already selected the partition size and assumed it had started making the partition. Since it stalled, I had to quit out though.
    Anyway, once again I reached step 4 of 7 and decided to select the first option, which is Install them side by side, choosing between them each startup. I figured this was the safe way to go. I did that and the pop-up window saying Please Wait popped up again, but lasted only about 10 seconds. Then I got to, I guess, step 6, where it asks you to enter your desired name and password. Did that and clicked forward. The Ubuntu 10.04 installation load screen appeared and the loading bar at the bottom started filling up. I got to 83% and it stalled during the Importing other profile information process (I think it was called this; I had the option to do this during, I think, step 6). At this point I decided to stop the installation process. I was getting very nervous. I tried to restart the computer, but all that happened was that Ubuntu restarted. I finally got the computer to restart. I was pretty sure I had screwed something up big time by this point.
    As my computer was restarting I entered the BIOS again and switched back to booting from my main hard drive containing Vista. Saved it and continued the boot process. My worst fears were confirmed as Vista would not boot up. I mean, I saw the little Microsoft Windows choppy animated green loading bar at the bottom of the screen and then boom! It decided to restart. When it restarted I had the option to run a memory test check to see if there was anything that needed to be repaired. That took like 20 minutes, and at the end I saw that I did indeed have to repair something. I had to go through 2 repair processes. After each I had to restart the computer. The 2nd time it went through the repair process, it said that it could not fully repair the damage. I was scared and restarted, but Vista did load up.
    I got to my desktop and saw a message saying something like "Repairs have been made, please restart for changes to take effect". I noticed that some notification icons were missing and I could not hear volume in a video. Things were a bit funky. So I did restart, and here I am.
    Now what?!
    Since I got back into Vista and thankfully have a working Internet connection, I am trying to find answers to my problem (that is why I am writing this post). I am scared that I have partitioned my hard drive 2 times, after researching installing Ubuntu 10.04 and seeing this post: http://techie-buzz.com/foss/ubuntu-10-04-lts-installation-guide.html
    The author shows screenshots of installing Ubuntu 10.04. He shows the image of step 4 of 7 with a caption at the bottom. I will recreate it below:
    Select a partitioning option. Unless you want to format all the hard drive and install Ubuntu afresh, select the last option and proceed.
    Questions:
    If I have indeed partitioned my HD 2 times (which I am sure I have), how do I get to a point where I can see all my bad, unfinished Ubuntu partitions and get rid of them? How do I clean this big mess up? And how can I ensure that this mess will not happen next time I try installing Ubuntu 10.04?
    Thank you, Adam
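    For the "see the bad partitions" part, the same live USB is enough; nothing has to be installed. A sketch, with the caveat that /dev/sda is an assumption and may differ on this machine:

        # from the live session: list every partition the two installer runs created
        sudo fdisk -l /dev/sda

    From the live desktop, System > Administration > GParted shows the same thing graphically; the stray ext4/swap partitions can be deleted there, after which Vista's own Disk Management (or GParted again) can grow the NTFS partition back over the freed space.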

    Read the article

  • How do I configure OpenVPN for accessing the internet with one NIC?

    - by Lekensteyn
    I've been trying to get OpenVPN to work for three days. After reading many questions, the HOWTO, the FAQ and even parts of a guide to Linux networking, I cannot get my an Internet connection to the Internet. I'm trying to set up a OpenVPN server on a VPS, which will be used for: secure access to the Internet bypassing port restrictions (directadmin/2222 for example) an IPv6 connection (my client does only have IPv4 connectivity, while the VPS has both IPv4 and native IPv6 connectivity) (if possible) I can connect to my server and access the machine (HTTP), but Internet connectivity fails completely. I'm using ping 8.8.8.8 for testing whether my connection works or not. Using tcpdump and iptables -t nat -A POSTROUTING -j LOG, I can confirm that the packets reach my server. If I ping to 8.8.8.8 on the VPS, I get an echo-reply from 8.8.8.8 as expected. When pinging from the client, I do not get an echo-reply. The VPS has only one NIC: etho. It runs on Xen. Summary: I want to have a secure connection between my laptop and the Internet using OpenVPN. If that works, I want to have IPv6 connectivity as well. Network setup and software: Home laptop (eth0: 192.168.2.10) (tap0: 10.8.0.2) | | (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1) | wifi | router/gateway (gateway 192.168.2.1) | INTERNET | VPS (eth0:1.2.3.4) (gateway, tap0: 10.8.0.1) (running Debian 6; OpenVPN 2.1.3-2) wifi and my home router should not cause problems since all traffic goes encrypted over UDP port 1194. I've turned IP forwarding on: # echo 1 > /proc/sys/net/ipv4/ip_forward iptables has been configured to allow forwarding traffic as well: iptables -F FORWARD iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -A FORWARD -j DROP I've tried each of these rules separately without luck (flushing the chains before executing): iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to 1.2.3.4 iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE route -n before (server): 1.2.3.4 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 1.2.3.4 0.0.0.0 UG 0 0 0 eth0 route -n after (server): 1.2.3.4 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.8.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tap0 0.0.0.0 1.2.3.4 0.0.0.0 UG 0 0 0 eth0 route -n before (client): 192.168.2.0 0.0.0.0 255.255.255.0 U 2 0 0 wlan0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0 0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0 route -n after (client): 1.2.3.4 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0 10.8.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tap0 192.168.2.0 0.0.0.0 255.255.255.0 U 2 0 0 wlan0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0 0.0.0.0 10.8.0.1 128.0.0.0 UG 0 0 0 tap0 128.0.0.0 10.8.0.1 128.0.0.0 UG 0 0 0 tap0 0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0 SERVER config proto udp dev tap ca ca.crt cert server.crt key server.key dh dh1024.pem server 10.8.0.0 255.255.255.0 push "redirect-gateway def1" ifconfig-pool-persist ipp.txt keepalive 10 120 tls-auth ta.key 0 comp-lzo user nobody group nobody persist-key persist-tun log-append openvpn-log verb 3 mute 10 CLIENT config dev tap proto udp remote 1.2.3.4 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client.crt key client.key ns-cert-type server tls-auth ta.key 1 comp-lzo verb 3 mute 20 traceroute 8.8.8.8 works as expected (similar output without OpenVPN activated): 1 10.8.0.1 (10.8.0.1) 24.276 ms 26.891 ms 29.454 ms 2 gw03.sbp.directvps.nl (178.21.112.1) 31.161 ms 31.890 ms 34.458 ms 3 ge0-v0652.cr0.nik-ams.nl.as8312.net 
(195.210.57.105) 35.353 ms 36.874 ms 38.403 ms 4 ge0-v3900.cr0.nik-ams.nl.as8312.net (195.210.57.53) 41.311 ms 41.561 ms 43.006 ms 5 * * * 6 209.85.248.88 (209.85.248.88) 147.061 ms 36.931 ms 28.063 ms 7 216.239.49.36 (216.239.49.36) 31.109 ms 33.292 ms 216.239.49.28 (216.239.49.28) 64.723 ms 8 209.85.255.130 (209.85.255.130) 49.350 ms 209.85.255.126 (209.85.255.126) 49.619 ms 209.85.255.122 (209.85.255.122) 52.416 ms 9 google-public-dns-a.google.com (8.8.8.8) 41.266 ms 44.054 ms 44.730 ms If you have any suggestions, please comment or answer. Thanks in advance.
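    Since pings from the client demonstrably reach the server, the open question is where the echo-replies die: either they never leave eth0, or they come back and never get routed onto tap0. Two tcpdump sessions on the VPS answer that quickly; the addresses below are the ones from the question:

        # terminal 1: does the ping leave eth0, and does 8.8.8.8 answer?
        tcpdump -ni eth0 icmp
        # terminal 2: does the reply make it back onto the tunnel?
        tcpdump -ni tap0 icmp
        # if replies show on eth0 but not tap0, check that the NAT rule matches:
        # the pkts counter should climb while the client pings
        iptables -t nat -L POSTROUTING -v -n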

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.
    Summary
    Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated asp.net application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience in asp.net's built-in membership provider stuff, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the asp.net membership provider mindset and develop our own membership provider.
    Our Application
    We have a fairly old asp.net application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to 1 group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by Admins (web-based UI) and, as said earlier, granted / denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .net authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.
    Our Rewrite
    We've been tasked to integrate more tightly with LDAP: they want us to tie groups in our application to groups (or whatever types of containers LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to Groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been granted the go-ahead to completely rip out the user authentication and authorization code and completely re-do it.
    Our Problem
    The problem is that none of us have any experience with asp.net membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET Membership Provider and Role Manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET Membership provider & Role Management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.
    Our Requirements
    Just a quick n dirty list:
    - Maintain the ability to have a db of users and authenticate them, and give admins (only, not users) the ability to CRUD users
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between Groups as they exist in our app / db and the Groups/Containers as they exist in LDAP
    - .net 3.5 is being used (mix of asp.net webforms and asp.net mvc)
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing)
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via ldap) users and groups
    - We have to be able to auth via LDAP when it's configured to do so
    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz, here's why". Links regarding asp.net membership provider and role management stuff are very welcome; most of the stuff I'm finding is 5+ years old.
    Edit: Added some stuff to "Our Rewrite"
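    One data point on the LDAP requirement, since it fits the .net 3.5 constraint above: the piece a custom MembershipProvider would wrap is essentially a single call, ValidateCredentials from System.DirectoryServices.AccountManagement (new in 3.5). A sketch of just that core; the host name is a placeholder, and a real provider still needs the remaining MembershipProvider overrides stubbed around it:

        using System.DirectoryServices.AccountManagement; // .NET 3.5+

        // inside the custom provider's ValidateUser(username, password):
        using (var ctx = new PrincipalContext(ContextType.Domain, "ldap.example.local")) // placeholder host
        {
            bool ok = ctx.ValidateCredentials(username, password);
            // the group-to-group mapping can reuse the same context, e.g.:
            //   GroupPrincipal.FindByIdentity(ctx, "OurAppAdmins")
            return ok;
        }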

    Read the article

  • LUA: A couple of pattern matching issues

    - by Josh
    I'm fairly new to lua programming, but I'm also a quick study. I've been working on a weather forecaster for a program that I use, and it's working well, for the most part. Here is what I have so far. (Pay no attention to the zs.stuff. That is program specific and has no bearing on the lua coding.) if not http then http = require("socket.http") end local locale = string.gsub(zs.params(1),"%s+","%%20") local page = http.request("http://www.wunderground.com/cgi-bin/findweather/getForecast?query=" .. locale .. "&wuSelect=WEATHER") local location = string.match(page,'title="([%w%s,]+) RSS"') --print("Gathering weather information for " .. location .. ".") --local windspeed = string.match(page,'<span class="nobr"><span class="b">([%d.]+)</span>&nbsp;mph</span>') --print(windspeed) local condition = string.match(page, '<td class="vaM taC"><img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />') --local image = string.match(page, '<img src="http://icons-ecast.wxug.com/i/c/a/(.+).gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />') local temperature = string.match(page,'pwsvariable="tempf" english="&deg;F" metric="&deg;C" value="([%d.]+)">') local humidity = string.match(page,'pwsvariable="humidity" english="" metric="" value="(%d+)"') zs.say(location) --zs.say("image ./Images/" .. image .. ".gif") zs.say("<color limegreen>Condition:</color> <color white>" .. condition .. "</color>") zs.say("<color limegreen>Temperature: </color><color white>" .. temperature .. "F</color>") zs.say("<color limegreen>Humidity: </color><color white>" .. humidity .. "%</color>") My main issue is this: I changed the 'condition' and added the 'image' variables to what they are now. Even though the line it's supposed to be matching comes directly from the webpage, it fails to match at all. So I'm wondering what it is I'm missing that's preventing this code from working. If I take out the <td class="vaM taC">< img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif" it'll match condition flawlessly. (For whatever reason, I can't get the above line to display correctly, but there is no space between the `< and img) Can anyone point out what is wrong with it? Aside from the pattern matching, I assure you the line is verbatim from the webpage. Another question I had is the ability to match across line breaks. Is there any possible way to do this? The reason why I ask is because on that same page, a few of the things I need to match are broken up on separate lines, and since the actual pattern I'm wanting to match shows up in other places on the page, I need to be able to match across line breaks to get the exact pattern. I appreciate any help in this matter!
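    One likely culprit, offered as a guess from the pattern text alone: '-' is a magic (lazy-repetition) character in Lua patterns, so the literal hyphens in icons-ecast are not matched literally; as a pattern, 's-e' cannot match the literal text "s-e", which would explain why dropping the img src part makes the match succeed. Escaping the magic characters fixes that, and the line-break question largely answers itself, because '.' and '%s' in Lua patterns already match newlines. A sketch:

        -- escape every pattern-magic character so a literal string matches literally
        local function escape(s)
          return (s:gsub("[%^%$%(%)%%%.%[%]%*%+%-%?]", "%%%0"))
        end

        local url = escape("http://icons-ecast.wxug.com/i/c/a/")
        local image = string.match(page, '<img src="' .. url .. '([%w_]+)%.gif"')

        -- matching across line breaks: '.-' spans newlines in Lua, so a capture
        -- like this grabs the whole table cell even if it wraps several lines
        local cell = string.match(page, '<td class="vaM taC">(.-)</td>')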

    Read the article

< Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >