Search Results

Search found 16496 results on 660 pages for 'long cheng'.

  • Is it possible to not trigger my normal search highlighting when using search as a movement?

    - by Nathan Long
    When I do a search in vim, I like to have my results highlighted and super-visible, so I give them a bright yellow background and a black foreground in my .vimrc:

        " When highlighting search terms, make sure text is contrasting colors
        :highlight Search guibg=yellow guifg=black

    (This is for GUI versions of vim, like MacVim or Gvim; for command-line vim, you'd use ctermbg and ctermfg.) But I sometimes use search as a movement, as in c/foo - "change from the cursor to the next occurrence of foo." In that case, I don't want all the occurrences of foo to be highlighted. Can I turn off highlighting in cases where search is used as a movement?

    Read the article

  • Can a FreePBX backup be restored to a different version?

    - by Tim Long
    I run a small PBX based on the FreePBX distro of Asterisk. The installation has been steadily upgraded, but for various reasons we want to start again on a new server with a clean install from the distribution media. Will I be able to take a backup from the old server and restore it to the new server, even though the installs are different versions? How sensitive are FreePBX backups to the build version? Is it possible to get at least a partial restore?

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is an HP G8 with 2 Intel Xeon E5-2650 processors and 32 GB of RAM. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive - I notice high CPU usage during certain times of day (8% ~ 1500%+) and very low memory usage (7 ~ 15%) based on the 'top' command. When the CPU usage passes 1000%, that is usually when the app dies. I'm trying to see what I'm doing wrong with the config file; hopefully one of the experts can chime in and let me know what they think. See below for the my.cnf file:

        [mysqld]
        default-storage-engine=InnoDB
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        #user=mysql
        large-pages
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_connections=275
        tmp_table_size=1G
        key_buffer_size=384M
        key_buffer=384M
        thread_cache_size=1024
        long_query_time=5
        low_priority_updates=1
        max_heap_table_size=1G
        myisam_sort_buffer_size=8M
        concurrent_insert=2
        table_cache=1024
        sort_buffer_size=8M
        read_buffer_size=5M
        read_rnd_buffer_size=6M
        join_buffer_size=16M
        table_definition_cache=6k
        open_files_limit=8k
        slow_query_log
        #skip-name-resolve

        # Innodb Settings
        innodb_buffer_pool_size=18G
        innodb_thread_concurrency=0
        innodb_log_file_size=1G
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_lock_wait_timeout=50
        innodb_file_per_table
        #innodb_buffer_pool_instances=4
        # eliminating double buffering
        innodb_flush_method = O_DIRECT
        flush_time=86400
        innodb_additional_mem_pool_size=40M
        #innodb_io_capacity = 5000
        #innodb_read_io_threads = 64
        #innodb_write_io_threads = 64

        # increase until threads_created doesn't grow anymore
        thread_cache=1024
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=256M

        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 0
        wait_timeout = 1800
        connect_timeout = 10
        interactive_timeout = 60

        [mysqldump]
        max_allowed_packet=32M

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        log-slow-queries=/var/log/mysql/slow-queries.log
        long_query_time = 1
        log-queries-not-using-indexes

    We connect to one database with 75 tables; the largest table has 1,150,000 entries and the second largest has 128,036 entries. I have also verified that our PHP queries are optimized as best as possible. Reference - MySQLTuner output:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.69-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 420M (Tables: 75)
        [!!] Total fragmented tables: 75
        -------- Security Recommendations --------------------------------------------
        [!!] User '[email protected]' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
        [--] Reads / Writes: 68% / 32%
        [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
        [!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
        [OK] Slow queries: 0% (472/8M)
        [OK] Highest usage of available connections: 66% (183/275)
        [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
        [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
        [OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
        [!!] Query cache prunes per day: 553614
        [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
        [!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
        [OK] Thread cache hit rate: 74% (183 created / 705 connections)
        [OK] Table cache hit rate: 97% (231 open / 238 opened)
        [OK] Open file limit used: 0% (17/8K)
        [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
        [OK] InnoDB data size / buffer pool: 420.9M/18.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 256M) [see warning above]

    Thanks in advance for your help!

    Read the article

  • Why is an Ext4 disk check so much faster than NTFS?

    - by Brendan Long
    I had a situation today where I restarted my computer and it said I needed to check the disk for consistency. About 10 minutes later (at "1%" complete), I gave up and decided to let it run when I go home. For comparison, my home computer uses Ext4 for all of the partitions, and the disk checks (which run around once a week) only take a couple of seconds. I remember reading that having fast disk checks was a priority, but I don't know how they could do that. So, how does Ext4 do disk checks so fast? Is there some huge breakthrough in doing this after NTFS came out (~10 years ago)? Note: The NTFS disk is ~300 GB and the Ext4 disk is ~500 GB. Both are about half full.

    Read the article

  • Small Business Server 2011 and Remote access to documents

    - by Tim Long
    Assume I'm working away from the office; it's a hotel computer with Windows 7 and Office 2010, and it's fast - so the best possible conditions. Using Companyweb, every time I open a document I have to go through the logon process, which seems odd. Is this a 'by design' feature, or is something wrong with my configuration? When I do open the documents, are they being stored somewhere locally that I should later delete on this computer, or are they in a temporary file?

    Read the article

  • Sunrise / set calculations

    - by dassouki
    I'm trying to calculate the sunset / sunrise times using Python, based on the link provided below. My results, done through Excel and Python, do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found at http://transpotools.com/sun_time.xls

        # Created on 2010-03-28
        # @author: dassouki
        # @source: http://williams.best.vwh.net/sunrise_sunset_algorithm.htm
        # @summary: this is based on the Nautical Almanac Office, United States
        #           Naval Observatory.

        import math, sys

        class TimeOfDay(object):

            def calculate_time(self, in_day, in_month, in_year, lat, long,
                               is_rise, utc_time_zone):
                # is_rise is a bool; when it's true it indicates rise,
                # and if it's false it indicates setting time

                # set zenith
                zenith = 96
                # official = 90 degrees 50'
                # civil = 96 degrees
                # nautical = 102 degrees
                # astronomical = 108 degrees

                # 1- calculate the day of year
                n1 = math.floor(275 * in_month / 9)
                n2 = math.floor((in_month + 9) / 12)
                n3 = (1 + math.floor(in_year - 4 * math.floor(in_year / 4) + 2) / 3)
                new_day = n1 - (n2 * n3) + in_day - 30
                print "new_day ", new_day

                # 2- calculate rising / setting time
                if is_rise:
                    rise_or_set_time = new_day + ((6 - (long / 15)) / 24)
                else:
                    rise_or_set_time = new_day + ((18 - (long / 15)) / 24)
                print "rise / set", rise_or_set_time

                # 3- calculate sun mean anomaly
                sun_mean_anomaly = (0.9856 * rise_or_set_time) - 3.289
                print "sun mean anomaly", sun_mean_anomaly

                # 4- calculate true longitude
                true_long = (sun_mean_anomaly
                             + (1.916 * math.sin(math.radians(sun_mean_anomaly)))
                             + (0.020 * math.sin(2 * math.radians(sun_mean_anomaly)))
                             + 282.634)
                print "true long ", true_long

                # make sure true_long is within 0, 360
                if true_long < 0:
                    true_long = true_long + 360
                elif true_long > 360:
                    true_long = true_long - 360
                print "true long (360 if) ", true_long

                # 5- calculate s_r_a (sun right ascension)
                s_r_a = math.degrees(math.atan(0.91764 * math.tan(math.radians(true_long))))
                print "s_r_a is ", s_r_a

                # make sure it's between 0 and 360
                if s_r_a < 0:
                    s_r_a = s_r_a + 360
                elif true_long > 360:
                    s_r_a = s_r_a - 360
                print "s_r_a (modified) is ", s_r_a

                # s_r_a has to be in the same quadrant as true_long
                true_long_quad = (math.floor(true_long / 90)) * 90
                s_r_a_quad = (math.floor(s_r_a / 90)) * 90
                s_r_a = s_r_a + (true_long_quad - s_r_a_quad)
                print "s_r_a (quadrant) is ", s_r_a

                # convert s_r_a to hours
                s_r_a = s_r_a / 15
                print "s_r_a (to hours) is ", s_r_a

                # 6- calculate sun declination in terms of cos and sin
                sin_declanation = 0.39782 * math.sin(math.radians(true_long))
                cos_declanation = math.cos(math.asin(sin_declanation))
                print " sin/cos declanations ", sin_declanation, ", ", cos_declanation

                # sun local hour
                cos_hour = (math.cos(math.radians(zenith))
                            - (sin_declanation * math.sin(math.radians(lat)))
                            / (cos_declanation * math.cos(math.radians(lat))))
                print "cos_hour ", cos_hour

                # extreme north / south
                if cos_hour > 1:
                    print "Sun Never Rises at this location on this date, exiting"
                    # sys.exit()
                elif cos_hour < -1:
                    print "Sun Never Sets at this location on this date, exiting"
                    # sys.exit()
                print "cos_hour (2)", cos_hour

                # 7- sunrise / sunset local time calculations
                if is_rise:
                    sun_local_hour = (360 - math.degrees(math.acos(cos_hour))) / 15
                else:
                    sun_local_hour = math.degrees(math.acos(cos_hour)) / 15
                print "sun local hour ", sun_local_hour

                sun_event_time = (sun_local_hour + s_r_a
                                  - (0.06571 * rise_or_set_time) - 6.622)
                print "sun event time ", sun_event_time

                # final result
                time_in_utc = sun_event_time - (long / 15) + utc_time_zone
                return time_in_utc

        # test through main
        def main():
            print "Time of day App "
            # test: fredericton, NB
            # answer: 7:34 am
            long = 66.6
            lat = -45.9
            utc_time = -4
            d = 3
            m = 3
            y = 2010
            is_rise = True
            tod = TimeOfDay()
            print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time)

        if __name__ == "__main__":
            main()

    Read the article

  • reverse this function

    - by ooo
    I have code that takes a C# DateTime and converts it into a long to plot in the "flot" graph. Here is the code:

        public static long GetJavascriptTimestamp(DateTime input)
        {
            TimeSpan span = new TimeSpan(DateTime.Parse("1/1/1970").Ticks);
            DateTime time = input.Subtract(span);
            return (long)(time.Ticks / 10000);
        }

    I now need an opposite function, where I take this long value and get the C# DateTime object back. Any idea if the above method can be reversed?
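    A minimal sketch of the inverse (the method name here is just illustrative), assuming the long value really is the output of GetJavascriptTimestamp above, i.e. whole milliseconds since 1970-01-01; any sub-millisecond ticks truncated by the forward conversion cannot be recovered:

        public static DateTime GetDateTimeFromJavascriptTimestamp(long timestamp)
        {
            // Reverse the steps above: the value is milliseconds since the epoch,
            // so scale back to ticks (1 ms = 10,000 ticks) and add the epoch offset.
            DateTime epoch = DateTime.Parse("1/1/1970");
            return epoch.AddTicks(timestamp * 10000);
        }

    Equivalently, epoch.AddMilliseconds(timestamp) produces the same DateTime.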

    Read the article

  • java casting confusion

    - by Stardust
    Could anyone please tell me why the first cast below results in a compile-time error, but the second one does not?

        Long l = (Long)Math.pow(5,2);  // compile-time error
        long l = (long)Math.pow(5,2);  // compiles fine

    Read the article

  • Objective-C Decimal to Base 16 Hex conversion

    - by JustinXXVII
    Does anyone have a code snippet or a class that will take a long long and turn it into a 16-character hex string? I'm looking to take data like this:

        long long decimalRepresentation = 1719886131591410351;

    and turn it into this:

        //Base 16 Hex Output: 17DE435307A07300

    The %x format specifier doesn't want to work for me:

        NSLog(@"Hex: %x",decimalRepresentation); //console : "Hex: 7a072af"

    As you can see, that's not even close. Any help is truly appreciated!

    Read the article

  • c# interop with ghostscript

    - by yodaj007
    I'm trying to access some Ghostscript functions like so:

        [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")]
        public static extern int Foo(gsapi_revision_t x, int len);

        public struct gsapi_revision_t
        {
            [MarshalAs(UnmanagedType.LPTStr)] string product;
            [MarshalAs(UnmanagedType.LPTStr)] string copyright;
            long revision;
            long revisiondate;
        }

        public static void Main()
        {
            gsapi_revision_t foo = new gsapi_revision_t();
            Foo(foo, Marshal.SizeOf(foo));
        }

    This corresponds with these definitions from the iapi.h header from Ghostscript:

        typedef struct gsapi_revision_s {
            const char *product;
            const char *copyright;
            long revision;
            long revisiondate;
        } gsapi_revision_t;

        GSDLLEXPORT int GSDLLAPI gsapi_revision(gsapi_revision_t *pr, int len);

    But my code is reading nothing into the string fields. If I add 'ref' to the function, it reads gibberish. However, the following code reads in the data just fine:

        public struct gsapi_revision_t
        {
            IntPtr product;
            IntPtr copyright;
            long revision;
            long revisiondate;
        }

        public static void Main()
        {
            gsapi_revision_t foo = new gsapi_revision_t();
            IntPtr x = Marshal.AllocHGlobal(20);
            for (int i = 0; i < 20; i++)
                Marshal.WriteInt32(x, i, 0);
            int result = Foo(x, 20);
            IntPtr productNamePtr = Marshal.ReadIntPtr(x);
            IntPtr copyrightPtr = Marshal.ReadIntPtr(x, 4);
            long revision = Marshal.ReadInt64(x, 8);
            long revisionDate = Marshal.ReadInt64(x, 12);
            byte[] dest = new byte[1000];
            Marshal.Copy(productNamePtr, dest, 0, 1000);
            string name = Read(productNamePtr);
            string copyright = Read(copyrightPtr);
        }

        public static string Read(IntPtr p)
        {
            List<byte> bits = new List<byte>();
            int i = 0;
            while (true)
            {
                byte b = Marshal.ReadByte(new IntPtr(p.ToInt64() + i));
                if (b == 0)
                    break;
                bits.Add(b);
                i++;
            }
            return Encoding.ASCII.GetString(bits.ToArray());
        }

    So what am I doing wrong with marshaling?
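    For comparison, a hedged sketch of one common way to declare this for the 32-bit gsdll32.dll (the GhostscriptApi and PrintRevision names are just illustrative): the struct is passed by ref, the const char* members stay as IntPtr because Ghostscript owns that memory, and the C long fields map to a 4-byte int rather than C#'s 8-byte long, which would otherwise throw the field offsets off:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct gsapi_revision_t
        {
            public IntPtr product;    // const char*, memory owned by Ghostscript
            public IntPtr copyright;  // const char*, memory owned by Ghostscript
            public int revision;      // C 'long' is 32 bits in a 32-bit Windows DLL
            public int revisiondate;
        }

        public static class GhostscriptApi   // illustrative wrapper name
        {
            [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")]
            public static extern int gsapi_revision(ref gsapi_revision_t pr, int len);

            public static void PrintRevision()
            {
                var rev = new gsapi_revision_t();
                // gsapi_revision returns 0 when the struct size it is given matches.
                if (gsapi_revision(ref rev, Marshal.SizeOf(typeof(gsapi_revision_t))) == 0)
                {
                    // Copy the NUL-terminated ANSI strings without freeing the native memory.
                    Console.WriteLine(Marshal.PtrToStringAnsi(rev.product));
                    Console.WriteLine(Marshal.PtrToStringAnsi(rev.copyright));
                    Console.WriteLine("revision {0}, date {1}", rev.revision, rev.revisiondate);
                }
            }
        }

    With this layout, Marshal.SizeOf reports 16 bytes in a 32-bit process, which matches what the native side expects.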

    Read the article

  • " addressof " VB6 to VB.NET

    - by johan
    Hello, I'm having some problems converting my VB6 project to VB.NET. I don't understand what this "AddressOf" call should become in VB.NET. My VB6 code:

        Declare Function MP4_ClientStart Lib "hikclient.dll" (pClientinfo As CLIENT_VIDEOINFO, ByVal abab As Long) As Long

        Public Sub ReadDataCallBack(ByVal nPort As Long, pPacketBuffer As Byte, ByVal nPacketSize As Long)
            If Not bSaved_DVS Then
                bSaved_DVS = True
                HW_OpenStream hChannelHandle, pPacketBuffer, nPacketSize
            End If
            HW_InputData hChannelHandle, pPacketBuffer, nPacketSize
        End Sub

        nn1 = MP4_ClientStart(clientinfo, AddressOf ReadDataCallBack)

    Read the article

  • Start one function after another finishes (iPhone)

    - by user295944
    I have a function that updates the GPS coordinates (lat and long) and then sends a tweet with the lat and long info attached. I want the function to wait until the lat and long are filled in before the tweet goes through, but it seems to do both at the same time, so the lat and long don't get filled in. How do I make the GPS location update first?

    Read the article

  • JNA problem with char** (in dll)

    - by underline
    Hi, it is 'easy' to make a JNA wrapper for mapping functions exported from a DLL: long f1(int x) maps to just int, and long f2(char* y) maps to just char[]. But how do I deal with long f3(char** z)? I need f3's result (the long) as well as the value of z on the Java side. Please don't say the C++ code should be rewritten to avoid this :-)

    Read the article

  • which version of the code below is right?

    - by TheVillageIdiot
    Hi, I found this function in a utilities code file:

    Version 1:

        public static bool IsValidLong(string strLong)
        {
            bool result = true;
            try
            {
                long tmp = long.Parse(strLong);
            }
            catch (Exception ex)
            {
                result = false;
            }
            return result;
        }

    I want to replace this (and the validators for other types) with the following:

    Version 2:

        public static bool IsValidLong(string strLong)
        {
            long l;
            return long.TryParse(strLong, out l);
        }

    Which version is better, and why?
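    A quick usage sketch of version 2's approach, assuming C# 7 or later so the out variable can be declared inline or discarded:

        bool ok  = long.TryParse("12345", out long value);  // true; value == 12345
        bool bad = long.TryParse("abc", out _);             // false; no exception thrown

    Beyond being shorter, TryParse avoids throwing and catching an exception for every invalid string, which matters when a validator like this is called in a loop.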

    Read the article

  • Showplan Operator of the Week - Merge Interval

    When Fabiano agreed to undertake the epic task of describing each showplan operator, none of us quite predicted the interesting ways that the series helps to understand how the query optimizer works. With the Merge Interval, Fabiano comes up with some insights about the way that the Query optimizer handles overlapping ranges efficiently.

    Read the article

  • How to Communicate Between SSBS Applications Across Instances

    Arshad Ali demonstrates how to verify the SQL Server Service Broker (SSBS) configuration when both the Initiator and Target are in different SQL Server instances, how to communicate between them and how to monitor the conversation.

    Read the article

  • Google I/O 2010 - A JIT Compiler for Android's Dalvik VM

    Google I/O 2010 - A JIT Compiler for Android's Dalvik VM (Android 301), presented by Ben Cheng and Bill Buzbee. In this session we will outline the design of a JIT compiler suitable for embedded Android devices. Topics will include an architectural overview, the rationale for design decisions, and the special support for JIT verification, testing and tuning. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 01:00:14.

    Read the article

  • SQL Server DMVs in Action - Sample Chapter

    Read the first chapter of this new book on DMVs.

    Read the article

  • Row Oriented Security Using Triggers

    Handling security in an application can be a bit cumbersome. New author R Glen Cooper brings us a database design technique from the real world that can help you.

    Read the article

  • ShowPlan Operator of the Week - Merge Join

    Did you ever wonder how and why your indexes affect the performance of joins? Once you've read Fabiano Amorim's unforgettable explanation, you'll learn to love the MERGE operator, and plan your indexes so as to allow the Query Optimiser to use it.

    Read the article

  • Troubleshoot Database Concurrency in SQL Server with sp_locks

    sp_locks is a useful tool which can help a DBA in detecting and troubleshooting blocking and concurrency scenarios. This article demonstrates a worked example of using sp_locks to troubleshoot a database concurrency issue.

    Read the article

  • Working with SQL Server Profiler Trace Files

    In a previous tip we looked at the steps to Create a Trace Template in Profiler. In this tip we will look at a few more tips such as creating a trace template from an existing trace file and saving a trace file to a SQL Server table.

    Read the article
