Search Results

Search found 1349 results on 54 pages for 'sec'.


  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization, versus adding extra time and costs to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.
    Some of the key points made in the sessions included:
    • The focus of XBRL tools is moving from production to consumption.
    • As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    • XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    • The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    • XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes.
    • Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.
    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data. The 5 finalists included:
    • Advanced XBRL Processing from Oxide Solutions – an XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
    • Arelle – an API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    • Calcbench – an XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    • XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    • XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.
    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge. XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.
    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue, as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.
    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • In 10.10, USB 3.0 PCI Express card recognized by lspci but not lsusb or dmesg. How to fix?

    - by Paul
    Asus N PC, runs 10.10 x86_64 The Asus N comes with 4 usb 2.0 ports, each labelled 2.0 on the case. Attempting to add two usb 3.0 ports to be provided by a generic usb 3.0 pci express card installed in the pci expres slot. The new card says usb 3.0 and has the blue ports. The card is installed into the laptop unpowered, then the laptop is powered on and boots normally. Nothing happens when a USB 3.0 flash drive is inserted into the usb 3.0 port. uname -a Linux drpaulbrewer-N90SV 2.6.35.8 #1 SMP Fri Jan 14 15:54:11 EST 2011 x86_64 GNU/Linux lspci -v 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64 Kernel modules: sis-agp 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fa000000-fdefffff Prefetchable memory behind bridge: 00000000d0000000-00000000dfffffff Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [f4] Power Management version 2 Capabilities: [70] Subsystem: Silicon Integrated Systems [SiS] PCI-to-PCI bridge Kernel driver in use: pcieport 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) Flags: bus master, medium devsel, latency 0 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) (prog-if 80 [Master]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 128 I/O ports at 01f0 [size=8] I/O ports at 03f4 [size=1] I/O ports at 0170 [size=8] I/O ports at 0374 [size=1] I/O ports at ffe0 [size=16] Capabilities: [58] Power Management version 2 Kernel driver in use: pata_sis 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 20 Memory at f9fff000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 21 Memory at f9ffe000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller (prog-if 20 [EHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 22 Memory at f9ffd000 (32-bit, non-prefetchable) [size=4K] Capabilities: [50] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 Ethernet controller: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter (rev 02) Subsystem: ASUSTeK Computer Inc. Device 11f5 Flags: bus master, medium devsel, latency 0, IRQ 19 Memory at f9ffcc00 (32-bit, non-prefetchable) [size=128] I/O ports at cc00 [size=128] Capabilities: [40] Power Management version 2 Kernel driver in use: sis190 Kernel modules: sis190 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: ASUSTeK Computer Inc. 
Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 17 I/O ports at c800 [size=8] I/O ports at c400 [size=4] I/O ports at c000 [size=8] I/O ports at bc00 [size=4] I/O ports at b800 [size=16] I/O ports at b400 [size=128] Capabilities: [58] Power Management version 2 Kernel driver in use: sata_sis Kernel modules: sata_sis 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 Memory behind bridge: fdf00000-fdffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=06, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: fe000000-febfffff Prefetchable memory behind bridge: 00000000f6000000-00000000f8ffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller Subsystem: ASUSTeK Computer Inc. Device 17b3 Flags: bus master, medium devsel, latency 0, IRQ 18 Memory at f9ff4000 (32-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce GT 130M] (rev a1) (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 2021 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fc000000 (32-bit, non-prefetchable) [size=16M] Memory at d0000000 (64-bit, prefetchable) [size=256M] Memory at fa000000 (64-bit, non-prefetchable) [size=32M] I/O ports at dc00 [size=128] [virtual] Expansion ROM at fde80000 [disabled] [size=512K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Kernel driver in use: nvidia Kernel modules: nvidia-current, nouveau, nvidiafb 02:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Device 1a3b:1067 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fdff0000 (64-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [60] Express Legacy Endpoint, MSI 00 Capabilities: [90] MSI-X: Enable- Count=1 Masked- Kernel driver in use: ath9k Kernel modules: ath9k 03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at febfe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 lsusb Bus 003 Device 002: ID 0b05:1751 ASUSTek Computer, Inc. 
BT-253 Bluetooth Adapter Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 0bda:0158 Realtek Semiconductor Corp. USB 2.0 multicard reader Bus 001 Device 002: ID 04f2:b071 Chicony Electronics Co., Ltd 2.0M UVC Webcam / CNF7129 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub dmesg trying to post dmesg exceeded the stackexchange posting limit of 30K... but nothing there is usb 3.0

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled including gcc,g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.  The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below: /*    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.    Author:  Jonathon Coombes    Licence: GPL    Description: An auditing plugin that logs to syslog and                 can adjust the loglevel via the system variables. */ #include <stdio.h> #include <string.h> #include <mysql/plugin_audit.h> #include <syslog.h> There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h plugin, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit specific plugin api. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current implementation we need to add it into our source code and compile. > cd /usr/local/src/mysql-5.5.28/plugin > mkdir audit_syslog > cd audit_syslog A simple CMakeLists.txt file is created to manage the plugin compilation: MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY) Run the cmake  command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of api defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.  
/*   Plugin library descriptor */ mysql_declare_plugin(audit_syslog) {   MYSQL_AUDIT_PLUGIN,           /* plugin type                    */   &audit_syslog_descriptor,     /* descriptor handle               */   "audit_syslog",               /* plugin name                     */   "Author Name",                /* author                          */   "Simple Syslog Audit",        /* description                     */   PLUGIN_LICENSE_GPL,           /* licence                         */   audit_syslog_init,            /* init function     */   audit_syslog_deinit,          /* deinit function */   0x0001,                       /* plugin version                  */   NULL,                         /* status variables        */   NULL,                         /* system variables                */   NULL,                         /* no reserves                     */   0,                            /* no flags                        */ } mysql_declare_plugin_end; The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata information. The descriptors have an internally recognised version number so that plugins can be matched against the api on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin has a type-specific descriptor as well which details how the plugin is implemented for the specific purpose of that plugin type. /*   Plugin type-specific descriptor */ static struct st_mysql_audit audit_syslog_descriptor= {   MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */   NULL,                                                 /* release_thd function */   audit_syslog_notify,                                  /* notify function      */   { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |                     MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */ }; In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function which is activated when an event occurs on the system. The notify function is designed to activate on an event and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event: 1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database. 2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected. With most audit plugins, the principle behind the plugin is to track changes to the system over time and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the # of general events, the # of connection events and the total number of events.  
static volatile int total_number_of_calls; /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */ static volatile int number_of_calls_general; /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */ static volatile int number_of_calls_connection; The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin: /*  Initialize the plugin at server start or plugin installation. */ static int audit_syslog_init(void *arg __attribute__((unused))) {     openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);     total_number_of_calls= 0;     number_of_calls_general= 0;     number_of_calls_connection= 0;     return(0); } The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters are initialised to zero and a success is returned. If the init function is not defined, it will return success by default. /*  Terminate the plugin at server shutdown or plugin deinstallation. */ static int audit_syslog_deinit(void *arg __attribute__((unused))) {     closelog();     return(0); } The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external factors.  The function names are what we define in the general plugin structure, so these have to match otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type specific descriptor (audit_syslog_descriptor) which is audit_syslog_notify. /* Event notifier function */ static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event) { total_number_of_calls++; if (event_class == MYSQL_AUDIT_GENERAL_CLASS) { const struct mysql_event_general *event_general= (const struct mysql_event_general *) event; number_of_calls_general++; syslog(audit_loglevel,"%lu: User: %s Command: %s Query: %s\n", event_general->general_thread_id, event_general->general_user, event_general->general_command, event_general->general_query ); } else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS) { const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event; number_of_calls_connection++; syslog(audit_loglevel,"%lu: User: %s@%s[%s] Event: %d Status: %d\n", event_connection->thread_id, event_connection->user, event_connection->host, event_connection->ip, event_connection->event_subclass, event_connection->status ); } }   In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database.The event argument is then cast into the appropriate event structure depending on the class type, of general event or connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and ready to use. 
Log into the server and type: mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'; This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNISTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following: Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1 Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1 It appears that two of each event is being shown, but in actuality, these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.  So far, it seems that the logging is working with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible when the plugin is being run. Instead there needs to be a way to expose the plugin specific information to the service and vice versa. This could be done via the information_schema plugin api, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration: /*  Plugin status variables for SHOW STATUS */ static struct st_mysql_show_var audit_syslog_status[]= {   { "Audit_syslog_total_calls",     (char *) &total_number_of_calls,     SHOW_INT },   { "Audit_syslog_general_events",     (char *) &number_of_calls_general,     SHOW_INT },   { "Audit_syslog_connection_events",     (char *) &number_of_calls_connection,     SHOW_INT },   { 0, 0, SHOW_INT } };   The structure is simply the name that will be displaying in the mysql service, the address of the associated variables, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name for variables from other plugin, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following: mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 2     | | Audit_syslog_total_calls       | 3     | +--------------------------------+-------+ 3 rows in set (0.00 sec) The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system. 
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility. /* Plugin system variables for SHOW VARIABLES */ static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,                         PLUGIN_VAR_RQCMDARG,                         "User can specify the log level for auditing",                         audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE"); static struct st_mysql_sys_var* audit_syslog_sysvars[] = {     MYSQL_SYSVAR(loglevel),     NULL }; So now the system variable 'loglevel' is defined for the plugin and associated to the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is defined in the general plugin descriptor to associate the link between the plugin and the system and how much they interact. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric such as integers for the variable types, the validate function is often not required as MySQL will handle the automatic check and validation of simple types. /* longest valid value */ #define MAX_LOGLEVEL_SIZE 100 /* hold the valid values */ static const char *possible_modes[]= { "LOG_ERROR", "LOG_WARNING", "LOG_NOTICE", NULL };  static int audit_loglevel_check(     THD*                        thd,    /*!< in: thread handle */     struct st_mysql_sys_var*    var,    /*!< in: pointer to system                                         variable */     void*                       save,   /*!< out: immediate result                                         for update function */     struct st_mysql_value*      value)  /*!< in: incoming string */ {     char buff[MAX_LOGLEVEL_SIZE];     const char *str;     const char **found;     int length;     length= sizeof(buff);     if (!(str= value->val_str(value, buff, &length)))         return 1;     /*         We need to return a pointer to a locally allocated value in "save".         Here we pick to search for the supplied value in an global array of         constant strings and return a pointer to one of them.         The other possiblity is to use the thd_alloc() function to allocate         a thread local buffer instead of the global constants.     */     for (found= possible_modes; *found; found++)     {         if (!strcmp(*found, str))         {             *(const char**)save= *found;             return 0;         }     }     return 1; } The validation function is simply to take the value being passed in via the SET GLOBAL VARIABLE command and check if it is one of the pre-defined values allowed  in our possible_values array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function. 
static void audit_loglevel_update(     THD*                        thd,        /*!< in: thread handle */     struct st_mysql_sys_var*    var,        /*!< in: system variable                                             being altered */     void*                       var_ptr,    /*!< out: pointer to                                             dynamic variable */     const void*                 save)       /*!< in: pointer to                                             temporary storage */ {     /* assign the new value so that the server can read it */     *(char **) var_ptr= *(char **) save;     /* assign the new value to the internal variable */     audit_loglevel= *(char **) save; } Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows: mysql> show global variables like "audit%"; +-----------------------+------------+ | Variable_name         | Value      | +-----------------------+------------+ | audit_syslog_loglevel | LOG_NOTICE | +-----------------------+------------+ 1 row in set (0.00 sec) mysql> set global audit_syslog_loglevel="LOG_ERROR"; Query OK, 0 rows affected (0.00 sec) mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 11    | | Audit_syslog_total_calls       | 12    | +--------------------------------+-------+ 3 rows in set (0.00 sec) mysql> show global variables like "audit%"; +-----------------------+-----------+ | Variable_name         | Value     | +-----------------------+-----------+ | audit_syslog_loglevel | LOG_ERROR | +-----------------------+-----------+ 1 row in set (0.00 sec)   So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details and provides a mechanism to change the logging level interactively via the standard system methods of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved and simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.

    Read the article

  • C# - Repeating a method call using timers

    - by Jeremy Rudd
    In a VSTO add-in I'm developing, I need to execute a method with a specific delay. The tricky part is that the method may take anywhere from 0.1 sec to 1 sec to execute. I'm currently using a System.Timers.Timer like this:
    private Timer tmrRecalc = new Timer(); // tmrRecalc.Interval = 500 milliseconds
    private void tmrRecalc_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        // stop the timer, do the task
        tmrRecalc.Stop();
        Calc.recalcAll();
        // restart the timer to repeat after 500 ms
        tmrRecalc.Start();
    }
    The timer basically starts, raises one Elapsed event, is stopped while the arbitrary-length task executes, and is then restarted. But the UI thread seems to hang for 3-5 seconds between each task. Do Timers have a 'warm-up' time to start? Is that why it takes so long for its first (and last) elapse? Which type of timer should I use instead?

    Read the article

  • socat usage for FIFO speed vs socket speed on localhost

    - by Fishy
    Hello. As per a suggestion on Stack Overflow, I want to compare IPC on a single machine using a) TCP sockets from localhost to localhost, and b) FIFOs (between Java and C). To answer (a), I used netcat to gauge transfer speed (91 MBytes/sec)[1]. For (b), the question is: how can I test FIFO write speed using socat? My approach (where /tmp/gus is created using mkfifo on RHEL): dd if=/dev/zero of=/tmp/gus bs=1G count=1, but I get: 1073741824 bytes (1.1 GB) copied, 1.1326 seconds, 948 MB/s. Does this mean writing to a FIFO is ~10 times faster? Or is my experiment completely wrong? Thank you, Sporsi [1] From machine A to B across a 1 Gbps link, this number dropped to ~80 MBytes/sec - I expected localhost to be much higher ...
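    Not an answer on the socat side, but as a sanity check on those numbers, here is a rough Python sketch (the path /tmp/gus_bench, the port and the chunk sizes are made up for illustration; Linux is assumed) that times a named FIFO and a localhost TCP socket under the same conditions: same chunk size, same total volume, and a reader actually draining the other end. Comparing a netcat transfer against a single huge dd write tends to measure buffering and block-size effects rather than the transports themselves.

    import os, socket, threading, time

    CHUNK = 1 << 20                      # 1 MiB per write
    TOTAL = 256 * CHUNK                  # 256 MiB in total
    payload = b"\0" * CHUNK

    def drain(f):
        # Read until the writer closes its end.
        while f.read(CHUNK):
            pass

    def timed_writes(write, close):
        start = time.time()
        for _ in range(TOTAL // CHUNK):
            write(payload)
        close()
        return TOTAL / (time.time() - start) / 2**20   # MiB/s

    def fifo_test(path="/tmp/gus_bench"):
        if not os.path.exists(path):
            os.mkfifo(path)
        # the read-side open() inside the thread blocks until the writer opens the FIFO
        reader = threading.Thread(target=lambda: drain(open(path, "rb")))
        reader.start()
        out = open(path, "wb")
        rate = timed_writes(out.write, out.close)
        reader.join()
        os.unlink(path)
        return rate

    def tcp_test(port=50007):
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        reader = threading.Thread(target=lambda: drain(srv.accept()[0].makefile("rb")))
        reader.start()
        cli = socket.create_connection(("127.0.0.1", port))
        rate = timed_writes(cli.sendall, cli.close)
        reader.join()
        srv.close()
        return rate

    print("FIFO : %.0f MiB/s" % fifo_test())
    print("TCP  : %.0f MiB/s" % tcp_test())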

    Read the article

  • Getting useful emails from Hudson instead of tail of ant log

    - by Rick
    A team member of mine recently set up some Hudson continuous-integration builds for a number of our development code bases. It uses the built-in ant integration, configured in a simple way. While it is very helpful and I recommend it strongly, I was wondering how to get more concise/informative/useful emails instead of just the tail of the ant build log. E.g., I don't want this:
    > [...truncated 36530 lines...]
    > [junit] Tests run: 32, Failures: 0, Errors: 0, Time elapsed: 0.002 sec
    ... (hundreds of lines omitted) ...
    > [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 0.001 sec
    > [junit] Tests FAILED
    >
    > BUILD FAILED
    I assume that I could skip the built-in ant support and send the build log through a grep script, but I was hoping there was a more integrated or elegant option.
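    If the grep-script route is taken, a small filter along these lines could be a starting point. It is only a sketch: the patterns assume the plain "[junit]" summary lines shown above, and real logs may need different matching. It keeps the failing suites and the final build status and drops everything else, so the mail body stays short.

    import re, sys

    def summarize(log_lines):
        keep = []
        for line in log_lines:
            if 'BUILD FAILED' in line or 'BUILD SUCCESSFUL' in line:
                keep.append(line.rstrip())
            elif '[junit]' in line and ('FAILED' in line
                                        or re.search(r'Failures: [1-9]', line)
                                        or re.search(r'Errors: [1-9]', line)):
                keep.append(line.rstrip())
        return '\n'.join(keep)

    if __name__ == '__main__':
        # e.g.  python summarize_antlog.py < build.log
        print(summarize(sys.stdin))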

    Read the article

  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
    I have the following tables (removed columns that aren't used for my examples): CREATE TABLE `person` ( `id` int(11) NOT NULL, `name` varchar(1024) NOT NULL, `sortname` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `sortname` (`sortname`(255)), KEY `name` (`name`(255)) ); CREATE TABLE `personalias` ( `id` int(11) NOT NULL, `person` int(11) NOT NULL, `name` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `person` (`person`), KEY `name` (`name`(255)) ) Currently, I'm using this query which works just fine: select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | 1 | SIMPLE | p | index_merge | name,sortname | name,sortname | 767,767 | NULL | 3 | Using sort_union(name,sortname); Using where | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ 1 row in set (0.00 sec) Now I'd like to extend this query to also consider aliases. First, I've tried using a join: select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | 1 | SIMPLE | a | ALL | ref,name | NULL | NULL | NULL | 87401 | Using temporary | | 1 | SIMPLE | p | eq_ref | PRIMARY,name,sortname | PRIMARY | 4 | musicbrainz.a.ref | 1 | Using where | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ 2 rows in set (0.00 sec) This looks bad: no index, 87401 rows, using temporary. Using temporary only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it. 
Next, I've tried to replace the join with a subquery: select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer'); mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer'); +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | 1 | PRIMARY | p | ALL | name,sortname | NULL | NULL | NULL | 540309 | Using where | | 2 | DEPENDENT SUBQUERY | a | index_subquery | person,name | person | 4 | func | 1 | Using where | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ 2 rows in set (0.00 sec) Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well. Why doesn't MySQL use any indices for both of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be another way to do this with a single query. I'm currently out of ideas though. Could I somehow force MySQL into using another (better) query plan?

    Read the article

  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database. ab -n 10000 -c 100: 285.69 [#/sec] (mean). I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.rc2 together with the JDBC Postgres adapter and GlassFish. As the performance is really poor, I also started running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Anyway, I only get ab -n 10000 -c 100: 43.88 [#/sec] (mean). Thread_safe! is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top). ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java] I wonder what I'm doing wrong and how to get better performance with JRuby than with REE? Thanks, Corin

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP set ups handily. However all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating. My testing box: Core i7-2600 Intel CPU (has 8 threads with 4 cores) 8GB DDR3 RAM Fedora 16 64bit Node.js v0.6.13 Nginx v1.0.13 PHP v5.3.10 (with PHP-FPM) My test scripts: Node.js script var cluster = require('cluster'); var http = require('http'); var numCPUs = require('os').cpus().length; if (cluster.isMaster) { // Fork workers. for (var i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('death', function (worker) { console.log('worker ' + worker.pid + ' died'); }); } else { // Worker processes have an HTTP server. http.Server(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'); for (var i = 0; i < 10000; i++) { res.write('<p>Hello world</p>\n'); } res.end('</body>\n</html>'); }).listen(80); } This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html PHP script <?php echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n"; for ($i = 0; $i < 10000; $i++) { echo "<p>Hello world</p>\n"; } echo "</body>\n</html>"; My results Node.js $ ab -n 500 -c 20 http://speedtest.dev/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: Server Hostname: speedtest.dev Server Port: 80 Document Path: / Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 14.603 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95066500 bytes HTML transferred: 95035000 bytes Requests per second: 34.24 [#/sec] (mean) Time per request: 584.123 [ms] (mean) Time per request: 29.206 [ms] (mean, across all concurrent requests) Transfer rate: 6357.45 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 94 547 405.4 424 2516 Waiting: 0 331 399.3 216 2284 Total: 95 547 405.4 424 2516 Percentage of the requests served within a certain time (ms) 50% 424 66% 607 75% 733 80% 813 90% 1084 95% 1325 98% 1843 99% 2062 100% 2516 (longest request) PHP/Nginx $ ab -n 500 -c 20 http://speedtest.dev/test.php This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: nginx/1.0.13 Server Hostname: 
speedtest.dev Server Port: 80 Document Path: /test.php Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 0.130 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95109000 bytes HTML transferred: 95035000 bytes Requests per second: 3849.11 [#/sec] (mean) Time per request: 5.196 [ms] (mean) Time per request: 0.260 [ms] (mean, across all concurrent requests) Transfer rate: 715010.65 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 1 Processing: 3 5 0.7 5 7 Waiting: 1 4 0.7 4 7 Total: 3 5 0.7 5 7 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 6 90% 6 95% 6 98% 6 99% 6 100% 7 (longest request) Additional details Again what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well, however with these test results (which I really hope I made a mistake with - as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.

    Read the article

  • Table alias has an effect with execution time?

    - by RedFux227
    So I have this query:
    SELECT A.e_cd "INFLATE" , A.ccgpt "1" , ( SELECT J.comp FROM JBB J WHERE J.e_cd=A.e_cd AND emplo_RCD=:1 AND J.Dte = ( SELECT MAX(J1.Dte) FROM JBB J1 WHERE J.e_cd=J1.e_cd AND TO_CHAR(J1.Dte,'MM') <=A.mth_to AND TO_CHAR(J1.Dte,'YYYY') <=A.year ) AND J.seq = ( SELECT MAX(J2.seq) FROM PS_JOB J2 WHERE J2.e_cd=J.e_cd AND J2.Dte=J.Dte)) "Company" FROM PPS A WHERE orcd=:2 AND rcd=:3 AND tx_cd=:4 AND year=:5
    With this one:
    SELECT A.e_cd "INFLATE" , A.ccgpt "1" , ( SELECT J.comp FROM JBB J WHERE J.e_cd=A.e_cd AND emplo_RCD=:1 AND J.Dte = ( SELECT MAX(J1.Dte) FROM JBB J1 WHERE J.e_cd=J1.e_cd AND TO_CHAR(J1.Dte,'MM') <=A.mth_to AND TO_CHAR(J1.Dte,'YYYY') <=A.year ) AND J.seq = ( SELECT MAX(J1.seq) **FROM PS_JOB J1 WHERE J1.e_cd=J.e_cd AND J1.Dte=J.Dte**)) "Company" FROM PPS A WHERE orcd=:2 AND rcd=:3 AND tx_cd=:4 AND year=:5
    The 1st query ran for about 3.20 sec with 8,134 buffer gets, and the 2nd ran for about 1.73 sec with 7,006 buffer gets. So, does the table alias somehow have an impact on the execution time / buffer gets? Thanks in advance!

    Read the article

  • agent-based simulation: performance issue: Python vs NetLogo & Repast

    - by max
    I'm replicating a small piece of the Sugarscape agent simulation model in Python 3. I found the performance of my code is ~3 times slower than that of NetLogo. Is it likely a problem with my code, or can it be an inherent limitation of Python? Obviously, this is just a fragment of the code, but that's where Python spends two-thirds of the run-time. I hope that if I wrote something really inefficient it might show up in this fragment:
    UP = (0, -1)
    RIGHT = (1, 0)
    DOWN = (0, 1)
    LEFT = (-1, 0)
    all_directions = [UP, DOWN, RIGHT, LEFT]

    # point is just a tuple (x, y)
    def look_around(self):
        max_sugar_point = self.point
        max_sugar = self.world.sugar_map[self.point].level
        min_range = 0
        random.shuffle(self.all_directions)
        for r in range(1, self.vision+1):
            for d in self.all_directions:
                p = ((self.point[0] + r * d[0]) % self.world.surface.length,
                     (self.point[1] + r * d[1]) % self.world.surface.height)
                if self.world.occupied(p):  # checks if p is in a lookup table (dict)
                    continue
                if self.world.sugar_map[p].level > max_sugar:
                    max_sugar = self.world.sugar_map[p].level
                    max_sugar_point = p
        if max_sugar_point is not self.point:
            self.move(max_sugar_point)
    Roughly equivalent code in NetLogo (this fragment does a bit more than the Python function above):
    ; -- The SugarScape growth and motion procedures. --
    to M  ; Motion rule (page 25)
      locals [ps p v d]
      set ps (patches at-points neighborhood) with [count turtles-here = 0]
      if (count ps > 0) [
        set v psugar-of max-one-of ps [psugar]             ; v is max sugar w/in vision
        set ps ps with [psugar = v]                        ; ps is legal sites w/ v sugar
        set d distance min-one-of ps [distance myself]     ; d is min dist from me to ps agents
        set p random-one-of ps with [distance myself = d]  ; p is one of the min dist patches
        if (psugar >= v and includeMyPatch?) [set p patch-here]
        setxy pxcor-of p pycor-of p                        ; jump to p
        set sugar sugar + psugar-of p                      ; consume its sugar
        ask p [setpsugar 0]                                ; .. setting its sugar to 0
      ]
      set sugar sugar - metabolism                         ; eat sugar (metabolism)
      set age age + 1
    end
    On my computer, the Python code takes 15.5 sec to run 1000 steps; on the same laptop, the NetLogo simulation running in Java inside the browser finishes 1000 steps in less than 6 sec. EDIT: Just checked Repast, using the Java implementation. It's also about the same as NetLogo at 5.4 sec. Recent comparisons between Java and Python suggest no advantage to Java, so I guess it's just my code that's to blame? EDIT: I understand MASON is supposed to be even faster than Repast, and yet it still runs Java in the end.
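    In CPython, a hot loop like look_around pays for every repeated attribute lookup (self.world.sugar_map, self.point[0], and so on). Below is a hedged sketch of the same method with those lookups hoisted into locals; it assumes the same world/agent objects as the fragment above and keeps the same search order and tie-breaking. Whether it closes the 3x gap is something to measure (for example with cProfile) rather than a given; larger wins usually come from restructuring the data rather than micro-tweaks.

    import random

    # Drop-in replacement for the method in the fragment above.
    def look_around(self):
        x, y = self.point
        world = self.world
        sugar_map = world.sugar_map
        occupied = world.occupied
        width = world.surface.length
        height = world.surface.height
        directions = list(all_directions)      # private copy, shuffled once per call as before
        random.shuffle(directions)

        best_point = self.point
        best_sugar = sugar_map[best_point].level

        for r in range(1, self.vision + 1):
            for dx, dy in directions:
                p = ((x + r * dx) % width, (y + r * dy) % height)
                if occupied(p):                 # same lookup-table check as before
                    continue
                level = sugar_map[p].level
                if level > best_sugar:
                    best_sugar = level
                    best_point = p

        if best_point is not self.point:
            self.move(best_point)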

    Read the article

  • set Image to Button

    - by Ivan
    Hello all, could somebody help me a little bit with my issue below? When I call myFunction, the images which I want to set on the buttons appear after 2 sec simultaneously, not one by one with a delay of 0.5 sec. More info: generatedNumbers is an array with four NSNumber elements (4,1,3,2); the buttons are set in a UIView via IB and are tagged (1,2,3,4).
    -(IBAction) myFunction:(id) sender {
        int i, value;
        for (i = 0; i < [generatedNumbers count]; i++) {
            value = [[generatedNumbers objectAtIndex:i] intValue];
            UIButton *button = (UIButton *)[self.view viewWithTag:i+1];
            UIImage *img = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", value]];
            [button setImage:img forState:UIControlStateNormal];
            [img release];
            usleep(500000);
        }
    }

    Read the article

  • In SQL / MySQL, can a Left Outer Join be used to find out the duplicates when there is no Primary ID

    - by Jian Lin
    I would like to try using Outer Join to find out duplicates in a table: If a table has Primary Index ID, then the following outer join can find out the duplicate names: mysql> select * from gifts; +--------+------------+-----------------+---------------------+ | giftID | name | filename | effectiveTime | +--------+------------+-----------------+---------------------+ | 2 | teddy bear | bear.jpg | 2010-04-24 04:36:03 | | 3 | coffee | coffee123.jpg | 2010-04-24 05:10:43 | | 6 | beer | beer_glass.png | 2010-04-24 05:18:12 | | 10 | heart | heart_shape.jpg | 2010-04-24 05:11:29 | | 11 | ice tea | icetea.jpg | 2010-04-24 05:19:53 | | 12 | cash | cash.png | 2010-04-24 05:27:44 | | 13 | chocolate | choco.jpg | 2010-04-25 04:04:31 | | 14 | coffee | latte.jpg | 2010-04-27 05:49:52 | | 15 | coffee | espresso.jpg | 2010-04-27 06:03:03 | +--------+------------+-----------------+---------------------+ 9 rows in set (0.00 sec) mysql> select * from gifts g1 LEFT JOIN (select * from gifts group by name) g2 on g1.giftID = g2.giftID where g2.giftID IS NULL; +--------+--------+--------------+---------------------+--------+------+----------+---------------+ | giftID | name | filename | effectiveTime | giftID | name | filename | effectiveTime | +--------+--------+--------------+---------------------+--------+------+----------+---------------+ | 14 | coffee | latte.jpg | 2010-04-27 05:49:52 | NULL | NULL | NULL | NULL | | 15 | coffee | espresso.jpg | 2010-04-27 06:03:03 | NULL | NULL | NULL | NULL | +--------+--------+--------------+---------------------+--------+------+----------+---------------+ 2 rows in set (0.00 sec) But what if the table doesn't have a Primary Index ID, then can an outer join still be used to find out duplicates?

    Read the article

  • Schedule task in android

    - by Maneesh
    I am using the code below for scheduling a task in Android, but it's not giving any results. Please advise.
    int delay = 5000;   // delay for 5 sec.
    int period = 1000;  // repeat every sec.
    Timer timer = new Timer();
    timer.scheduleAtFixedRate(new TimerTask() {
        public void run() {
            Toast.makeText(getApplicationContext(), "RUN!", Toast.LENGTH_SHORT).show();
        }
    }, delay, period);

    Read the article

  • Adobe Livecycle performance

    - by Fabio
    I have installed Adobe LiveCycle in order to convert MS Word files to PDF from a web app. Specifically, I use the DocConverter tool. Previously I used the OpenOffice UNO SDK, but I found problems with particular documents. Now the conversion is OK, but the conversion time is huge. These are the times to convert documents of different sizes via OpenOffice and via LiveCycle. Could you suggest anything?
    SIZE (bytes)    OpenOffice (sec)    Adobe LiveCycle (sec)
    24064           1                   8
    50688           0                   3
    100864          0                   3
    253952          0                   5
    509440          1                   5
    1017856         5                   18
    2098688         8                   10
    4042240         19                  45
    6281216         0                   9
    8212480         32                  125

    Read the article

  • I need a programmatic way to perform the fastest 'trans-wrap' of mov to mp4 in an iPhone/iPad application

    - by user1307877
    I want to change the container of .mov video files that I pick using UIImagePickerController and compress via AVAssetExportSession with AVAssetExportPresetMediumQuality and shouldOptimizeForNetworkUse = YES to an .mp4 container. I need a programmatic way / sample code to perform the fastest 'trans-wrap' in an iPhone/iPad application. I tried to set the AVAssetExportSession.outputFileType property to AVFileTypeMPEG4, but it is not supported and I got an exception. I tried to do this transform using AVAssetWriter by specifying fileType:AVFileTypeMPEG4; I actually got an .mp4 output file, but it was not a 'trans-wrap': the output file was 3x bigger than the source, and the conversion took 128 sec for a video with 60 sec duration. I need a solution that will run quickly and will keep the file size. Please help.

    Read the article

  • Generate a list of file names based on month and year arithmetic

    - by MacUsers
    How can I list the numbers 01 to 12 (one for each of the 12 months) in such a way that the current month always comes last and the oldest one comes first? In other words, if the number is greater than the current month, it's from the previous year. E.g. 02 is Feb, 2011 (the current month right now), 03 is March, 2010 and 09 is Sep, 2010, but 01 is Jan, 2011. In this case, I'd like to have [09, 03, 01, 02]. This is what I'm doing to determine the year:
    for inFile in os.listdir('.'):
        if inFile.isdigit():
            month = months[int(inFile)]
            if int(inFile) <= int(strftime("%m")):
                year = strftime("%Y")
            else:
                year = int(strftime("%Y"))-1
            mnYear = month + ", " + str(year)
    I don't have a clue what to do next. What should I do here?
    Update: I think I'd better upload the entire script for better understanding.
    #!/usr/bin/env python
    import os, sys
    from time import strftime
    from calendar import month_abbr

    vGroup = {}
    vo = "group_lhcb"
    SI00_fig = float(2.478)
    months = tuple(month_abbr)

    print "\n%-12s\t%10s\t%8s\t%10s" % ('VOs','CPU-time','CPU-time','kSI2K-hrs')
    print "%-12s\t%10s\t%8s\t%10s" % ('','(in Sec)','(in Hrs)','(*2.478)')
    print "=" * 58

    for inFile in os.listdir('.'):
        if inFile.isdigit():
            readFile = open(inFile, 'r')
            lines = readFile.readlines()
            readFile.close()
            month = months[int(inFile)]
            if int(inFile) <= int(strftime("%m")):
                year = strftime("%Y")
            else:
                year = int(strftime("%Y"))-1
            mnYear = month + ", " + str(year)
            for line in lines[2:]:
                if line.find(vo)==0:
                    g, i = line.split()
                    s = vGroup.get(g, 0)
                    vGroup[g] = s + int(i)
            sumHrs = ((vGroup[g]/60)/60)
            sumSi2k = sumHrs*SI00_fig
            print "%-12s\t%10s\t%8s\t%10.2f" % (mnYear,vGroup[g],sumHrs,sumSi2k)
            del vGroup[g]
    When I run the script, I get this:
    [root@serv07 usage]# ./test.py
    VOs             CPU-time    CPU-time   kSI2K-hrs
                    (in Sec)    (in Hrs)    (*2.478)
    ==================================================
    Jan, 2011      211201372       58667   145376.83
    Dec, 2010        5064337        1406     3484.07
    Feb, 2011       17506049        4862    12048.04
    Sep, 2010      210874275       58576   145151.33
    As I said in the original post, I'd like the result to be in this order instead:
    Sep, 2010      210874275       58576   145151.33
    Dec, 2010        5064337        1406     3484.07
    Jan, 2011      211201372       58667   145376.83
    Feb, 2011       17506049        4862    12048.04
    The files in the source directory read like this:
    [root@serv07 usage]# ls -l
    total 3632
    -rw-r--r-- 1 root root 1144972 Feb  9 19:23 01
    -rw-r--r-- 1 root root  556630 Feb 13 09:11 02
    -rw-r--r-- 1 root root  443782 Feb 11 17:23 02.bak
    -rw-r--r-- 1 root root 1144556 Feb 14 09:30 09
    -rw-r--r-- 1 root root  370822 Feb  9 19:24 12
    Did I give a better picture now? Sorry for not being very clear in the first place. Cheers!!
    Update @Mark Ransom: This is the result from Mark's suggestion:
    [root@serv07 usage]# ./test.py
    VOs             CPU-time    CPU-time   kSI2K-hrs
                    (in Sec)    (in Hrs)    (*2.478)
    ==========================================================
    Dec, 2010        5064337        1406     3484.07
    Sep, 2010      210874275       58576   145151.33
    Feb, 2011       17506049        4862    12048.04
    Jan, 2011      211201372       58667   145376.83
    As I said before, I'm looking for the result to be printed in this order: Sep, 2010 - Dec, 2010 - Jan, 2011 - Feb, 2011. Cheers!!
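    The ordering rule described above (months greater than the current month belong to last year, oldest first, current month last) can be expressed as a sort key. A small sketch, using a fixed "today" only to reproduce the example; drop that argument to use the real clock:

    from datetime import date

    def order_months(month_strs, today=None):
        # Month numbers greater than the current month are assumed to belong
        # to the previous year, so the current month always sorts last.
        today = today or date.today()
        def key(m):
            m = int(m)
            year = today.year if m <= today.month else today.year - 1
            return (year, m)
        return sorted(month_strs, key=key)

    print(order_months(['01', '12', '02', '09'], today=date(2011, 2, 14)))
    # -> ['09', '12', '01', '02']  i.e. Sep 2010, Dec 2010, Jan 2011, Feb 2011

    The existing script could then iterate over order_months([f for f in os.listdir('.') if f.isdigit()]) instead of the raw os.listdir('.') output, and the rows would come out oldest first with the current month last.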

    Read the article

  • How do I add a one-to-one relationship in MYSQL?

    - by alex
        +-------+-------------+------+-----+---------+-------+
        | Field | Type        | Null | Key | Default | Extra |
        +-------+-------------+------+-----+---------+-------+
        | pid   | varchar(99) | YES  |     | NULL    |       |
        +-------+-------------+------+-----+---------+-------+
        1 row in set (0.00 sec)

        +-------+---------------+------+-----+---------+-------+
        | Field | Type          | Null | Key | Default | Extra |
        +-------+---------------+------+-----+---------+-------+
        | pid   | varchar(2000) | YES  |     | NULL    |       |
        | recid | varchar(2000) | YES  |     | NULL    |       |
        +-------+---------------+------+-----+---------+-------+
        2 rows in set (0.00 sec)

    These are my tables. pid is just the id of the user, and "recid" is a recommended song for that user. I hope to have a list of pids, and then the recommended songs for each person. Of course, in my 2nd table, (pid, recid) would be a unique key. How do I do a one-to-one query for this?
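
    What is described here (one user, many recommended songs) is usually modelled as a one-to-many relationship and read back with a JOIN. A minimal sketch, assuming hypothetical table names users and recommendations and the mysql-connector-python client (table names and credentials are illustrative, not from the post):

        import mysql.connector

        conn = mysql.connector.connect(user="app", password="secret",
                                       host="localhost", database="music")
        cur = conn.cursor()

        # One row per (user, recommended song); users with no recommendations
        # still show up once thanks to the LEFT JOIN.
        cur.execute("""
            SELECT u.pid, r.recid
            FROM users AS u
            LEFT JOIN recommendations AS r ON r.pid = u.pid
            ORDER BY u.pid
        """)
        for pid, recid in cur:
            print(pid, recid)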

    Read the article

  • How to join two wav files using Python?

    - by kaushik
    I am using the Python programming language and I want to join two wav files, appending one at the end of the other. I found a question in the forum which suggests how to merge two wav files, i.e. add the contents of one wav file at a certain offset, but I want to join two wav files end to end. I also had a problem playing my own wav file using the winsound module: I was able to play the sound, but I had to use time.sleep for a certain time before playing any Windows sound. The disadvantage of this is that if I want to play a sound longer than time.sleep(N), after N seconds the Windows sound just overlaps the playing winsound and it stops. Can anyone help? Please kindly suggest how to solve these problems. Thanks in advance.
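
    Appending one wav to another can be sketched with the standard-library wave module, assuming both files share the same channel count, sample width and frame rate (if they differ, they would need to be converted first; the file names here are placeholders):

        import wave

        def append_wav(first, second, out):
            with wave.open(first, "rb") as w1, wave.open(second, "rb") as w2:
                # Raw frames can only be concatenated when the formats match.
                assert w1.getparams()[:3] == w2.getparams()[:3], \
                    "channels, sample width and frame rate must match"
                frames = w1.readframes(w1.getnframes()) + w2.readframes(w2.getnframes())
                with wave.open(out, "wb") as wout:
                    wout.setparams(w1.getparams())  # nframes is fixed up on close
                    wout.writeframes(frames)

        append_wav("a.wav", "b.wav", "joined.wav")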

    Read the article

  • How to skip all the column names in MySQL when the table has auto increment primary key?

    - by Jian Lin
    A table is:

        mysql> desc gifts;
        +---------------+-------------+------+-----+---------+----------------+
        | Field         | Type        | Null | Key | Default | Extra          |
        +---------------+-------------+------+-----+---------+----------------+
        | giftID        | int(11)     | NO   | PRI | NULL    | auto_increment |
        | name          | varchar(80) | YES  |     | NULL    |                |
        | filename      | varchar(80) | YES  |     | NULL    |                |
        | effectiveTime | datetime    | YES  |     | NULL    |                |
        +---------------+-------------+------+-----+---------+----------------+

    the following is ok:

        mysql> insert into gifts
            -> values (10, "heart", "heart_shape.jpg", now());
        Query OK, 1 row affected (0.05 sec)

    but is there a way to not specify the "10"... and just let each one be 11, 12, 13... ? I can do it using

        mysql> insert into gifts (name, filename, effectiveTime)
            -> values ("coffee", "coffee123.jpg", now());
        Query OK, 1 row affected (0.00 sec)

    but the column names need to be all specified. Is there a way that they don't have to be specified and the auto increment of primary key still works? thanks.
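
    One standard way to keep the column list implicit is to pass NULL (or DEFAULT) in the auto_increment position, so MySQL assigns the next id itself. A minimal sketch using mysql-connector-python (connection details are illustrative):

        import mysql.connector

        conn = mysql.connector.connect(user="app", password="secret",
                                       host="localhost", database="test")
        cur = conn.cursor()

        # NULL in the giftID slot lets auto_increment pick the next value, so all
        # columns can be given positionally without naming any of them.
        cur.execute("INSERT INTO gifts VALUES (NULL, %s, %s, NOW())",
                    ("coffee", "coffee123.jpg"))
        conn.commit()
        print(cur.lastrowid)  # the id MySQL generated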

    Read the article

  • Weblogic Apache plugin and session stickiness

    - by h4tech
    If two webservers are configured between a load balancer and a WebLogic cluster, will the two Apache servers maintain session stickiness? Say, for example, the load balancer forwards the first request to the first Apache, which in turn forwards it to the first WebLogic managed instance. Even if the second request from the same user is forwarded by the load balancer to the second Apache, will that Apache be able to forward it to the first WebLogic managed instance which served the first request, rather than to the second managed instance which is not aware of the session information at all? What should ideally be the behaviour of the WebLogic Apache plugin? The catch is that I don't want to enable session replication on the WebLogic cluster. Please help.

    Read the article

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge amount of short-lived objects and has only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options: -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC. Result: the minor GC takes 0.01~0.02 sec, the major GC takes 1~3 sec, and the minor GC happens constantly. How can I further improve or tune the JVM? A larger heap size, but will it take more time for GC? Larger NewSize and MaxNewSize (for the young generation)? Another collector? Parallel GC? Is it a good idea to let major GC take place more often, and how?

    Read the article

  • mysql foreign key problem.

    - by JP19
    Hi, What is wrong with the foreign key addition here:

        mysql> create table notes (
            ->   id int (11) NOT NULL auto_increment PRIMARY KEY,
            ->   note_type_id smallint(5) NOT NULL,
            ->   data TEXT NOT NULL,
            ->   created_date datetime NOT NULL,
            ->   modified_date timestamp NOT NULL on update now()
            -> ) Engine=InnoDB;
        Query OK, 0 rows affected (0.08 sec)

        mysql> create table notetypes (
            ->   id smallint (5) NOT NULL auto_increment PRIMARY KEY,
            ->   type varchar(255) NOT NULL UNIQUE
            -> ) Engine=InnoDB;
        Query OK, 0 rows affected (0.00 sec)

        mysql> alter table `notes` add constraint foreign key(`note_type_id`)
            ->   references `notetypes`.`id` on update cascade on delete restrict;
        ERROR 1005 (HY000): Can't create table './admin/#sql-43e_b762.frm' (errno: 150)

    Thanks JP
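
    A hedged guess at the cause, not from the post itself: MySQL expects the parent column in parentheses after the table name, i.e. REFERENCES notetypes (id), rather than `notetypes`.`id`. The column types (SMALLINT(5) NOT NULL on both sides) already match, so fixing the syntax is the likely remedy. A sketch of the corrected statement via mysql-connector-python (constraint name and credentials are illustrative):

        import mysql.connector

        conn = mysql.connector.connect(user="root", password="secret",
                                       host="localhost", database="admin")
        cur = conn.cursor()

        # The parent column belongs in parentheses after the referenced table name.
        cur.execute("""
            ALTER TABLE notes
              ADD CONSTRAINT fk_notes_note_type
              FOREIGN KEY (note_type_id) REFERENCES notetypes (id)
              ON UPDATE CASCADE ON DELETE RESTRICT
        """)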

    Read the article

  • Mysql query taking too much time

    - by aditya
    I have a problem related to a MySQL database. I am a Linux webserver admin and I am facing a problem with a MySQL query. The database is very small. I tried to track it in the logs and found that a query is taking a minimum of 5 sec to respond. The first page of the site comes from the database, and the clients are using a CMS. When the server gets a certain number of hits, the database server starts to respond very slowly and the wait time increases from 5 sec to many seconds more. I checked the slow query log:

        Query_time: 11.480138  Lock_time: 0.003837  Rows_sent: 921  Rows_examined: 3333
        SET timestamp=1346656767;
        SELECT `Tender`.`id`, `Tender`.`department_id`, `Tender`.`title_english`,
               `Tender`.`content_english`, `Tender`.`title_hindi`, `Tender`.`content_hindi`,
               `Tender`.`file_name`, `Tender`.`start_publish`, `Tender`.`end_publish`,
               `Tender`.`publish`, `Tender`.`status`, `Tender`.`createdBy`, `Tender`.`created`,
               `Tender`.`modifyBy`, `Tender`.`modified`
        FROM `mcms_tenders` AS `Tender`
        WHERE `Tender`.`department_id` IN (31, 33, 32, 30);

    Every entry in the log is the same; only the Query_time differs. Is there any way to tweak the performance?
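
    An editorial suggestion rather than part of the thread: Rows_examined exceeding Rows_sent hints that the WHERE clause is scanning rows, and an index on department_id typically helps a query that filters with IN (...). A sketch using mysql-connector-python (credentials and index name are illustrative):

        import mysql.connector

        conn = mysql.connector.connect(user="admin", password="secret",
                                       host="localhost", database="cms")
        cur = conn.cursor()

        # Give the IN (...) filter an index to use instead of scanning the table.
        cur.execute("ALTER TABLE mcms_tenders ADD INDEX idx_department_id (department_id)")

        # EXPLAIN should now report the new index in its 'key' column.
        cur.execute("EXPLAIN SELECT id FROM mcms_tenders "
                    "WHERE department_id IN (31, 33, 32, 30)")
        for row in cur:
            print(row)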

    Read the article

  • How to implement a timer for regular events?

    - by Torben Jonas
    I would like to implement some timers in my application. My goal is to provide an easy way to execute some function every x seconds/minutes, so I thought about implementing a 1 sec, a 5 sec and a 15 sec timer. The first thing I would like to update every second is the built-in clock (I don't know if there is any other solution in C#; I used this method in C++). Another use would be e.g. a sync function which shall be executed every xx seconds. My question is whether there are any useful tutorials on this topic. It is the first time that I would like to implement such a timer system in one of my applications and I do not know if there are any things I have to keep in mind. Thank you in advance for any answer :)
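
    The question targets C#, but the repeat-every-N-seconds pattern itself is language-neutral; here it is sketched in Python (the language of the other examples on this page) as a one-shot timer that reschedules itself. Function and variable names are illustrative:

        import threading, time

        def every(interval, fn):
            """Call fn() roughly every `interval` seconds until the returned event is set."""
            stop = threading.Event()
            def tick():
                if not stop.is_set():
                    fn()
                    threading.Timer(interval, tick).start()  # schedule the next run
            threading.Timer(interval, tick).start()
            return stop

        stop_clock = every(1, lambda: print(time.strftime("%H:%M:%S")))  # 1 sec "clock"
        time.sleep(5)
        stop_clock.set()  # no further ticks are scheduled after this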

    Read the article
