Search Results

Search found 22358 results on 895 pages for 'django raw query'.

Page 452/895

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this and not peaks and troughs with usage of the app?

    Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -
        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13
        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations
        To find out more information on how each of these runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a facebook game with about 50-100 concurrent users. Thanks, Rob

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have pre-faced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:

        using "html5"
        using "extratags"

        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br @company
                }
            }
        }

    The "using" keyword indicates which tags to use. "html5" might include all the html5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want to. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like:

        <script type="text/javascript" src="@content"></script>

    Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname").

    You will also be able to include partial templates like:

        for(@item in @item_list)
            include("item_partial", item=@item)

    Something like that I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :)

    Some questions:

    - Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    - Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    - Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it more clear that the "tag" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name-reuse.
    - Do you like the attribute syntax? (round brackets)
    - How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller.view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... trying to decide what approach to take.
    - Filtered variables inside quotes:
          "xxx {@var|filter} yyy"
          "xxx @{var|filter} yyy"
          "xxx @var|filter yyy"
      i.e., @ inside, @ outside, or no braces at all. I think no-braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x"; then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better because then we're consistent... the @ is always nudged up against the variable.

    I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • Remote Desktop (Vino-Server) connects but display doesn't work?

    - by kmassada
    Ubuntu comes by default with vino-server. I can remote into my machine and connect to it; however, the display inside my remote client is a mirror of my own desktop. I tried using one monitor, thinking that was the issue, but it still won't work.

        (vino-server:3608): EggSMClient-CRITICAL **: egg_sm_client_set_mode: assertion `global_client == NULL || global_client_mode == EGG_SM_CLIENT_MODE_DISABLED' failed
        25/07/2012 12:23:58 PM Autoprobing TCP port in (all) network interface
        25/07/2012 12:23:58 PM Listening IPv6://[::]:5900
        25/07/2012 12:23:58 PM Listening IPv4://0.0.0.0:5900
        25/07/2012 12:23:58 PM Autoprobing selected port 5900
        25/07/2012 12:23:58 PM Advertising security type: 'TLS' (18)
        25/07/2012 12:23:58 PM Re-binding socket to listen for VNC connections on TCP port 5900 in (all) interface
        25/07/2012 12:23:58 PM Listening IPv6://[::]:5900
        25/07/2012 12:23:58 PM Listening IPv4://0.0.0.0:5900
        25/07/2012 12:23:58 PM Clearing securityTypes
        25/07/2012 12:23:58 PM Advertising security type: 'TLS' (18)
        25/07/2012 12:23:58 PM Clearing securityTypes
        25/07/2012 12:23:58 PM Advertising security type: 'TLS' (18)
        25/07/2012 12:23:58 PM Advertising authentication type: 'No Authentication' (1)
        25/07/2012 12:23:58 PM Re-binding socket to listen for VNC connections on TCP port 5900 in (all) interface
        25/07/2012 12:23:58 PM Listening IPv6://[::]:5900
        25/07/2012 12:23:58 PM Listening IPv4://0.0.0.0:5900
        25/07/2012 12:23:58 PM Clearing securityTypes
        25/07/2012 12:23:58 PM Clearing authTypes
        25/07/2012 12:23:58 PM Advertising security type: 'TLS' (18)
        25/07/2012 12:23:58 PM Advertising authentication type: 'VNC Authentication' (2)
        25/07/2012 12:23:58 PM Clearing securityTypes
        25/07/2012 12:23:58 PM Clearing authTypes
        25/07/2012 12:23:58 PM Advertising security type: 'TLS' (18)
        25/07/2012 12:23:58 PM Advertising authentication type: 'VNC Authentication' (2)
        25/07/2012 12:23:58 PM Advertising security type: 'VNC Authentication' (2)
        (vino-server:3608): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent.
        (vino-server:3608): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent.
        25/07/2012 12:24:16 PM [IPv4] Got connection from client static-XXXX.bltmmd.fios.verizon.net
        25/07/2012 12:24:16 PM other clients:
        25/07/2012 12:24:29 PM Client Protocol Version 3.7
        25/07/2012 12:24:29 PM Advertising security type 18
        25/07/2012 12:24:29 PM Advertising security type 2
        25/07/2012 12:24:30 PM Client returned security type 18
        25/07/2012 12:24:30 PM Advertising authentication type 2
        25/07/2012 12:24:30 PM Client returned authentication type 2
        25/07/2012 12:24:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -258
        25/07/2012 12:24:37 PM Enabling NewFBSize protocol extension for client static-XXXX.bltmmd.fios.verizon.net
        25/07/2012 12:24:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type 1464686185
        25/07/2012 12:24:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -259
        25/07/2012 12:24:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -257
        (vino-server:3608): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent.
        25/07/2012 12:24:55 PM Client static-XXXX.bltmmd.fios.verizon.net gone
        25/07/2012 12:24:55 PM Statistics:
        25/07/2012 12:24:55 PM   key events received 0, pointer events 80
        25/07/2012 12:24:55 PM   framebuffer updates 43, rectangles 152, bytes 292401
        25/07/2012 12:24:55 PM   tight rectangles 152, bytes 292401
        25/07/2012 12:24:55 PM   raw bytes equivalent 11621332, compression ratio 39.744502
        25/07/2012 12:25:21 PM [IPv4] Got connection from client static-XXXX.bltmmd.fios.verizon.net
        25/07/2012 12:25:21 PM other clients:
        25/07/2012 12:25:28 PM Client Protocol Version 3.7
        25/07/2012 12:25:28 PM Advertising security type 18
        25/07/2012 12:25:28 PM Advertising security type 2
        25/07/2012 12:25:28 PM Client returned security type 18
        25/07/2012 12:25:29 PM Advertising authentication type 2
        25/07/2012 12:25:29 PM Client returned authentication type 2
        25/07/2012 12:25:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -258
        25/07/2012 12:25:37 PM Enabling NewFBSize protocol extension for client static-XXXX.bltmmd.fios.verizon.net
        25/07/2012 12:25:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type 1464686185
        25/07/2012 12:25:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -259
        25/07/2012 12:25:37 PM rfbProcessClientNormalMessage: ignoring unknown encoding type -257
        (vino-server:3608): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent.
        25/07/2012 12:25:47 PM Client static-XXXX.bltmmd.fios.verizon.net gone
        25/07/2012 12:25:47 PM Statistics:
        25/07/2012 12:25:47 PM   key events received 0, pointer events 7283
        25/07/2012 12:25:47 PM   framebuffer updates 27, rectangles 82, bytes 113354
        25/07/2012 12:25:47 PM   tight rectangles 82, bytes 113354
        25/07/2012 12:25:47 PM   raw bytes equivalent 5831432, compression ratio 51.444431

    A couple of things I notice. The following error occurs over and over again; the menu error seems to be caused by Ubuntu, and similar problems occur elsewhere (http://trac.wxwidgets.org/ticket/14292):

        (vino-server:3608): LIBDBUSMENU-GLIB-WARNING **: Trying to remove a child that doesn't believe we're it's parent.

    The second one also seems to be a display-related issue, and I can't seem to figure out a solution. I would really rather fix this issue than have to use the other VNC clients most people suggest.

        (vino-server:3608): EggSMClient-CRITICAL **: egg_sm_client_set_mode: assertion `global_client == NULL || global_client_mode == EGG_SM_CLIENT_MODE_DISABLED' failed

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?
    If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore: report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism, a best-practices scenario for columnstore.

    The Win
    Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, query performance with a SQL Server 2012 nonclustered columnstore increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details
    The table provides the raw data & summarizes the performance deltas.

                                         Logical Reads (8K pages)    CPU (ms)    Durn (ms)
        Columnstore                      160,323                     20,360      9,786
        Conventional Table & Indexes     9,053,423                   549,608     193,903
        Δ                                x56                         x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?
    Conflict Minerals has recently become a new buzz word in the manufacturing industry, particularly in electronics and medical devices. Known as the "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact for existing products
    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development
    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and if these could originate from problematic sources. The answer influences the cost and risk analysis during the development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and thus costly changes at a later stage can be avoided.

    Integrated compliance management
    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation, is fully covered end-to-end while being transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)
    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs. Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion
    The increasing number of product compliance regulations, for which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter increases as well, so that an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what's going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can't orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they choose to use.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

        public void receiveTextMessage(String message) {
            . . .
        }

    And a binary payload may be received as:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding a text payload into a POJO, the Decoder.Text (for inbound payload) and Encoder.Text (for outbound payload) interfaces need to be implemented. A sample implementation below shows how a text payload consisting of JSON structures can be encoded and decoded.

        public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

            private JsonObject jsonObject;

            @Override
            public MyMessage decode(String string) throws DecodeException {
                this.jsonObject = new JsonReader(new StringReader(string)).readObject();
                return this;
            }

            @Override
            public boolean willDecode(String string) {
                return true;
            }

            @Override
            public String encode(MyMessage myMessage) throws EncodeException {
                return myMessage.jsonObject.toString();
            }

            public JsonObject getObject() {
                return jsonObject;
            }
        }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

        @WebSocketEndpoint(value="/endpoint",
                           encoders={MyMessage.class},
                           decoders={MyMessage.class})
        public class MyEndpoint {
            public MyMessage receiveMessage(MyMessage message) {
                . . .
            }
        }

    Notice the updated method signature where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payload.

    Here are some references for you:

    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    - TOTD #183 - Getting Started with WebSocket in GlassFish
    - TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185 - Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
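
    Going by the same early-draft pattern as the MyMessage class above, a binary counterpart might look like the sketch below. This is only an illustrative sketch: the method signatures are assumed by analogy with the text example (ByteBuffer in place of String) and are not quoted from the specification.

        // Sketch only: mirrors the draft-style Text encoder/decoder above for binary frames.
        // Assumes java.nio.ByteBuffer and the same javax.websocket draft types as the article.
        public class MyBinaryMessage implements Decoder.Binary<MyBinaryMessage>, Encoder.Binary<MyBinaryMessage> {

            private byte[] payload;

            @Override
            public MyBinaryMessage decode(ByteBuffer bytes) throws DecodeException {
                this.payload = new byte[bytes.remaining()];
                bytes.get(this.payload);                 // copy the inbound frame into the POJO
                return this;
            }

            @Override
            public boolean willDecode(ByteBuffer bytes) {
                return true;                             // accept every binary frame
            }

            @Override
            public ByteBuffer encode(MyBinaryMessage message) throws EncodeException {
                return ByteBuffer.wrap(message.payload); // build the outbound frame from the POJO
            }
        }

    As with MyMessage, such a class would be registered through the endpoint's encoders and decoders attributes.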

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (it's easy). The product is built on top of Spring and JPA/Hibernate.

    - Developer checks out the project
    - Developer runs a command/script to set up the database with initial data
    - Developer starts the application (server) and begins developing/testing

    then:

    - Developer runs a command/script to flush the database and set it up with new initial data because database structures or the initial data bundle were changed

    What I want is to set up the required parts of my environment in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL, XML, take dumps of the database or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach.

    There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using xml/csv fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How to do this?
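
    One direction that fits the "run it as a Maven task" wish is a plain main() class that boots the same Spring context the application uses and calls the DAOs directly; it can then be wired to the exec-maven-plugin. The sketch below is only an illustration: the context file name, the Person entity and the PersonDao bean are hypothetical placeholders, not names from the question.

        // Minimal sketch, assuming Spring 3.x and an XML application context on the classpath.
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class DevDataLoader {

            public static void main(String[] args) {
                // Reuse the application's own context so the JPA/Hibernate mappings apply as usual.
                ClassPathXmlApplicationContext ctx =
                        new ClassPathXmlApplicationContext("applicationContext.xml");
                try {
                    PersonDao personDao = ctx.getBean(PersonDao.class); // hypothetical DAO bean
                    Person p = new Person();                            // hypothetical entity
                    p.setName("Dev User");
                    personDao.save(p);                                  // persists through the normal DAO path
                } finally {
                    ctx.close();
                }
            }
        }

    Running it through the exec-maven-plugin (exec:java with mainClass set to DevDataLoader) would make "flush and reseed" a one-command step for developers.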

    Read the article

  • MP3 Decoding on Android

    - by Rob Szumlakowski
    Hi. We're implementing a program for Android phones that plays audio streamed from the internet. Here's approximately what we do:

    - Download a custom encrypted format.
    - Decrypt to get chunks of regular MP3 data.
    - Decode MP3 data to raw PCM data in a memory buffer.
    - Pipe the raw PCM data to an AudioTrack.

    Our target devices so far are Droid and Nexus One. Everything works great on Nexus One, but the MP3 decode is too slow on Droid. The audio playback starts to skip if we put the Droid under load. We are not permitted to decode the MP3 data to SD card, but I know that's not our problem anyways. We didn't write our own MP3 decoder, but used MPADEC (http://sourceforge.net/projects/mpadec/). It's free and was easy to integrate with our program. We compile it with the NDK. After exhaustive analysis with various profiling tools, we're convinced that it's this decoder that is falling behind.

    Here's the options we're thinking about:

    - Find another MP3 decoder that we can compile with the Android NDK. This MP3 decoder would have to be either optimized to run on mobile ARM devices or maybe use integer-only math or some other optimizations to increase performance.
    - Since the built-in Android MediaPlayer service will take URLs, we might be able to implement a tiny HTTP server in our program and serve the MediaPlayer with the decrypted MP3s. That way we can take advantage of the built-in MP3 decoder.
    - Get access to the built-in MP3 decoder through the NDK. I don't know if this is possible.

    Does anyone have any suggestions on what we can do to speed up our MP3 decoding?

    -- Rob Sz

    Read the article

  • Iterating through a JSON object.

    - by user327508
    [ { "title": "Baby (Feat. Ludacris) - Justin Bieber", "description": "Baby (Feat. Ludacris) by Justin Bieber on Grooveshark", "link": "http://listen.grooveshark.com/s/Baby+Feat+Ludacris+/2Bqvdq", "pubDate": "Wed, 28 Apr 2010 02:37:53 -0400", "pubTime": 1272436673, "TinyLink": "http://tinysong.com/d3wI", "SongID": "24447862", "SongName": "Baby (Feat. Ludacris)", "ArtistID": "1118876", "ArtistName": "Justin Bieber", "AlbumID": "4104002", "AlbumName": "My World (Part II);\nhttp://tinysong.com/gQsw", "LongLink": "11578982", "GroovesharkLink": "11578982", "Link": "http://tinysong.com/d3wI" }, { "title": "Feel Good Inc - Gorillaz", "description": "Feel Good Inc by Gorillaz on Grooveshark", "link": "http://listen.grooveshark.com/s/Feel+Good+Inc/1UksmI", "pubDate": "Wed, 28 Apr 2010 02:25:30 -0400", "pubTime": 1272435930 } ] That is the current JSON object I have. I am now trying to iterate through it to get the import stuff like title and link. This is where I am having trouble I cant seem to get to the content that is past the ":" i tried doing dictionary way couldn't get it. def getLastSong(user,limit): base_url = 'http://gsuser.com/lastSong/' user_url = base_url + str(user) + '/' + str(limit) + "/" raw = urllib.urlopen(user_url) json_raw= raw.readlines() json_object = json.loads(json_raw[0]) #filtering and making it look good. gsongs = [] print json_object for song in json_object[0]: print song This code prints all the information before ":" Please help. ignore the Justin Bieber track :)

    Read the article

  • USB Barcode Scanner and WM_KEYDOWN

    - by Bryce Fischer
    I am trying to write a program that will read a barcode scanner. In addition, I need it to read the input even when the application is not the window in focus (i.e., running in the system tray, etc). I found this article, titled Distinguishing Barcode Scanners from the Keyboard in WinForms, that seems to solve the exact problem. It is working pretty well: it detects my device and handles the WM_INPUT message. However, it is checking to see if the RAWINPUT.keyboard.Message is WM_KEYDOWN (0x100), and it never seems to receive this. The only line of code I've altered in the code provided in the article is adding a Console.Out.WriteLine to output the actual values of that message:

        Console.Out.WriteLine("message: {0}", raw.keyboard.Message.ToString("X"));
        if (raw.keyboard.Message == NativeMethods.WM_KEYDOWN)
        {
            ....

    Here is what it outputs:

        message: B
        message: 1000B
        message: 3
        message: 10003
        message: 8
        message: 10008
        message: 3
        message: 10003
        message: 5
        message: 10005
        message: 3
        message: 10003
        message: 8
        message: 10008
        message: 8
        message: 10008
        message: 4
        message: 10004
        message: 9
        message: 10009
        message: 9
        message: 10009
        message: 3
        message: 10003

    The value I'm expecting to receive when this completes correctly is 257232709, which I verified by scanning to Notepad. I don't know if the operating system is relevant here, but I figured I should mention that I'm running this on Windows 7 64-bit with Visual Studio 2010 and .NET Framework 3.5. The scanner is a USB barcode scanner, Symbol LS2208, set up as "HID KEYBOARD EMULATION".

    Read the article

  • Silverlight 4, Out of browser, Printing, Automatic updates

    - by minal
    I have a very critical business application presently running using WinForms. The application is a very core UI shell: it accepts input data, calls a web service on my server to do the computation, displays the results on the WinForms app and finally sends a print stream to the printer. Presently the application is deployed using ClickOnce. Moving forward, I am trying to contemplate whether I should move the application into a Silverlight application. A couple of reasons I am thinking Silverlight:

    - Gives clients the feel that it is a cloud based solution, and it can be accessed from any PC. While the ClickOnce app is able to do this as well, they have to install an app, and when updates are available they have to click "Yes" to update.
    - The application presently has a drop down list of customers; this list has expanded to over 3000 records. Scrolling through the list is very painful. With Silverlight I am thinking of the auto complete ability.
    - Out of the browser - this will be handy for those users who use the app daily.

    I haven't used Silverlight previously, hence I'm looking for some expert advice on a few things:

    - Printing - does Silverlight allow sending raw print data to the printer? The application prints to a Zebra thermal label printer. I have to send raw bytes to the printer with the commands. Can this be done with SL, or will it always prompt the "Print" dialog?
    - Out of browser - when SL apps are installed as out of browser, how do updates come through? Does the app update automatically, or is the user prompted to opt for the update?

    Read the article

  • How can I return json from my WCF rest service (.NET 4), using Json.Net, without it being a string,

    - by Samuel Meacham
    The DataContractJsonSerializer is unable to handle many scenarios that Json.Net handles just fine when properly configured (specifically, cycles). A service method can either return a specific object type (in this case a DTO), in which case the DataContractJsonSerializer will be used, or I can have the method return a string and do the serialization myself with Json.Net. The problem is that when I return a json string as opposed to an object, the json that is sent to the client is wrapped in quotes.

    Using DataContractJsonSerializer, returning a specific object type, the response is:

        {"Message":"Hello World"}

    Using Json.Net to return a json string, the response is:

        "{\"Message\":\"Hello World\"}"

    I do not want to have to eval() or JSON.parse() the result on the client, which is what I would have to do if the json comes back as a string, wrapped in quotes. I realize that the behavior is correct; it's just not what I want/need. I need the raw json; the behavior when the service method's return type is an object, not a string. So, how can I have my method return an object type, but not use the DataContractJsonSerializer? How can I tell it to use the Json.Net serializer instead? Or, is there some way to directly write to the response stream, so I can just return the raw json myself, without the wrapping quotes?

    Here is my contrived example, for reference:

        [DataContract]
        public class SimpleMessage
        {
            [DataMember]
            public string Message { get; set; }
        }

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class PersonService
        {
            // uses DataContractJsonSerializer
            // returns {"Message":"Hello World"}
            [WebGet(UriTemplate = "helloObject")]
            public SimpleMessage SayHelloObject()
            {
                return new SimpleMessage() { Message = "Hello World" };
            }

            // uses Json.Net serialization, to return a json string
            // returns "{\"Message\":\"Hello World\"}"
            [WebGet(UriTemplate = "helloString")]
            public string SayHelloString()
            {
                SimpleMessage message = new SimpleMessage() { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message);
                return json;
            }

            // I need a mix of the two. Return an object type, but use the Json.Net serializer.
        }

    Read the article

  • How to get stack trace information for logging in production when using the GAC

    - by Jonathan Parker
    I would like to get stack trace (file name and line number) information for logging exceptions etc. in a production environment. The DLLs are installed in the GAC. Is there any way to do this? This article says the following about putting PDB files in the GAC:

        You can spot these easily because they will say you need to copy the debug symbols (.pdb file) to the GAC. In and of itself, that will not work.

    I know this article refers to debugging with VS, but I thought it might apply to logging the stack trace also. I've followed the instructions for the answer to this question, except for unchecking "Optimize code", which they said was optional. I copied the DLLs and PDBs into the GAC but I'm still not getting the stack trace information. Here's what I get in the log file for the stack trace:

        OnAuthenticate at offset 161 in file:line:column <filename unknown>:0:0
        ValidateUser at offset 427 in file:line:column <filename unknown>:0:0
        LogException at offset 218 in file:line:column <filename unknown>:0:0

    I'm using NLog. My NLog layout is:

        layout="${date:format=s}|${level}|${callsite}|${identity}|${message}|${stacktrace:format=Raw}"

    ${stacktrace:format=Raw} being the relevant part.

    Read the article

  • How to retain XML string as a string field during XML deserialization

    - by detale
    I got an XML input string and want to deserialize it to an object which partially retains the raw XML.

        <SetProfile>
          <sessionId>A81D83BC-09A0-4E32-B440-0000033D7AAD</sessionId>
          <profileDataXml>
            <ArrayOfProfileItem>
              <ProfileItem>
                <Name>Pulse</Name>
                <Value>80</Value>
              </ProfileItem>
              <ProfileItem>
                <Name>BloodPresure</Name>
                <Value>120</Value>
              </ProfileItem>
            </ArrayOfProfileItem>
          </profileDataXml>
        </SetProfile>

    The class definition:

        public class SetProfile
        {
            public Guid sessionId;
            public string profileDataXml;
        }

    I hope the deserialization syntax looks like:

        string inputXML = "..."; // the above XML
        XmlSerializer xs = new XmlSerializer(typeof(SetProfile));
        using (TextReader reader = new StringReader(inputXML))
        {
            SetProfile obj = (SetProfile)xs.Deserialize(reader);
            // use obj ....
        }

    But XmlSerializer will throw an exception and won't output <profileDataXml>'s descendants into the "profileDataXml" field as a raw XML string. Is there any way to retain the XML string as a string field during deserialization?

    Read the article

  • "no block given" errors with cache_money

    - by emh
    I've inherited a site that in production is generating dozens of "no block given" exceptions every 5 minutes. The top of the stack trace is:

        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:42:in `add'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:33:in `get'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `call'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `fetch'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:31:in `get'

    So it appears that the problem is in the cache_money plugin. Has anyone experienced something similar? I've cut and pasted the relevant code below -- is anyone more familiar with blocks able to discern any obvious problems?

        11 def fetch(keys, options = {}, &block)
        12   case keys
        13   when Array
        14     keys = keys.collect { |key| cache_key(key) }
        15     hits = repository.get_multi(keys)
        16     if (missed_keys = keys - hits.keys).any?
        17       missed_values = block.call(missed_keys)
        18       hits.merge!(missed_keys.zip(Array(missed_values)).to_hash)
        19     end
        20     hits
        21   else
        22     repository.get(cache_key(keys), options[:raw]) || (block ? block.call : nil)
        23   end
        24 end
        25
        26 def get(keys, options = {}, &block)
        27   case keys
        28   when Array
        29     fetch(keys, options, &block)
        30   else
        31     fetch(keys, options) do
        32       if block_given?
        33         add(keys, result = yield(keys), options)
        34         result
        35       end
        36     end
        37   end
        38 end
        39
        40 def add(key, value, options = {})
        41   if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
        42     yield
        43   end
        44 end

    Read the article

  • NinePatchDrawable from Drawable.createFromPath or FileInputStream

    - by Tim H
    I cannot get a nine-patch drawable to work (i.e. it is stretching the nine-patch image) when loading from either Drawable.createFromPath or from a number of other methods using an InputStream. Loading the same nine-patch works fine when loading from resources. Here are the methods I am trying:

        Button b = (Button) findViewById(R.id.Button01);
        Drawable d = null;

        // From Resources (WORKS!)
        d = getResources().getDrawable(R.drawable.test);

        // From Raw Resources (Doesn't Work)
        InputStream is = getResources().openRawResource(R.raw.test);
        d = Drawable.createFromStream(is, null);

        // From Assets (Doesn't Work)
        try {
            InputStream is = getResources().getAssets().open("test.9.png");
            d = Drawable.createFromStream(is, null);
        } catch (Exception e) { e.printStackTrace(); }

        // From FileInputStream (Doesn't Work)
        try {
            FileInputStream is = openFileInput("test.9.png");
            d = Drawable.createFromStream(is, null);
        } catch (Exception e) { e.printStackTrace(); }

        // From File path (Doesn't Work)
        d = Drawable.createFromPath(getFilesDir() + "/test.9.png");

        if (d != null) b.setBackgroundDrawable(d);
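
    For context on why the stream-based paths fall back to plain stretching: the nine-patch metadata lives in a binary chunk that aapt adds when it compiles images under res/drawable, so a source-form .9.png read from assets, a file, or a raw stream typically carries no chunk and decodes as an ordinary bitmap. A commonly suggested workaround, sketched below as an assumption rather than a guaranteed fix, is to decode the bitmap yourself and build a NinePatchDrawable when a chunk is present:

        // Sketch: only works if the bytes being read are an aapt-compiled nine-patch
        // (e.g. an image pulled out of a built APK's res/drawable), not a source .9.png.
        // Assumes android.graphics.* and android.graphics.drawable.* imports.
        Drawable loadNinePatch(Context context, InputStream in) {
            Bitmap bitmap = BitmapFactory.decodeStream(in);
            byte[] chunk = bitmap.getNinePatchChunk();   // null when the PNG was never compiled
            if (chunk != null && NinePatch.isNinePatchChunk(chunk)) {
                return new NinePatchDrawable(context.getResources(), bitmap, chunk, new Rect(), null);
            }
            return new BitmapDrawable(context.getResources(), bitmap); // plain bitmap fallback
        }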

    Read the article

  • Cannot get a session with Facebook app? (using its Graph API)

    - by Jian Lin
    I have a really simple Facebook app of just a few lines, using the new Facebook API:

        <?php

        require 'facebook.php';

        // Create our Application instance.
        $facebook = new Facebook(array(
          'appId'  => '117676584930569',
          'secret' => '**********', // hidden here on the post...
          'cookie' => true,
        ));

        var_dump($facebook);

        ?>

    but http://apps.facebook.com/woolaladev/i2.php is giving me the following output:

        object(Facebook)#1 (6) {
          ["appId:protected"]=> string(15) "117676584930569"
          ["apiSecret:protected"]=> string(32) "**********"   <-- just hidden here on the post
          ["session:protected"]=> NULL                        <-- Session is NULL for some reason
          ["sessionLoaded:protected"]=> bool(false)
          ["cookieSupport:protected"]=> bool(true)
          ["baseDomain:protected"]=> string(0) ""
        }

    Session is NULL for some reason, but I am logged in and can access my home and profile and run other apps on Facebook (to see that I am logged on). I am following the sample on:

    - http://github.com/facebook/php-sdk/blob/master/examples/example.php
    - http://github.com/facebook/php-sdk/blob/master/src/facebook.php (download using raw URL: wget http://github.com/facebook/php-sdk/raw/master/src/facebook.php)

    I'm trying on both hosting companies, dreamhost.com and netfirms.com, and the results are the same.

    Read the article

  • what is wrong in java AES decrypt function?

    - by rohit
    Hi, I modified the code available at http://java.sun.com/developer/technicalArticles/Security/AES/AES_v1.html and made encrypt and decrypt methods in my program, but I am getting a BadPaddingException, and the function is returning null. Why is it happening? What's going wrong? Please help me. These are the variables I am using:

        kgen = KeyGenerator.getInstance("AES");
        kgen.init(128);
        raw = new byte[]{(byte)0x00, (byte)0x11, (byte)0x22, (byte)0x33, (byte)0x44, (byte)0x55, (byte)0x66, (byte)0x77,
                         (byte)0x88, (byte)0x99, (byte)0xaa, (byte)0xbb, (byte)0xcc, (byte)0xdd, (byte)0xee, (byte)0xff};
        skeySpec = new SecretKeySpec(raw, "AES");
        cipher = Cipher.getInstance("AES");
        plainText = null;
        cipherText = null;

    Following is the decrypt function:

        public String decrypt(String cipherText) {
            try {
                cipher.init(Cipher.DECRYPT_MODE, skeySpec);
                byte[] original = cipher.doFinal(cipherText.getBytes());
                plainText = new String(original);
            } catch (BadPaddingException e) {
            }
            return plainText;
        }
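
    Two things stand out from the snippet itself: the empty catch block swallows the BadPaddingException, which is why the method quietly returns null, and decrypt is fed the ciphertext as a String. If the encrypt side (not shown in the question) produced that String with new String(cipher.doFinal(...)), the raw cipher bytes get corrupted by the character-encoding round trip, and BadPaddingException is the usual symptom. A minimal sketch of the round trip through Base64 instead, assuming Java 8's java.util.Base64 and the same 16-byte key as above (an illustration, not the poster's actual encrypt code):

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class AesStringRoundTrip {

            private static final byte[] RAW = {
                (byte)0x00, (byte)0x11, (byte)0x22, (byte)0x33, (byte)0x44, (byte)0x55, (byte)0x66, (byte)0x77,
                (byte)0x88, (byte)0x99, (byte)0xaa, (byte)0xbb, (byte)0xcc, (byte)0xdd, (byte)0xee, (byte)0xff
            };
            private static final SecretKeySpec KEY = new SecretKeySpec(RAW, "AES");

            public static String encrypt(String plainText) throws Exception {
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.ENCRYPT_MODE, KEY);
                byte[] encrypted = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(encrypted); // safe to carry around as a String
            }

            public static String decrypt(String cipherText) throws Exception {
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.DECRYPT_MODE, KEY);
                byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(cipherText));
                return new String(decrypted, StandardCharsets.UTF_8);
            }

            public static void main(String[] args) throws Exception {
                String secret = encrypt("hello world");
                System.out.println(decrypt(secret)); // prints "hello world"
            }
        }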

    Read the article

  • setInterval alternative

    - by spyder
    Hi folks, in my app I am polling the web server for messages every second and displaying them in the frontend. I use setInterval to achieve this. However, as long as the user stays on that page the client keeps polling the server with requests even if there is no data. The server does give an indication when no more messages are being generated, by setting a variable. I thought of using this variable to clearInterval and stop the timer, but that didn't work. What else can I use in this situation? I am using jQuery and Django. Here is my code:

    jQuery:

        var refresh = setInterval(function () {
            var toLoad = '/myMonitor' + ' #content';
            $('#content').load(toLoad).show();
        }, 1000); // refresh every 1000 milliseconds
        });

    html:

        div id=content is here

    I can access the Django variable for completion in the html with each refresh. How can I call clearInterval, if at all?

    Note: Stack Overflow does not let me put in &gt; &lt; so the html is incomplete.

    Read the article

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model.

    I have thought of doing it like rsync/rdiff-backup do it: by having the client open an SSH connection, running a server executable, and communicating across pipes. Another alternative would be to write a server which would handle authentication and communicate with the client via SSL. The reason I have thought of this is that there is "state" information, like how many backup jobs are set up, etc., that must be maintained. Another alternative that I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage.

    Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.

    Read the article

  • Java: Cipher package (encrypt and decrypt). invalid key error

    - by noinflection
    Hello folks, I am writing a class with static methods to encrypt and decrypt a message using javax.crypto. I have 2 static methods that use ecipher and dcipher; in order to do what they are supposed to do, I need to initialize some variables (which are also static). But when I try to use it I get an InvalidKeyException with the parameters I give to ecipher.init(...). I can't find out why. Here is the code:

        private static byte[] raw = {-31, 17, 7, -34, 59, -61, -60, -16, 26, 87, -35, 114, 0, -53, 99, -116,
                                     -82, -122, 68, 47, -3, -17, -21, -82, -50, 126, 119, -106, -119, -5, 109, 98};
        private static SecretKeySpec skeySpec;
        private static Cipher ecipher;
        private static Cipher dcipher;

        static {
            try {
                skeySpec = new SecretKeySpec(raw, "AES");
                // Instantiate the cipher
                ecipher = Cipher.getInstance("AES");
                dcipher = Cipher.getInstance("AES");
                ecipher.init(Cipher.ENCRYPT_MODE, skeySpec);
                dcipher.init(Cipher.DECRYPT_MODE, skeySpec);
            } catch (NoSuchAlgorithmException e) {
                throw new UnhandledException("No existe el algoritmo deseado", e);
            } catch (NoSuchPaddingException e) {
                throw new UnhandledException("No existe el padding deseado", e);
            } catch (InvalidKeyException e) {
                throw new UnhandledException("Clave invalida", e);
            }
        }
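
    One detail worth noting (an observation from the key literal, not from the question text): the raw array holds 32 bytes, i.e. a 256-bit AES key. On stock JREs of that era, Cipher.init rejects AES keys longer than 128 bits with an InvalidKeyException unless the JCE Unlimited Strength policy files are installed. A quick diagnostic sketch:

        // Sketch: prints the strongest AES key the runtime will accept.
        // If this prints 128, a 32-byte key will fail in init(); either install the
        // unlimited-strength policy files or switch to a 16-byte (128-bit) key.
        import javax.crypto.Cipher;

        public class KeyLengthCheck {
            public static void main(String[] args) throws Exception {
                System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES") + " bits");
            }
        }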

    Read the article

  • R plot- SGAM plot counts vs. time - how do I get dates on the x-axis?

    - by Nate
    I'd like to plot this vs. time, with the actual dates (years actually, 1997, 1998... 2010). The dates are in a raw format, ala SAS, days since 1960 (hence the as.date conversion). If I convert the dates using as.date to variable x, and do the GAM plot, I get an error. It works fine with the raw day numbers. But I want the plot to display the years (data are not equally spaced).

        structure(list(site = c(928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L,
                                928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L),
                       date = c(13493L, 13534L, 13566L, 13611L, 13723L, 13752L, 13804L, 13837L, 13927L, 14028L,
                                14082L, 14122L, 14150L, 14182L, 14199L, 16198L, 16279L, 16607L, 16945L, 17545L,
                                17650L, 17743L, 17868L, 17941L, 18017L, 18092L),
                       y = c(7L, 7L, 17L, 18L, 17L, 17L, 10L, 3L, 17L, 24L, 11L, 5L, 5L, 3L, 5L, 14L, 2L, 9L,
                             9L, 4L, 7L, 6L, 1L, 0L, 5L, 0L)),
                  .Names = c("site", "date", "y"), class = "data.frame", row.names = c(NA, -26L))

        sgam1 <- gam(sites$y ~ s(sites$date))
        sgam <- predict(sgam1, se=TRUE)
        plot(sites$date, sites$y, xaxt="n", xlab='Time', ylab='Counts')
        x <- as.Date(sites$date, origin="1960-01-01")
        axis(1, at=1:26, labels=x)
        lines(sites$date, sgam$fit, lty = 1)
        lines(sites$date, sgam$fit + 1.96*sgam$se, lty = 2)
        lines(sites$date, sgam$fit - 1.96*sgam$se, lty = 2)

    ggplot2 has a solution (it doesn't mind the as.date thing) but it gives me other problems...

    Read the article

  • mod_cgi , mod_fastcgi, mod_scgi , mod_wsgi, mod_python, FLUP. I don't know how many more. what is mo

    - by claws
    I recently learnt Python. I liked it. I just wanted to use it for web development. This thought caused all the troubles. But I like these troubles :) Coming from the PHP world, where there is only one standardized way, I expected the same and searched for python & apache.

    http://stackoverflow.com/questions/449055/setting-up-python-on-windows-apache says:

        Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true.

    So what is the equivalent of mod_php in python? I need a little clarification on this one: http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together

    CGI, FastCGI and SCGI are language agnostic. You can write CGI scripts in Perl, Python, C, bash, or even Assembly :). So, I guess mod_cgi, mod_fastcgi, mod_scgi are their corresponding apache modules. Right? WSGI is some kind of optimized/improved version, in short an efficient version specifically designed for the python language only. In order to use it, mod_wsgi is the way to go. Right? This leaves out mod_python. What is it then?

        Apache -> mod_fastcgi -> FLUP (via CGI protocol) -> Django (via WSGI protocol)

    Flup is another way to run with wsgi for any webserver that can speak FCGI, SCGI or AJP. What is FLUP? What is AJP? How did Django come into the picture? These questions raise questions about PHP. How is it actually running? What technology is it using? mod_php & mod_python - what are the differences? In future, if I want to use Perl or Java, will I again have to get confused? Kindly can someone explain things clearly and give a complete picture.

    Read the article
