Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.

  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key to Github from a new machine and this command is not working: sudo apt-get install xclip Reading package lists... Done Building dependency tree Reading state information... Done Package xclip is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package xclip has no installation candidate when I try: sudo aptitude install xclip Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done No candidate version found for xclip No candidate version found for xclip The following partially installed packages will be configured: synaptics-dkms 0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Writing extended state information... Done Setting up synaptics-dkms (1.1.1) ... Loading new synaptics-1.1.1 DKMS files... Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz. File does not exist. dpkg: error processing synaptics-dkms (--configure): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: synaptics-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up synaptics-dkms (1.1.1) ... Loading new synaptics-1.1.1 DKMS files... Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz. File does not exist. dpkg: error processing synaptics-dkms (--configure): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: synaptics-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Any idea how I can install this? Mucho thanks in advance
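
    A hedged sketch of one way out (assumptions: the half-configured synaptics-dkms package is not actually needed on this box, and the maverick "universe" component - where xclip lives - simply isn't enabled in /etc/apt/sources.list):

      # Step 1: clear the broken synaptics-dkms package whose failing post-install
      # script aborts every apt run (assumption: it can be removed safely here):
      sudo dpkg --remove --force-remove-reinstreq synaptics-dkms
      sudo apt-get -f install

      # Step 2: "no installation candidate" usually means the component carrying
      # the package is disabled; uncomment the maverick universe lines in
      # /etc/apt/sources.list, then refresh and retry:
      sudo apt-get update
      sudo apt-get install xclip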

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspxWhat is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it. Focus on functionality Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency. But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java. But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased. No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”). Build software incrementally From this very natural focus of customers and users on functionality and its quality follows we should develop software incrementally. That´s what Agility is about. Deliver small increments quickly and often to get frequent feedback. 
That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1] So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.) The need for abstraction In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants. That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale. We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That´s nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” on an amazon product page for example. There are several reasons for that, I´d say. Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all tools are computationally complete. They are about logic which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree. Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this. Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality. That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That´s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That´s great. 
Except for that they are domain specific languages. They are not general purpose. (And they don´t scale either, I´d say.) Bummer. So to be more precise we need a scalable general purpose tool on a higher than code level of abstraction not neglecting data. Enter: Flow Design. Abstracting functionality using data flows I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in. That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2] Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too. The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhances, transformed. On the top level of a data flow design might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps. That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote. Summary I´ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I´m not using Clojure or F#. And I´m not a async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design. 
    [1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code based increments in functionality.
    [2] No, I´m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That´s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using orchestration for throttling and using context routing. Usually throttling is implemented on the host level (in BizTalk 2010 we can also using the host instance level throttling). Here is demonstrated the throttling with orchestration convoy that slows down message flow from some customers. Sample implements sort of quality service agreement layer for different kind of customers. The sample demonstrates the context routing between orchestrations. It has several advantages over the content routing. For example, we don’t have to create the property schema and promote properties on the schemas; we don’t have to change the message content to change routing. Use case:  The BizTalk application has a main processing orchestration that process all input messages. The application usually works as an OLTP application. Input messages came in random order without peaks, typical scenario for the on-line users. But sometimes the big data batch payloads come. These batches overload processing orchestrations. All processes, activated by on-line users after the payload, come to the same queue and are processed only after the payload. Result is on-line users can see significant delay in processing. It can be minutes or hours, depending of the batch size. Requirements: On-line user’s processing should work without delays. Big batches cannot disturb on-line users. There should be higher priority for the on-line users and the lower priority for the batches. Design: Decision is to divide the message flow in two branches, one for on-line users and second for batches. Branch with batches provides messages to the processing line with low priority, and the on-line user’s branch – with high priority. All messages are provided by hi-speed receive port. BTS.ReceivePortName context property is used for routing. The Router orchestration separates messages sent from on-line users and from the batch messages. But the Router does not use the BizTalk provided value of this property, the Router set up this value by itself. Router uses the content of the messages to decide if it is from on-line users or from batches. The message context property the BTS.ReceivePortName is changed respectively, its value works as a recipient address, as the “To” address for the next recipient orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages with context equal “ToBatch” are filtered up by the BatchBottleneck orchestration. It is a unified convoy orchestration and it throttles the message flow, delaying the message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to the “ToProcess” and sends messages one after another with small delay in between. Delay can be configured in the BizTalk config file as:                 <appSettings>                                 <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/>                 </appSettings>   Of course, messages with context equal “ToProcess” are filtered up by the MainProcess orchestration.   NOTES: Filters with string values: In Orchestrations (the first Receive shape in orchestration) use string values WITH quotes; in Send Ports use string values WITHOUT quotes. Filters on the Send Ports are dynamic; we can change them in run-time. Filters on the Orchestrations are static; we can change them only in design-time. 
    To check for the existence of a promoted property inside an orchestration, use the Expression shape with a construction like this: if (BTS.ReceivePortName exists myMessage) { …; } This is not possible in the Message Assignment shape, because using the “if” statement inside a Message Assignment is prohibited. Several predefined context properties behave in specific ways. MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName, for example, require internal rules to be applied to their format or usage; MessageTracking.* properties require tracking to be enabled, and you can get unexpected run-time errors in some cases. My recommendation is to use a very limited set of the predefined context properties. To “attach” a new promoted property to a message, we have to use correlation; the correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].

    Read the article

  • dynamic multiple instance of swfupload (firefox vs IE)

    - by jean27
    We have this dynamic uploader which creates a new instance of swfupload. I'm a little bit confused with the outputs produced by firefox and ie. Firefox have the same output as chrome, safari and opera. Whenever I clicked a button for adding a new instance, the previous instances of swfupload in firefox refresh while IE don't. I have this debug information: For Firefox: SWF DEBUG OUTPUT IN FIREFOX ---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_0 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512466022 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-1_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. 
Files Queued: 1---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_1 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512476357 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-2_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_1 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1SWF DEBUG: StartUpload: First file in queueSWF DEBUG: StartUpload(): No files found in the queue.SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: 30218. 
Total: 30218SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_1_0 Response Received: true Data: 65-AddClassification.pngSWF DEBUG: Event: uploadComplete : Upload cycle complete. For IE: SWF DEBUG OUTPUT IN IE ---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_0 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512200531 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-1_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: Browse... button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_0 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1---SWFUpload Instance Info--- Version: 2.2.0 2009-03-25 Movie Name: SWFUpload_1 Settings: upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== flash_url: /content/swfupload.swf?preventswfcaching=1272512222093 use_query_string: false requeue_on_error: false http_success: assume_success_timeout: 0 file_post_name: Filedata post_params: [object Object] file_types: .jpg;.gif;.png;.bmp file_types_description: Image Files file_size_limit: 1MB file_upload_limit: 1 file_queue_limit: 1 debug: true prevent_swf_caching: true button_placeholder_id: file-2_swf button_placeholder: Not Set button_image_url: /content/images/blankButton.png button_width: 109 button_height: 22 button_text: Browse... 
button_text_style: color: #000000; font-size: 16pt; button_text_top_padding: 1 button_text_left_padding: 30 button_action: -110 button_disabled: false custom_settings: [object Object] Event Handlers: swfupload_loaded_handler assigned: true file_dialog_start_handler assigned: true file_queued_handler assigned: true file_queue_error_handler assigned: true upload_start_handler assigned: true upload_progress_handler assigned: true upload_error_handler assigned: true upload_success_handler assigned: true upload_complete_handler assigned: true debug_handler assigned: true SWF DEBUG: SWFUpload Init CompleteSWF DEBUG: SWF DEBUG: ----- SWF DEBUG OUTPUT ---- SWF DEBUG: Build Number: SWFUPLOAD 2.2.0 SWF DEBUG: movieName: SWFUpload_1 SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp SWF DEBUG: Parsed File Types: jpg,gif,png,bmp SWF DEBUG: HTTP Success: 0 SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp) SWF DEBUG: File Size Limit: 1048576 bytes SWF DEBUG: File Upload Limit: 1 SWF DEBUG: File Queue Limit: 1 SWF DEBUG: Post Params: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)SWF DEBUG: ExternalInterface reinitializedSWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmpSWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0SWF DEBUG: StartUpload: First file in queueSWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_0_0SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 29151. Total: 29151SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: Total: 30218SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_0_0 Response Received: true Data: 62-Greenwich_-_Branches.pngSWF DEBUG: Event: uploadComplete : Upload cycle complete.

    Read the article

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page of a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: CPU goes up, free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then stops responding; CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server-status), and only after the TimeOut setting (45 in my case) do the processes start to die and the server start responding again. My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe a bit longer to respond to each request and queueing up the rest? It sounds kind of strange to me that anyone running ab can kill a web server on their own just by setting the concurrent connections to the server's MaxClients.
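
    Not an answer, but a hedged sketch of how to watch what those 25 workers are doing while ab runs (assumes mod_status with the default /server-status handler is enabled, and a Debian-style "apache2" process name):

      # Scoreboard snapshot every second: "W" = sending reply, "K" = keepalive,
      # "." = open slot. All slots stuck in "W" until TimeOut would mean the
      # workers are still writing to connections the client already gave up on.
      watch -n1 'curl -s "http://localhost/server-status?auto"'

      # Average resident size of the Apache processes, to sanity-check the
      # 23M-per-process figure behind MaxClients 25:
      ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {print sum/n/1024 " MB avg"}'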

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm running HTTP load-testing benchmarks (using Apache Benchmark and Siege) against a small Java EE 1.7.0 / Tomcat 7.0.26 application on a Debian Squeeze 6.0.4 x64 guest virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml: <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="200000" redirectPort="8443" acceptCount="2000" maxThreads="150" minSpareThreads="50" /> A request to the application takes around 300ms on the server. The app runs well up to a certain number of concurrent connections, such as: ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/ ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/ siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/ Beyond that, a lot of socket connection timeouts happen and the host processor is completely overloaded (while the CPU load inside the VM stays normal). Running htop on the host, I can see the VirtualBox process at around 300% CPU, and it never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one, CPU load stays under 100%.) Restarting Tomcat doesn't help; I'm forced to restart the whole VM. I've tried launching those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit, as explained here: "Running some benchmarks using ab, and tomcat starts to really slow down". So I've modified these TCP parameters: echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse It seems better, but the host CPU still gets overloaded and socket connections still time out at a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.
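
    A small side note with a sketch: the echo commands above only last until the next reboot. To make the same settings persistent (same keys and values as in the question; tcp_tw_recycle in particular is known to break clients behind NAT, so treat it as an experiment):

      # Persist the TCP tweaks across reboots, then reload them:
      printf '%s\n' \
        'net.ipv4.tcp_fin_timeout = 15' \
        'net.ipv4.tcp_keepalive_intvl = 30' \
        'net.ipv4.tcp_tw_recycle = 1' \
        'net.ipv4.tcp_tw_reuse = 1' | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p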

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time, where "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques don't help too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration able to handle this kind of load. Right now we are using two HP DLG 380 servers with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about: how many servers are needed and how powerful they should be (number of processors/cores, size of RAM); what network equipment should be used (what kind of switches, network cards); and any other hardware, like particular disk storage solutions, that is needed. Another question is how to put everything together, i.e. what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).
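
    For what it's worth, the usual back-of-the-envelope sizing for prefork + mod_php boxes follows directly from the 30-50 MB-per-worker figure above; a sketch with made-up numbers (every value below is an assumption to be replaced with real measurements):

      ram_mb=16384        # total RAM of one web node (assumption)
      reserved_mb=4096    # headroom for OS, opcode cache, monitoring, etc. (assumption)
      per_worker_mb=50    # worst case of the 30-50 MB per Apache worker quoted above
      echo $(( (ram_mb - reserved_mb) / per_worker_mb ))  # ~245 workers per node
      # 5000 truly concurrent PHP requests would then need on the order of
      # 5000 / 245 ≈ 20 such web nodes, before even sizing the MySQL tier.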

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my Junit tests on my mac: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:658) at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172) at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138) The same set of unit tests pass perfectly fine on ubuntu and windows. Some information about my system resources on the mac: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 266 virtual memory (kbytes, -v) unlimited $ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode) The reason I dont think this is an application issue is because the same tests pass in different environments. I have tried setting heap to 1024m, 512m and setting the stack to 64k and 128k (and each of these combinations) with no luck. My open files was originally 256 and I have bumped this to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size but that doesnt seem to help. Anyone have anymore ideas? EDIT: Here are is some environment information on my ubuntu box: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited $ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
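
    Not certain, but the line that stands out in the Mac ulimit output above is "max user processes (-u) 266" (the Ubuntu box reports unlimited), and that per-user limit is a commonly cited culprit for "unable to create new native thread" on OS X. A hedged sketch of two things to try before touching heap settings:

      # Raise the per-user process ceiling for the shell that runs the tests
      # (value is illustrative; much larger values may also need the
      # kern.maxproc / kern.maxprocperuid sysctls raised):
      ulimit -u 1024

      # Smaller per-thread stacks let more native threads fit; pass -Xss to
      # whatever JVM launches the tests, e.g. if Maven is the runner (assumption):
      export MAVEN_OPTS="-Xss256k"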

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in performance matrix when i run it with apache benchmark (ab) in my local machine VS production hosted in amazon medium instance. Same concurrent requests (5) and same total number of requests (111) has been run against both. Amazon has better memory than my local machine. But there are 2 CPUs in my local machine vs 1 CPU in m1.medium. My internet speed is very low at the moment, I am getting Transfer rate as 25.29KBps. How can I improve the performance ? Do not know how to interpret Connect, Processing, Waiting and total in ab output. Here is Localhost: Server Hostname: localhost Server Port: 9999 Document Path: / Document Length: 7631 bytes Concurrency Level: 5 Time taken for tests: 1.424 seconds Complete requests: 111 Failed requests: 102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0) Write errors: 0 Total transferred: 860808 bytes HTML transferred: 847155 bytes Requests per second: 77.95 [#/sec] (mean) Time per request: 64.148 [ms] (mean) Time per request: 12.830 [ms] (mean, across all concurrent requests) Transfer rate: 590.30 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.5 0 1 Processing: 14 63 99.9 43 562 Waiting: 14 60 96.7 39 560 Total: 14 63 99.9 43 563 And this is production: Document Path: / Document Length: 7783 bytes Concurrency Level: 5 Time taken for tests: 33.883 seconds Complete requests: 111 Failed requests: 0 Write errors: 0 Total transferred: 877566 bytes HTML transferred: 863913 bytes Requests per second: 3.28 [#/sec] (mean) Time per request: 1526.258 [ms] (mean) Time per request: 305.252 [ms] (mean, across all concurrent requests) Transfer rate: 25.29 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 290 297 14.0 293 413 Processing: 897 1178 63.4 1176 1391 Waiting: 296 606 135.6 588 1171 Total: 1191 1475 66.0 1471 1684
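
    A hedged reading of the three ab columns, for interpretation: Connect is roughly the TCP handshake (one network round trip), Waiting is time to first response byte, and Processing is the whole response. The 290 ms minimum Connect plus the 25 KB/s transfer rate on the EC2 run suggest the slow link between the client and Amazon accounts for most of the gap. One way to check is to take the network out of the picture entirely (hostnames are placeholders):

      # Benchmark the instance against itself over SSH, so only server-side
      # cost is measured:
      ssh user@ec2-host 'ab -n 111 -c 5 http://localhost/'
      # Keep-alive also removes repeated TCP handshakes from the client-side run:
      ab -k -n 111 -c 5 http://ec2-host/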

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which cache via Memcached): 100 concurrent calls to API call (http): 115ms (median) 260ms (max) 100 concurrent calls to API call (https): 6.1s (median) 11.9s (max) I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test.) Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8CPUs and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache. Currently this whole setup is on one EC2 box but I plan to split it up and load balance it. There have been a few questions on this topic but none of those answers (disable expensive ciphers, cache ssl, seem to do anything.) Sample results below: $ ab -k -n 100 -c 100 https://URL This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking URL.com (be patient).....done Server Software: nginx/1.0.15 Server Hostname: URL.com Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256 Document Path: /PATH Document Length: 73142 bytes Concurrency Level: 100 Time taken for tests: 12.204 seconds Complete requests: 100 Failed requests: 0 Write errors: 0 Keep-Alive requests: 0 Total transferred: 7351097 bytes HTML transferred: 7314200 bytes Requests per second: 8.19 [#/sec] (mean) Time per request: 12203.589 [ms] (mean) Time per request: 122.036 [ms] (mean, across all concurrent requests) Transfer rate: 588.25 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 65 168 64.1 162 268 Processing: 385 6096 3438.6 6199 11928 Waiting: 379 6091 3438.5 6194 11923 Total: 449 6264 3476.4 6323 12196 Percentage of the requests served within a certain time (ms) 50% 6323 66% 8244 75% 9321 80% 9919 90% 11119 95% 11720 98% 12076 99% 12196 100% 12196 (longest request)
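
    One extra diagnostic that may help (a sketch; URL.com is the placeholder host from the test above): openssl's built-in s_time benchmark measures raw TLS handshakes per second against the same port, with and without session resumption, which separates "the handshake itself is expensive" from "nginx/Apache are slow behind it":

      # Full handshake on every connection (roughly what ab does without keep-alive):
      openssl s_time -connect URL.com:443 -new -time 10
      # Resumed sessions (what browsers mostly do once session caching works):
      openssl s_time -connect URL.com:443 -reuse -time 10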

    Read the article

  • Autologin 2 Windows users OR Login another user from the desktop

    - by fpdragon
    I'm using two Windows users on my HTPC at the same time: one just for watching videos and one for administration via remote desktop. This setup is quite ideal for me, since Windows can handle multiple concurrent logins with the "RDP concurrent hack" (Google it). The problem is that I want both users to be logged in automatically when the PC starts. It should be possible to watch TV, and the admin user should also be logged in automatically to start my scripts and other tasks, even if I haven't logged in via remote desktop manually. Later, when I want to administer my HTPC, I can just connect to the admin user via RDP without interrupting the video playback on the HTPC's screen, and check my cleanup tasks, downloads, etc. which already ran for this admin user. Right now I have found no solution to automatically log in user A from user B's desktop, and no solution to auto-login both users immediately at startup. As a workaround I have to fire up my other notebook and log in once with the remote user via RDP; from then on the remote admin user runs concurrently with the main user in the background of the machine. The other workaround would be to switch users from the main user to the admin user and back again after startup, but that also requires manual steps. I'm on a Windows 8 system right now, but info for Win7 or XP would also be interesting. Thanks a lot for any ideas. PS: just to prevent useless posts - please don't tell me that only one user can be logged in to Windows. ;)

    Read the article

  • How to implement a virtual server running Ubuntu inside a fileserver in windows?

    - by user541445
    I work at a company with a limited budget. They need a client/server application, and I can code it; I've made small prototype tests that work. The thing is that they only have file servers, and the application must handle concurrent access, so the database must live on their local network (a file server is the only choice). So far I have explored almost every option available, starting with desktop databases: Access (we have a license, but concurrency is just not effective, and besides it's Windows software, yuck); SQLite (nice, but since they manage a lot of data, I performed some concurrency tests doing INSERTs and SELECTs at the same time - it fails, it somehow just stops inserting); OpenOffice Base (I took a Base file apart only to find it was a file-mode HSQLDB - I've tried it, not concurrent at all); etc. (name an open-source desktop database manager and yes, I've tried that one). As for server databases: call me stubborn, but some server databases' documentation says they will work in a file mode. I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc. So I really need a hand here; I've been doing this install/delete/depression cycle with a lot of databases for 3 weeks, until I got a crazy idea and came here to express it. My question comes down to this: would installing lightweight virtualization software like Virtual PC, and then installing a server OS like Ubuntu Server inside it, work? Will it run 24/7, or only while the Virtual PC software is open? Will this work on a file server? Any suggestion, answer, criticism of the place I work, or crazy new concurrent database that will work on a file server is more than welcome.
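
    Before reaching for a full VM, one lighter option may be worth mentioning: several of the engines already tried (H2, Derby, Firebird, PostgreSQL, MySQL) also run as a small TCP server process, which removes the file-share locking problem entirely, provided some always-on Windows machine on the LAN is allowed to run a background process. A sketch with H2 (jar name, port, and database name are assumptions):

      # Start H2 in server mode on whichever always-on machine is available:
      java -cp h2-1.3.170.jar org.h2.tools.Server -tcp -tcpAllowOthers -tcpPort 9092
      # Clients then connect with a network URL instead of a file path, e.g.
      #   jdbc:h2:tcp://<host>:9092/~/companydb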

    Read the article

  • Remote Desktop Services Licensing - Does server have to have a RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS' website that in order to set up multiple users, we needed to install the RDS role. I did so, but now the application we are trying to share is running much slower than it was before. Prior to the role installation, it was taking about 5 seconds to open; now it is taking a few minutes to open -- without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently, changing to another size instance is not an option (it may be possible later if we were to receive enough funding). This leads me to the following questions: 1) Is it a sensible assessment to assume that it is the RDS role is slowing things down, or are there other things that I could look at to speed it up? We are talking about a machine with ~600MB of memory. 2) If I revert back to the pre-RDS role, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowlegeable; but someone else may have some other experience. I am also making it clear that we want to do this in a legitimate way. Thanks in advance for any assistance that can be provided! EDIT: if it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective: not a legal one. I want to know if it is possible to install valid licenses without the RDS role.

    Read the article

  • GooglePlayServicesNotAvailable: GooglePlayServices not available due to error 1

    - by Mathias Lin
    I'm on Galaxy S III with Android 4.0.4, Google Play installed. In my app I try to get a token from the Google Play services, as described on https://developers.google.com/android/google-play-services/authentication. Since it's all quite new (the Google pages were last updated this week), there's not much documentation to be found, especially about each specific error code. final String token = GoogleAuthUtil.getToken(this, "[email protected]", "scope"); gives me an exception: 09-30 11:24:36.075: ERROR/GoogleAuthUtil(11984): GooglePlayServices not available due to error 1 09-30 11:24:36.105: ERROR/AuthTokenCheck_(11984): Error 1 com.google.android.gms.auth.GooglePlayServicesAvailabilityException: GooglePlayServicesNotAvailable at com.google.android.gms.auth.GoogleAuthUtil.f(Unknown Source) at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) at mobi.app.activity.AuthTokenCheck.getAndUseAuthTokenBlocking(AuthTokenCheck.java:148) at mobi.app.activity.AuthTokenCheck$1.doInBackground(AuthTokenCheck.java:61) at android.os.AsyncTask$2.call(AsyncTask.java:264) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) at java.util.concurrent.FutureTask.run(FutureTask.java:137) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569) at java.lang.Thread.run(Thread.java:856) https://developers.google.com/android/google-play-services/reference/com/google/android/gms/auth/package-summary tells me: GooglePlayServicesAvailabilityExceptions are special instances of UserRecoverableAuthExceptions which are thrown when the expected Google Play services app is not available for some reason. But what exactly does that mean? And how to resolve it? I've added the Google Play services extras in my SDK and the jar to my project, marked as 'exported'. I'm also wondering what the "Google Play services app" exactly is. Unfortunately it's all not very clearly described at https://developers.google.com/android/google-play-services/. The Google Play services component is delivered as an APK through the Google Play Store, so updates to Google Play services are not dependent on carrier or OEM system image updates. Newer devices will also have Google Play services as part of the device's system image, but updates are still pushed to these newer devices through the Google Play Store. Isn't "Google Play services" app the same as the "Google Play" app? Another question I have, due to lack of documentation: what is the scope parameter for? The documentation just says the following, but not defining what an 'authentication scope' exactly is: scope String representing the authentication scope.

    Read the article

  • Strange Play Framework 2.2 exceptions after trying to add MySQL / slick

    - by Mike Cialowicz
    I'm working on a Play 2.2 application, and things have gone a bit south on me since I've tried adding my DB layer. Below are my build.sbt dependencies. As you can see I use mysql-connector-java and play-slick: libraryDependencies ++= Seq( jdbc, anorm, cache, "joda-time" % "joda-time" % "2.3", "mysql" % "mysql-connector-java" % "5.1.26", "com.typesafe.play" %% "play-slick" % "0.5.0.8", "com.aetrion.flickr" % "flickrapi" % "1.1" ) My application.conf has some similarly simple DB stuff in it: db.default.url="jdbc:mysql://localhost/myDb" db.default.driver="com.mysql.jdbc.Driver" db.default.user="root" db.default.pass="" This is what it looks like when my Play server starts: [info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000 (Server started, use Ctrl+D to stop and go back to the console...) [info] Compiling 1 Scala source to C:\bbq\cats\in\space [info] play - database [default] connected at jdbc:mysql://localhost/myDb [info] play - Application started (Dev) So, it appears that Play can connect to the MySQL DB just fine (I think). However, I get this exception when I make any request to my server: [error] p.nettyException - Exception caught in Netty java.lang.NoSuchMethodError: akka.actor.ActorSystem.dispatcher()Lscala/concurren t/ExecutionContext; at play.core.Invoker$.<init>(Invoker.scala:24) ~[play_2.10.jar:2.2.0] at play.core.Invoker$.<clinit>(Invoker.scala) ~[play_2.10.jar:2.2.0] at play.api.libs.concurrent.Execution$Implicits$.defaultContext$lzycompu te(Execution.scala:7) ~[play_2.10.jar:2.2.0] at play.api.libs.concurrent.Execution$Implicits$.defaultContext(Executio n.scala:6) ~[play_2.10.jar:2.2.0] at play.api.libs.concurrent.Execution$.<init>(Execution.scala:10) ~[play _2.10.jar:2.2.0] at play.api.libs.concurrent.Execution$.<clinit>(Execution.scala) ~[play_ 2.10.jar:2.2.0] The odd thing is that the 2nd request (to the exact same URL, same controller, no changes) comes back with a different error: [error] p.nettyException - Exception caught in Netty java.lang.NoClassDefFoundError: Could not initialize class play.api.libs.concurr ent.Execution$ at play.core.server.netty.PlayDefaultUpstreamHandler.handleAction$1(Play DefaultUpstreamHandler.scala:194) ~[play_2.10.jar:2.2.0] at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(Pla yDefaultUpstreamHandler.scala:169) ~[play_2.10.jar:2.2.0] at com.typesafe.netty.http.pipelining.HttpPipeliningHandler.messageRecei ved(HttpPipeliningHandler.java:62) ~[netty-http-pipelining.jar:na] at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived (HttpContentDecoder.java:108) ~[netty-3.6.5.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:29 6) ~[netty-3.6.5.Final.jar:na] at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessage Received(FrameDecoder.java:459) ~[netty-3.6.5.Final.jar:na] The URL / controller that I'm requesting just renders a static web page and doesn't do anything of any significance. It was working just fine before I started adding my DB layer. I'm rather stuck. Any help would be greatly appreciated, thanks. I'm using Scala 2.10.2, Play 2.2.0, and MySQL Server 5.6.14.0 (community edition).

    Read the article

  • Chef-solo cannot locate an nginx recipe template

    - by crftr
    I have been recently experimenting with Chef. I thought I would attempt to rebuild my personal web server using chef-solo. It's an AWS instance running the Amazon 64bit Linux AMI. My first objective is to install nginx. I have cloned the Opscode cookbook repository, and am using their nginx cookbook. My problem appears to be that chef-solo cannot find a template after it has started the process. The command I'm using is chef-solo -j /etc/chef/dna.json dna.json { "nginx": { "user": "ec2-user" }, "recipes": [ "nginx" ] } solo.rb file_cache_path "/var/chef-solo" cookbook_path "/var/chef-solo/cookbooks" ...the output [root@ip-10-202-221-135 chef-solo]# chef-solo -j /etc/chef/dna.json /usr/lib64/ruby/gems/1.9.1/gems/systemu-2.2.0/lib/systemu.rb:29: Use RbConfig instead of obsolete and deprecated Config. [Fri, 27 Jan 2012 19:41:36 +0000] INFO: *** Chef 0.10.8 *** [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Setting the run_list to ["nginx"] from JSON [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List is [recipe[nginx]] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List expands to [nginx] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Starting Chef Run for ip-10-202-221-135.ec2.internal [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Running start handlers [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Start handlers complete. [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Missing gem 'mysql' [Fri, 27 Jan 2012 19:41:38 +0000] INFO: Processing package[nginx] action install (nginx::default line 21) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing directory[/var/log/nginx] action create (nginx::default line 23) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxensite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxdissite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[nginx.conf] action create (nginx::default line 38) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/etc/nginx/sites-available/default] action create (nginx::default line 46) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: template[/etc/nginx/sites-available/default] mode changed to 644 [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (nginx::default line 46) has had an error [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (/var/chef-solo/cookbooks/nginx/recipes/default.rb:46:in `from_file') had an error: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `rename' /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `block in mv' /usr/lib64/ruby/1.9.1/fileutils.rb:1515:in `block in fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:1531:in `fu_each_src_dest0' /usr/lib64/ruby/1.9.1/fileutils.rb:1513:in `fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:508:in `mv' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:47:in `block in action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:48:in `block in render_template' /usr/lib64/ruby/1.9.1/tempfile.rb:316:in `open' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:45:in `render_template' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:99:in `render_with_context' 
/usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:39:in `action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource.rb:440:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:45:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block (2 levels) in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `each' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:94:in `block in execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:85:in `step' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:104:in `iterate' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:92:in `execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:76:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:312:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:160:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:192:in `block in run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `loop' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application.rb:67:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/bin/chef-solo:25:in `<top (required)>' /usr/bin/chef-solo:19:in `load' /usr/bin/chef-solo:19:in `<main>' [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Running exception handlers [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Exception handlers complete [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Stacktrace dumped to /var/chef-solo/chef-stacktrace.out [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Errno::ENOENT: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) What am I doing incorrectly?
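
    The ENOENT is raised while moving the rendered file into /etc/nginx/sites-available, which points at the destination directory rather than the template itself: the stock nginx package on the Amazon Linux AMI does not create the Debian-style sites-available/sites-enabled layout that the Opscode recipe writes into. A hedged recipe fragment (paths taken from the error output) that would create those directories before the template resources run:

        # Sketch: make sure the Debian-style directories exist before the templates are written.
        %w[/etc/nginx/sites-available /etc/nginx/sites-enabled].each do |dir|
          directory dir do
            owner     "root"
            group     "root"
            mode      "0755"
            recursive true
            action    :create
          end
        end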

    Read the article

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through Synaptic Package Manager and then installing 9.1, but I still ended up with version 8.4. I then deleted all the files associated with PostgreSQL, and now I am unable to install any version of PostgreSQL. When I try, I get this error: Setting up postgresql-9.1 (9.1.3-1~lucid) ... .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions dpkg: error processing postgresql-9.1 (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of postgresql: postgresql depends on postgresql-9.1; however: Package postgresql-9.1 is not configured yet. dpkg: error processing postgresql (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: postgresql-9.1 postgresql E: Sub-process /usr/bin/dpkg returned an error code (1) Please tell me how to remove PostgreSQL completely so I can install a fresh version.
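
    The ".: 12: Can't open /usr/share/postgresql-common/maintscripts-functions" line means the postinst script is sourcing a file owned by postgresql-common that is no longer on disk, presumably one of the files deleted by hand. A hedged clean-up sketch follows (package names taken from the error output; purging removes configuration, and deleting the data directories destroys any remaining databases):

        # Purge the half-configured packages and their leftovers, then reinstall cleanly.
        sudo apt-get --purge remove postgresql postgresql-9.1 postgresql-common postgresql-client-common
        sudo rm -rf /etc/postgresql /etc/postgresql-common /var/lib/postgresql
        sudo apt-get update
        sudo apt-get install postgresql-9.1   # pulls postgresql-common back in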

    Read the article

  • Can Acrobat 11 be made to do OCR using multiple CPU cores?

    - by tarcman.
    OCR processing takes time, and using multiple CPU cores would speed it up. Acrobat 10 was not a multithreaded application. How about Acrobat 11? Does it do OCR on multiple CPU cores by default (if available)? If not, are there any workarounds, e.g. scripting, to make Acrobat 11 do OCR using multiple CPU cores? This could be either through Acrobat's built-in scripting language or through external scripts that launch multiple single-threaded Acrobat instances and direct them to work in parallel on parts of the processing job. Note: This question is not too localized (not limited to a specific moment in time) because (1) Adobe does not release new major Acrobat versions very often (Acrobat 10 was released two years ago) and (2) Adobe Acrobat is a widely used application.

    Read the article

  • Dedicated GPU in Dell PowerEdge C1100

    - by Eli Gundry
    We recently purchased a Dell PowerEdge C1100 off lease with the intention of using it for graphics processing. We installed an AMD HD 7000 series GPU that runs off of board power, and it sends video to the display. That said, the video is very choppy, leading us to believe that the onboard video is doing all the processing and passing it to the card. Is there any way to either disable the onboard VGA on this server or tell the OS to use only the dedicated card? More info: The server is running RHEL 6.4. The graphics card is running the proprietary AMD drivers. The video card only works in the OS and does not show the BIOS on boot (we know that it's impossible to change this). Any ideas, guys? Update: We are now thinking that the GPU is doing the graphics processing but not working at the full speed of the PCI bus, which is odd, because it is an x16 slot; it is probably optimized for a RAID card (if that makes any sense). Is there any way to remedy the choppy graphics on this server?
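
    Two things seem worth checking, sketched below on the assumption that the proprietary Catalyst (fglrx) driver is what is installed: whether X is actually bound to the discrete card, and what link width the slot negotiated. The BusID value is a placeholder to be replaced with the address lspci reports for the AMD card.

        # Identify the cards and the negotiated PCIe link width (LnkSta line):
        lspci | grep -i vga
        sudo lspci -vv -s 02:00.0 | grep -i lnksta   # bus address is a placeholder

        # /etc/X11/xorg.conf sketch pinning X to the discrete card:
        Section "Device"
            Identifier "DiscreteGPU"
            Driver     "fglrx"
            BusID      "PCI:2:0:0"   # placeholder; use the address reported by lspci
        EndSection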

    Read the article

  • Strange error in SpringMVC Application Startup

    - by Euzel Villanueva
    I'm getting a very strange stack trace when trying to load a SpringMVC application and at a lost to why this is occurring. org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter#0': Cannot create inner bean '(inner bean)' of type [org.springframework.http.converter.xml.XmlAwareFormHttpMessageConverter] while setting bean property 'messageConverters' with key [4]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#6': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.http.converter.xml.XmlAwareFormHttpMessageConverter]: Constructor threw exception; nested exception is java.lang.OutOfMemoryError: Java heap space at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:281) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:125) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:353) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:153) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:442) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:458) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:339) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:306) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1133) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1087) at 
org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:996) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4834) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5155) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5150) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#6': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.http.converter.xml.XmlAwareFormHttpMessageConverter]: Constructor threw exception; nested exception is java.lang.OutOfMemoryError: Java heap space at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:965) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:911) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:270) ... 31 more Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.http.converter.xml.XmlAwareFormHttpMessageConverter]: Constructor threw exception; nested exception is java.lang.OutOfMemoryError: Java heap space at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:141) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:74) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:958) ... 35 more
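
    The root cause buried in that trace is java.lang.OutOfMemoryError: Java heap space while Spring instantiates the message converters, so before digging into the bean definitions it is worth ruling out Tomcat simply running on its default heap. A hedged sketch of raising the limit via setenv.sh (the file is read by catalina.sh if present; the sizes below are placeholders, not recommendations):

        # $CATALINA_BASE/bin/setenv.sh  (create the file if it does not exist)
        export CATALINA_OPTS="$CATALINA_OPTS -Xms256m -Xmx1024m"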

    Read the article

  • Application Engineering and Number of Users

    - by Kramii
    Apart from performance concerns, should web-based applications be built differently according to the number of (concurrent) users? If so, what are the main differences for (say) 4, 40, 400 and 4000 users? I'm particularly interested in how logging, error handling, design patterns, etc. would be used according to the number of concurrent users.

    Read the article

  • Android - doInBackground() error in AsyncTask

    - by AimanB
    What my app here basically does is it captures a photo or import from gallery, and when the Upload button is pressed, the image will be uploaded to a localhost server. Before I implemented AsyncTask into the process, it doesn't have any problem uploading whatsoever. Now that I've put AsyncTask, everything went wrong. I don't know which part that I do wrong in this phase. This is what logcat shows when I try to upload an image file: 10-28 17:23:25.989: E/AndroidRuntime(3356): FATAL EXCEPTION: AsyncTask #5 10-28 17:23:25.989: E/AndroidRuntime(3356): java.lang.RuntimeException: An error occured while executing doInBackground() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$3.done(AsyncTask.java:299) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.setException(FutureTask.java:219) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:239) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.lang.Thread.run(Thread.java:856) 10-28 17:23:25.989: E/AndroidRuntime(3356): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:197) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:111) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast$TN.<init>(Toast.java:324) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.<init>(Toast.java:91) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.makeText(Toast.java:238) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:268) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:1) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$2.call(AsyncTask.java:287) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:234) This is my code for the Upload activity: public class UploadImageActivity extends Activity implements OnItemSelectedListener { InputStream inputStream; private ImageView imageView; String the_string_response; private static final int SELECT_PICTURE = 0; private static final int CAMERA_REQUEST = 1888; private static final String SERVER_UPLOAD_URI = "...myserver.php"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_upload_image); imageView = (ImageView) findViewById(R.id.imgUpload); } public void capturePhoto(View view) { Intent cameraIntent = new Intent( android.provider.MediaStore.ACTION_IMAGE_CAPTURE); File f = new File(android.os.Environment.getExternalStorageDirectory(), "temp.jpg"); cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f)); startActivityForResult(cameraIntent, CAMERA_REQUEST); } public void 
pickPhoto(View view) { // TODO: launch the photo picker Intent intent = new Intent(); intent.setType("image/*"); intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser(intent, "Select Picture"), SELECT_PICTURE); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) { File f = new File(Environment.getExternalStorageDirectory() .toString()); for (File temp : f.listFiles()) { if (temp.getName().equals("temp.jpg")) { f = temp; break; } } try { BitmapFactory.Options bitmapOptions = new BitmapFactory.Options(); Bitmap bitmap = BitmapFactory.decodeFile(f.getAbsolutePath(), bitmapOptions); imageView.setImageBitmap(bitmap); String path = android.os.Environment .getExternalStorageDirectory() + File.separator + "Phoenix" + File.separator + "default"; f.delete(); OutputStream outFile = null; File file = new File(path, String.valueOf(System .currentTimeMillis()) + ".jpg"); try { outFile = new FileOutputStream(file); bitmap.compress(Bitmap.CompressFormat.JPEG, 85, outFile); outFile.flush(); outFile.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } } catch (Exception e) { e.printStackTrace(); } } if (requestCode == SELECT_PICTURE && resultCode == RESULT_OK) { Bitmap bitmap = getPath(data.getData()); imageView.setImageBitmap(bitmap); } } private Bitmap getPath(Uri uri) { String[] projection = { MediaStore.Images.Media.DATA }; Cursor cursor = getContentResolver().query(uri, projection, null, null, null); int column_index = cursor.getColumnIndexOrThrow(projection[0]); cursor.moveToFirst(); String filePath = cursor.getString(column_index); cursor.close(); // Convert file path into bitmap image using below line. Bitmap bitmap = BitmapFactory.decodeFile(filePath); return bitmap; } public void uploadPhoto(View view) { try { executeMultipartPost(); } catch (Exception e) { e.printStackTrace(); } } public void executeMultipartPost() throws Exception { class execMultiPostAsync extends AsyncTask<String, Void, String>{ @Override protected String doInBackground(String... params){ // Choose image here BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable(); Bitmap bitmap = drawable.getBitmap(); ByteArrayOutputStream stream = new ByteArrayOutputStream(); bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream); // compress to // which // format // you want. 
byte[] byte_arr = stream.toByteArray(); String image_str = Base64.encodeBytes(byte_arr); ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); nameValuePairs.add(new BasicNameValuePair("image", image_str)); try { HttpClient httpclient = new DefaultHttpClient(); /* * HttpPost(parameter): Server URI */ HttpPost httppost = new HttpPost(SERVER_UPLOAD_URI); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); HttpResponse response = httpclient.execute(httppost); the_string_response = convertResponseToString(response); } catch (Exception e) { Toast.makeText(UploadImageActivity.this, "ERROR " + e.getMessage(), Toast.LENGTH_LONG).show(); System.out.println("Error in http connection " + e.toString()); } return the_string_response; } @Override protected void onPostExecute(String result) { super.onPostExecute(result); Toast.makeText(UploadImageActivity.this, "Response " + result, Toast.LENGTH_LONG) .show(); } public String convertResponseToString(HttpResponse response) throws IllegalStateException, IOException { String res = ""; StringBuffer buffer = new StringBuffer(); inputStream = response.getEntity().getContent(); int contentLength = (int) response.getEntity().getContentLength(); // getting // content // lengt Toast.makeText(UploadImageActivity.this, "contentLength : " + contentLength, Toast.LENGTH_LONG).show(); if (contentLength < 0) { } else { byte[] data = new byte[512]; int len = 0; try { while (-1 != (len = inputStream.read(data))) { buffer.append(new String(data, 0, len)); // converting to // string and // appending to // stringbuffer } } catch (IOException e) { e.printStackTrace(); } try { inputStream.close(); // closing the stream } catch (IOException e) { e.printStackTrace(); } res = buffer.toString(); // converting stringbuffer to string Toast.makeText(UploadImageActivity.this, "Result : " + res, Toast.LENGTH_LONG).show(); // System.out.println("Response => " + // EntityUtils.toString(response.getEntity())); } return res; } } execMultiPostAsync exec = new execMultiPostAsync(); exec.execute(); } } Can someone please check if I put the AsyncTask task correctly in this activity? I think I've made a mistake somewhere.
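
    The "Can't create handler inside thread that has not called Looper.prepare()" line comes from the Toast.makeText call inside doInBackground (UploadImageActivity.java:268): Toast can only be created on the UI thread, while doInBackground runs on a worker thread. A hedged sketch of the usual split is below; uploadImage() is a hypothetical helper standing in for the existing HTTP code with every Toast call removed, and all user-visible messages move to onPostExecute, which does run on the UI thread. Wrapping a Toast in runOnUiThread(...) is the alternative if something must be shown mid-task.

        // Sketch: background work only computes a result; Toasts happen on the UI thread.
        class ExecMultiPostAsync extends AsyncTask<String, Void, String> {
            @Override
            protected String doInBackground(String... params) {
                try {
                    return uploadImage();   // hypothetical helper: the HTTP upload, no Toast calls
                } catch (Exception e) {
                    return "ERROR " + e.getMessage();
                }
            }

            @Override
            protected void onPostExecute(String result) {
                // Safe here: onPostExecute is invoked on the UI thread.
                Toast.makeText(UploadImageActivity.this, result, Toast.LENGTH_LONG).show();
            }
        }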

    Read the article

  • difference between success and failed event in auditd/aureport

    - by user112358132134
    The aureport command has two options that limit the list of displayed events to those that were successful and those that failed. Per the man page: --failed Only select failed events for processing in the reports. The default is both success and failed events. --success Only select successful events for processing in the reports. The default is both success and failed events. What does this mean? Is the failure/success with regard to the actual event (e.g., a syscall that returned non-zero) or does the failure/success apply to auditd and whether or not there was an issue in processing the event?
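
    As far as the audit record format goes, success refers to the outcome of the audited operation itself, i.e. the success=yes/no field on SYSCALL records, not to whether auditd had any trouble processing the event. A couple of hedged examples of slicing the data both ways:

        # Report summaries restricted to one outcome or the other
        aureport --failed --summary
        aureport --success --summary

        # The underlying field is visible in the raw records (success=yes/no on SYSCALL lines)
        ausearch -m SYSCALL -sv no -i | head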

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >