Search Results

Search found 2092 results on 84 pages for 'rss feeds'.


  • Can you help me fix my broken packages?

    - by Andreas Hartmann
    I would like to upgrade from 13.04 to 13.10, but some broken packages are preventing upgrade success: grep Broken /var/log/dist-upgrade/apt.log output: Broken libwayland-client0:amd64 Conflicts on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (< 1.1.0) Broken libunity9:amd64 Breaks on unity-common [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 > ( gnome ) (< 7.1.2) Broken cups-filters:amd64 Conflicts on ghostscript-cups [ amd64 ] < 9.07~dfsg2-0ubuntu3.1 > ( text ) Broken libpam-systemd:amd64 Conflicts on libpam-xdg-support [ amd64 ] < 0.2-0ubuntu2 > ( admin ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ amd64 ] < 0.9.13-1 > ( libs ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ i386 ] < 0.9.13-1 > ( libs ) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ amd64 ] < 6.90.2daily13.04.05-0ubuntu1 > ( gnome ) (< 7.0.7) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ i386 ] < none > ( none ) (< 7.0.7) Broken libaccount-plugin-generic-oauth:amd64 Conflicts on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libaccount-plugin-generic-oauth:amd64 Breaks on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libmutter0b:amd64 Breaks on libmutter0a [ amd64 ] < 3.6.3-0ubuntu2 > ( libs ) Broken python3-aptdaemon.pkcompat:amd64 Breaks on libpackagekit-glib2-14 [ amd64 ] < 0.7.6-3ubuntu1 > ( libs ) (<= 0.7.6-4) Broken apache2:amd64 Conflicts on apache2.2-common [ amd64 ] < 2.2.22-6ubuntu5.1 > ( httpd ) Broken chromium-codecs-ffmpeg-extra:amd64 Conflicts on chromium-codecs-ffmpeg [ amd64 ] < 28.0.1500.71-0ubuntu1.13.04.1 -> 29.0.1547.65-0ubuntu2 > ( universe/web ) Broken unity-scope-home:amd64 Conflicts on unity-lens-shopping [ amd64 ] < 6.8.0daily13.03.04-0ubuntu1 > ( gnome ) Broken libsnmp30:amd64 Breaks on libsnmp15 [ amd64 ] < 5.4.3~dfsg-2.7ubuntu1 > ( libs ) Broken apache2.2-bin:amd64 Breaks on gnome-user-share [ amd64 ] < 3.0.4-0ubuntu1 > ( gnome ) (< 3.8.0-2~) Broken libgjs0d:amd64 Conflicts on libgjs0c [ amd64 ] < 1.34.0-0ubuntu1 > ( libs ) Broken unity-gtk2-module:amd64 Conflicts on appmenu-gtk [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken lib32asound2:amd64 Depends on libasound2 [ amd64 ] < 1.0.25-4ubuntu3.1 -> 1.0.27.2-1ubuntu6 > ( libs ) (= 1.0.25-4ubuntu3.1) Broken unity-gtk3-module:amd64 Conflicts on appmenu-gtk3 [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken activity-log-manager:amd64 Conflicts on activity-log-manager-common [ amd64 ] < 0.9.4-0ubuntu6.2 > ( utils ) Broken libgtksourceview-3.0-0:amd64 Depends on libgtksourceview-3.0-common [ amd64 ] < 3.6.3-0ubuntu1 -> 3.8.2-0ubuntu1 > ( libs ) (< 3.7) Broken icaclient:amd64 Depends on lib32asound2 [ amd64 ] < 1.0.25-4ubuntu3.1 > ( libs ) Broken libunity-core-6.0-5:amd64 Depends on unity-services [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 -> 7.1.2+13.10.20131014.1-0ubuntu1 > ( gnome ) (= 7.0.0daily13.06.19~13.04-0ubuntu1) Broken libbamf3-1:amd64 Depends on bamfdaemon [ amd64 ] < 0.4.0daily13.06.19~13.04-0ubuntu1 -> 0.5.1+13.10.20131011-0ubuntu1 > ( libs ) (= 0.4.0daily13.06.19~13.04-0ubuntu1) Broken apache2-bin:amd64 Conflicts on apache2.2-bin [ amd64 ] < 2.2.22-6ubuntu5.1 -> 2.4.6-2ubuntu2 > ( httpd ) (< 2.3~) Output for cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list # deb cdrom:[Ubuntu 13.04 _Raring Ringtail_ - Release amd64 (20130424)]/ raring main restricted # See http://help.ubuntu.com/community/UpgradeNotes 
for how to upgrade to # newer versions of the distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://de.archive.ubuntu.com/ubuntu/ raring universe deb http://de.archive.ubuntu.com/ubuntu/ raring-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://de.archive.ubuntu.com/ubuntu/ raring multiverse deb http://de.archive.ubuntu.com/ubuntu/ raring-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu raring-security main restricted deb http://security.ubuntu.com/ubuntu raring-security universe deb http://security.ubuntu.com/ubuntu raring-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu raring partner # deb-src http://archive.canonical.com/ubuntu raring partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu raring main # deb-src http://extras.ubuntu.com/ubuntu raring main # deb http://linux.dropbox.com/ubuntu precise main output for sudo dpkg -l | grep -e "^iU" -e "^rc": rc ibm-lotus-cae 8.5.2-20100805.0821 i386 IBM Lotus Composite Application Editor rc ibm-lotus-cae-nl1 8.5.2-20100805.0821 i386 IBM Lotus CAE NL1 rc ibm-lotus-feedreader 8.5.2-20100805.0821 i386 Feeds for IBM Lotus Notes 8.5.2 rc ibm-lotus-feedreader-nl1 8.5.2-20100805.0821 i386 IBM Lotus Feed Reader NL1 rc ibm-lotus-notes 8.5.2-20100805.0821 i386 IBM Lotus Notes rc ibm-lotus-notes-core-de 8.5.2-20100805.0821 i386 IBM Lotus Notes Native German (de) rc ibm-lotus-notes-nl1 8.5.2-20100805.0821 i386 IBM Lotus Notes Java NL1 rc ibm-lotus-sametime 8.5.2-20100805.0821 i386 IBM Lotus Sametime rc ibm-lotus-symphony 8.5.2-20100805.0821 i386 IBM Lotus Symphony rc ibm-lotus-symphony-nl1 8.5.2-20100805.0821 i386 IBM Lotus Symphony NL1 rc libapache2-mod-php5filter 5.4.9-4ubuntu2.2 amd64 server-side, HTML-embedded scripting language (apache 2 filter module) rc libavcodec53:amd64 6:0.8.6-1ubuntu2 amd64 Libav codec library rc libavutil51:amd64 6:0.8.6-1ubuntu2 amd64 Libav utility library rc libmotif4:amd64 2.3.3-7ubuntu1 amd64 Open Motif - shared libraries rc linux-image-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP
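
    A rough, hedged sketch of the cleanup sequence commonly suggested for this kind of dependency tangle — this is not an accepted answer, and which packages to purge is an assumption based on the "rc" (removed, config-files-only) entries shown in the dpkg output above:

        # Let apt try to repair the dependency state first
        sudo apt-get update
        sudo apt-get -f install
        sudo apt-get dist-upgrade

        # Purge packages stuck in the "rc" state (removed, config files left behind),
        # e.g. the old IBM Lotus and linux-image-3.8.0-25 entries listed above
        sudo dpkg -l | awk '/^rc/ {print $2}' | xargs -r sudo dpkg --purge

        # Retry the release upgrade once apt reports no broken packages
        sudo do-release-upgrade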

    Read the article

  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is from a web administrator's perspective fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, .htaccess files providing Expires: headers and the like. An associated site demoes various CGI scripts that fall under the category of "and other creations"; the site as a whole has the purpose of sharing my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update RSS feed and the "starting point" link on the home page, has served that purpose fairly well. I looked around here on web hosting, and found the note on web host reccommendations as a good note for "What are some of people's favorite web hosts overall," but I wanted to ask a more focused question of "What are the best web hosts for criteria XYZ:" I am looking at a VPS so I will have root, be able to install stuff and edit Apache's config files etc., running Gentoo or other Linux, BSD, or the like. I would like a system that is secure enough that the host's vulnerabilities are mostly the ones that come along with what I am trying to do: that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1. I would like good uptime/reliability and competent support staff: if the level 1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them. Ideally I would like a site hosted within some place that will have low latency for U.S. visitors in particular. I would like a hosting solution that will be with a stable business, one that will probably be around, and one unlikely to vanish without warning. With those things specified, I would be interested in knowing what are the less expensive options. (I expect that some of the things I've specified will knock out all of the cheapest options, but I'm still interested in price.) With all that stated, I'd like to back up a bit and look at whether I am asking the right question. I am concerned that the above is a very good way of asking, "How can I keep my site in line with the wave of the past?" I am wondering if it might be specifically wiser to look to adapt my site to newer technologies instead of trying to keep it on older technologies. For instance, while I would hardly portray my site as a way to show off the full power of Google App Engine, the main site at least should be a straightforward port if I were to do that. And beyond Google App Engine, my knowledge of cloud solutions is basic. If it is a better and more future-proof solution to port my site to another kind of solution, I would be interested in knowing where those future-proof solutions lie. So I would be interested in wisdom. If the question I asked in detail is still a good question to be asking, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try there? Any thoughts would be appreciated.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post. We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (Chart in the original post.) The chart illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (Chart in the original post.) The second chart is another view of the same data as the prior chart, just with the axes changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
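
    The author's actual implementation is the linked C# console application. Purely as an illustration of the same block-and-commit idea, here is a minimal shell sketch against the Blob service REST API (Put Block followed by Put Block List), assuming a pre-generated SAS URL with write access; the account/container names are placeholders, and the per-block MD5 validation and throttling described above are omitted:

        BLOB="https://myaccount.blob.core.windows.net/mycontainer/bigfile.bin"   # hypothetical account/container
        SAS="sv=...&sig=..."                                                     # pre-generated SAS token (assumption)

        split -b 1m -d -a 6 bigfile.bin chunk_        # 1MB blocks, per the findings above
        blocklist=""
        i=0
        for f in chunk_*; do
            id=$(printf '%06d' "$i" | base64)         # fixed-length, URL-safe block IDs
            curl -s -X PUT --data-binary "@$f" "$BLOB?comp=block&blockid=$id&$SAS" &
            blocklist="$blocklist<Latest>$id</Latest>"
            i=$((i + 1))
        done
        wait                                          # crude parallelism; a real tool would throttle

        # Commit the uploaded blocks in order
        curl -s -X PUT --data-binary \
            "<?xml version=\"1.0\" encoding=\"utf-8\"?><BlockList>$blocklist</BlockList>" \
            "$BLOB?comp=blocklist&$SAS"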

    Read the article

  • nginx + php-fpm cycle redirection error on linode new vps

    - by chifliiiii
    I'm new to nginx, and I'm trying to make my first server run. I followed this guide as I'm trying to use it for a multisite wordpress site. After installing everything, I get a 500 Internal server error. If I check logs, I see this: 012/09/27 08:55:54 [error] 11565#0: *8 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "www.mydomain.com" 2012/09/27 08:59:32 [error] 11618#0: *1 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /phpmyadmin HTTP/1.1", host: "www.mydomain.com" My conf files are the following: nano /etc/nginx/sites-available/mydomain.com server { listen 80 default_server; server_name mydomain.com *.mydomain.com; root /srv/www/aciup.com/public; access_log /srv/www/mydomain.com/log/access.log; error_log /srv/www/mydomain.com/log/error.log; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # Directives to send expires headers and turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } # this prevents hidden files (beginning with a period) from being served location ~ /\. { access_log off; log_not_found off; deny all; } # Pass uploaded files to wp-includes/ms-files.php. rewrite /files/$ /index.php last; if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } # Rewrite multisite '.../wp-.*' and '.../*.php'. if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } location ~ \.php$ { client_max_body_size 25M; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } nano /etc/nginx/nginx.conf user www-data; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 2048; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; keepalive_timeout 5; tcp_nodelay on; server_tokens off; gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Any help will be appreciated.
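
    The log lines say "server: localhost", so the requests may not even be reaching the server block shown above. A few generic checks, offered as a sketch rather than a confirmed fix (note also that the config sets root /srv/www/aciup.com/public while the logs live under /srv/www/mydomain.com):

        # See which vhosts are actually enabled; a leftover "default" site often wins
        ls -l /etc/nginx/sites-enabled/

        # Confirm which server block claims the name and that the syntax is valid
        grep -R "server_name" /etc/nginx/sites-enabled/
        sudo nginx -t

        # Verify the document root really exists and contains index.php;
        # otherwise try_files keeps falling through and can loop
        ls -ld /srv/www/*/public
        ls /srv/www/mydomain.com/public/index.php

        # Apply changes
        sudo service nginx reload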

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we’re heading home, so it is time to lean back and reflect on the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: It was the biggest OOW ever. Parallel to the conference the America’s Cup took place in San Francisco and the Oracle Team America won. Amazing job by the team and again congratulations from our side. Back to the conference. The main topics for us are: Oracle SOA / BPM Suite 12c Adaptive Case Management (ACM) Big Data Fast Data Cloud Mobile Below we will go into a little more detail on the key takeaways regarding these points: Oracle SOA / BPM Suite 12c During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c have been introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location Enhanced REST support by introducing a new REST binding Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!) Introduction of templates (OSB pipelines, component templates, BPEL activities templates) EM as a single monitoring console OSB design-time integration into JDeveloper (Really great!) Enterprise modeling capabilities in BPM Composer These are only a few points from what is coming with 12c. We are really looking forward to the new release coming out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues its way – very impressive. Adaptive Case Management Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (will be available in 11g with the next BPM Suite MLR Patch), the roadmap and the differences from traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested. Big Data Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, the handling of these data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge is here: the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, have been presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle’s new M6-32, have been announced. The performance improvements, when using one of these hardware components in connection with the improved software solutions, were really impressive. For more details about this, please take a look at our previous blog post.
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of: Oracle Big Data Appliance that is preconfigured with Hadoop Oracle Exadata, which stores huge amounts of data efficiently to achieve optimal query performance Oracle Exalytics as a fast and scalable business analytics system Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be directly implemented in Hadoop using “R”. In addition, Oracle BI tools can be used to analyze the data. Fast Data Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time. Therefore these patterns must be identified efficiently and with high performance. To do so, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” were filtered, and after a defined threshold had been reached in a certain timeframe, a business event was triggered. Cloud Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced: Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Software as a Service (SaaS) Using the IaaS approach, only the infrastructure components will be managed in the Cloud. Customers will be very flexible regarding memory, storage or number of CPUs because those parameters can be adjusted elastically. The PaaS approach means that besides the infrastructure, the platforms (such as databases or application servers) necessary for running applications will also be provided within the Cloud. Here customers can also decide if installation and management of these infrastructure components should be done by Oracle. The SaaS approach is the most complete one, in that all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services. In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very, very interesting. We collected a lot of helpful information for our projects. The new innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012Popular Releasesmenu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. 
Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comExt Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsAny-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.BabyCloudDrives - the multi cloud drive desktop's application: wpf ????BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely.. CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add, "New Version Available!" 
functionality to your CodePlex project.Collect: ????????!CSVManager: CSV??CSV?????,????CSV??,??????Exam Project: My Exam Project. Computer Vision, C and OpenCV-FTP: Hey guys thanks for checking out my ftp!Haushaltsbuch: 1ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.MyJabbr: MyJabbr netduinoscope: Design shield and software to use netduino as oscilloscopeNetSurveillance Web Application: Net Surveillance Web ApplicationNiconicoApiHelper: ????API?????????OStega: A simple library for encrypt text into an bmp or png image.OURORM: ormTFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!WinRT LineChart: An attempt at creating an usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, as you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the amount of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.Photo: Erik Araujo, stock.xchng

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today’s CIO job description:  ‘’Align IT infrastructure and solutions with business goals and objectives ; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow.”   Sound like an unachievable request? The fact is, reality dictates that CIO’s are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large Insurance company. This is a role that we see all too infrequently in many of our customers, and it’s a shame.  The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and more importantly that objectives were commonly understood - hence service and projects were delivered to time, to budget and actually solved the business problems. In reality IT and the business are already married, but the relationship is most often defined as ‘supplier’ of IT rather than a ‘trusted partner’. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The Business cannot compete if they do not get a new product to market ahead of the competition, or for example act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the Application or Service fails and the Business takes a hit by bad publicity, being trending topics on social media and losing direct revenue from online channels. For this reason alone Business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester’s recent study that found ‘many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations’.  IT Meet the Business; Business Meet IT So what is going on? We talk about aligning the business with IT but the reality is it’s difficult to do. Like any relationship each side has different goals and needs and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way?  Well now we can with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2! Enterprise Manager’s Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria.  No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT’s impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar.  Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There are a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won’t’ rehash them here.  What I want to talk about is what you do with these products. What’s next after we meet? Where do you start? Step 1:  Identify the Signature Experience. 
This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that a customer recognises the company brand for, reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this it gets easy as Enterprise Manager 12c makes it easy. Step 2:  Map the Signature Experience to underlying IT.  Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end, think of it like mapping out a critical path; the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end to end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPI’s) and thresholds to enable you to report and action upon. Step 3:  Observe the data over time.  You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT.  Watch if for a few months, see what happens and reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.  Step 4:  Change the information about which you and the business communicate.  It’s amazing what happens when you and the business speak the same language.  You’ll be able to make more informed business and IT decisions. From here IT can identify where/how budget is spent whether on the level of support, performance, capacity, HA, DR, certification etc. IT SLA’s no longer need be focused on metrics such as %availability but structured around business process requirements. The power of this way of thinking doesn’t end here. IT staff get to see and understand how their own role contributes to the business making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or show back becomes easier to justify as the ‘cost of day per outage’ can be more easily calculated; the business will be able to translate the cost to the business to the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler to a harmonious relationship between the end customer the business and IT to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned to enable you to implement this new way of working.  

    Read the article

  • How to begin? Windows 8 Development

    - by Dennis Vroegop
    Ok. I convinced you in my last post to do some Win8 development. You want a piece of that cake, or whatever your reasons may be. Good! Welcome to the club! Now let me ask you a question: what are you going to write? Ah. That’s the big one, isn’t it? What indeed? If you have been creating applications for computers before, you’re in for quite a shock. The way people perceive apps on a tablet is quite different from what we know as applications. There’s a reason we call them apps instead of applications! Yes, technically they are applications but we don’t call them apps only because it sounds cool. The abbreviated form of the word applications itself is a pointer. Apps are small. Apps are focused. Apps are more lightweight. Apps do one thing but they do that one thing extremely well. In the ‘old’ days we wrote huge systems. We built ecosystems of services, screens, databases and more to create a system that provides value for the user. Think about it: what application do you use most at work? Can you in one sentence describe what it is, or what it does and yet still distinctively describe its purpose? I doubt you can. Let’s have a look at Outlook. We all know it and we all love or hate it. But what is it? A mail program? No, there’s so much more there: calendar, contacts, RSS feeds and so on. Some call it a ‘collaboration’ application but that’s not really true either. After all, why should a collaboration application give me my schedule for the day? I think the best way to describe Outlook is “client for Exchange” although that isn’t accurate either. Anyway: Outlook is a great application but it’s not an ‘app’ and therefore not very suitable for WinRT. Ok. Disclaimer here: yes, you can write big applications for WinRT. Some will. But that’s not what 99.9% of the developers will do. So I am stating here that big applications are not meant for WinRT. If 0.01% of the developers think that this is nonsense then they are welcome to go ahead, but for the majority here this is not what we’re talking about. So: Apps are small, lightweight and good at what they do, but only at that. If you’re a Phone developer you already know that: Phone apps on any platform fit the description I have above. If you’ve ever worked in a large corporation before you might have seen one of these before: the Mission Statement. It’s supposed to be a one-liner that sums up what the company is supposed to do. Funnily enough: although this doesn’t work for large companies, it does work for defining your app. A mission statement for an app describes what it does. If it doesn’t fit in the mission statement then your app is going to get too big and will fail. A statement like this should be in the following style: “<your app name> is the best app to <describe single task>”. Fill in the blanks, write it and go! Mmm.. not really. There are some things there we need to think about. But the statement is a very, very important one. If you cannot fit your app in that line you’re preparing to fail. Your app will become too big, its purpose will be unclear and it will be hard to use. People won’t download it, and those who do will give it a bad rating, thereby preventing that huge success you’ve been dreaming about. Stick to the statement! Ok, let’s give it a try: “PlanesAreCool” is the best app to do planespotting in the field. You might have seen these people along runways of airports: taking photographs of airplanes and noting down their numbers and arrival and departure times. We are going to help them out with our great app!
If you look at the statement, can you guess what it does? I bet you can. If you find out it isn’t clear enough of if it’s too broad, refine it. This is probably the most important step in the development of your app so give it enough time! So. We’ve got the statement. Print it out, stick it to the wall and look at it. What does it tell you? If you see this, what do you think the app does? Write that down. Sit down with some friends and talk about it. What do they expect from an app like this? Write that down as well. Brainstorm. Make a list of features. This is mine: Note planes Look up aircraft carriers Add pictures of that plane Look up airfields Notify friends of new spots Look up details of a type of plane Plot a graph with arrival and departure times Share new spots on social media Look up history of a particular aircraft Compare your spots with friends Write down arrival times Write down departure times Write down wind conditions Write down the runway they take Look up weather conditions for next spotting day Invite friends to join you for a day of spotting. Now, I must make it clear that I am not a planespotter nor do I know what one does. So if the above list makes no sense, I apologize. There is a lesson: write apps for stuff you know about…. First of all, let’s look at our statement and then go through the list of features. Remove everything that has nothing to do with that statement! If you end up with an empty list, try again with both steps. Note planes Look up aircraft carriers Add pictures of that plane Look up airfields Notify friends of new spots Look up details of a type of plane Plot a graph with arrival and departure times Share new spots on social media Look up history of a particular aircraft Compare your spots with friends Write down arrival times Write down departure times Write down wind conditions Write down the runway they take Look up weather conditions for next spotting day Invite friends to join you for a day of spotting. That's better. The things I removed could be pretty useful to a plane spotter and could be fun to write. But do they match the statement? I said that the app is for spotting in the field, so “look up airfields” doesn’t belong there: I know where I am so why look it up? And the same goes for inviting friends or looking up the weather conditions for tomorrow. I am at the airfield right now, looking through my binoculars at the planes. I know the weather now and I don’t care about tomorrow. If you feel the items you’ve crossed out are valuable, then why not write another app? One that says “SpotNoter” is the best app for preparing a day of spotting with my friends. That’s a different app! Remember: Win8 apps are small and very good at doing ONE thing, and one thing only! If you have made that list, it’s time to prepare the navigation of your app. The navigation is how users see your app and how they use it. We’ll do that next time!

    Read the article

  • Nginx Multiple Domains

    - by showFocus
    I am trying to add a second virtual host to nginx. When i go to the new domain it redirects to the old one. I have tried restarting Nginx, rebooting the server. Has anyone come across this before, care to share? File: nginx.conf ### user www-data www-data; worker_processes 4; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 5; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /usr/local/nginx/sites-enabled/*; } File: ../sites-enabled/domain1.co.uk server { listen 80; server_name www.domain1.co.uk; rewrite ^/(.*) http://domain1.co.uk/$1 permanent; } server { listen 80; server_name domain1.co.uk; access_log /home/me/public_html/domain1.co.uk/log/access.log; error_log /home/me/public_html/domain1.co.uk/log/error.log; location / { root /home/me/public_html/domain1.co.uk/public/; index index.php index.html; # WordPress supercache & permalinks. include /usr/local/nginx/conf/wordpress_params.super_cache; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain1.co.uk/public/$fastcgi_script_name; } } File: ../sites-enabled/domain2.co.uk server { listen 80; server_name www.domain2.co.uk; rewrite ^/(.*) http://domain2.co.uk/$1 permanent; } server { listen 80; server_name domain2.co.uk; access_log /home/me/public_html/domain2.co.uk/log/access.log; error_log /home/me/public_html/domain2.co.uk/log/error.log; location / { root /home/me/public_html/domain2.co.uk/public/; index index.php index.html; # Basic version of WordPress parameters, supporting nice permalinks. # include /usr/local/nginx/conf/wordpress_params.regular; # Advanced version of WordPress parameters supporting nice permalinks and WP Super Cache plugin include /usr/local/nginx/conf/wordpress_params.super_cache; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain2/public/$fastcgi_script_name; } }
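
    A plausible explanation for the redirect, hedged since only the config is visible: if the domain2 file never actually gets loaded, requests for domain2.co.uk fall through to the first server block nginx knows about, and that block is domain1's www rewrite, which is exactly the redirect being observed. A quick sanity check, with an explicit catch-all so an unmatched Host header fails loudly instead of silently landing on domain1:

        # Confirm the file is really picked up by the include in nginx.conf,
        # then test and reload:
        #   ls -l /usr/local/nginx/sites-enabled/domain2.co.uk
        #   nginx -t && nginx -s reload    (or a full restart on older builds)
        #
        # Optional catch-all; "default_server" is spelled "default" on nginx < 0.8.21.
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # close the connection for hosts no server block claims
        }

    If the catch-all starts answering for domain2.co.uk, the vhost file is not being included; if domain2 starts working, the original problem was most likely stale configuration that a reload fixes.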

    Read the article

  • Setting environment variables in OS X

    - by Percival Ulysses
    Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel that it is valuable nonetheless. The problem to set up environment variables such that they are available for GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me because it was not reliable, and bash style globbing wasn't available. Another solution is the use of Login Hooks with a suitable shell script, but these are deprecated. The Apple approved way for such functionality as provided by login hooks is the use of Launch Agents. I provided a launch agent that is located in /Library/LaunchAgents/: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>user.conf.launchd</string> <key>Program</key> <string>/Users/Shared/conflaunchd.sh</string> <key>ProgramArguments</key> <array> <string>~/.conf.launchd</string> </array> <key>EnableGlobbing</key> <true/> <key>RunAtLoad</key> <true/> <key>LimitLoadToSessionType</key> <array> <string>Aqua</string> <string>StandardIO</string> </array> </dict> </plist> The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl: #! /bin/bash #filename="$1" filename="$HOME/.conf.launchd" if [ ! -r "$filename" ]; then exit fi eval $(/usr/libexec/path_helper -s) while read line; do # skip lines that only contain whitespace or a comment if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then continue; fi eval launchctl $line done <"$filename" exit 0 Notice the call of path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like that setenv PATH ~/Applications:"${PATH}" setenv TEXINPUTS .:~/Documents/texmf//: setenv BIBINPUTS .:~/Documents/texmf/bibtex//: setenv BSTINPUTS .:~/Documents/texmf/bibtex//: # Locale setenv LANG en_US.UTF-8 These are launchctl commands, see its manpage for further information. Works fine for me (I should mention that I'm still a Snow Leopard guy), GUI applications such as texstudio can see my local texmf tree. Things that can be improved: The shell script has a #filename="$1" in it. This is not accidental, as the file name should be feeded to the script by the launch agent as an argument, but that doesn't work. It is possible to put the script in the launch agent itsself. I am not sure how secure this solution is, as it uses eval with user provided strings. It should be mentioned that Apple intended a somewhat similar approach by putting stuff in ~/launchd.conf, but it is currently unsupported as to this date and OS (see the manpage of launchd.conf). I guess that things like globbing would not work as they do in this proposal. Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question, I still hope it is useful.
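
    For anyone adopting this setup, a small usage sketch may help; the plist file name below is hypothetical (use whatever name you saved the agent under), and the launchctl subcommands are the same ones the script itself relies on:

        # Install the agent and make the helper script executable (assumed names):
        sudo cp user.conf.launchd.plist /Library/LaunchAgents/
        chmod +x /Users/Shared/conflaunchd.sh

        # Load it into the current Aqua session instead of logging out and back in,
        # then check that a variable from ~/.conf.launchd actually reached launchd:
        launchctl load -S Aqua /Library/LaunchAgents/user.conf.launchd.plist
        launchctl getenv PATH

    Newly launched GUI applications should then inherit the variables; applications that were already running keep their old environment until restarted.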

    Read the article

  • I have a NGINX server configured to work with node.js, but many times a file of 1.03MB of js is not loaded by various browser and various pc

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server use the node.js server to serve static files, so it must pass throught node.js to download the files, but that is not a problem when I'm not using the nginx. In chrome with debugger on I can see that the status is: 206 - partial content and it only has downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status failed. Waiting time: 6ms Receiving: 1.1 min The headers in google chrom: Request URL:http://192.168.1.16/production/assembly/script/production.js Request Method:GET Status Code:206 Partial Content Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE Host:192.168.1.16 If-Range:"1081715-1350053827000" Range:bytes=16090-16090 Referer:http://192.168.1.16/production/assembly/ User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Response Headersview source Accept-Ranges:bytes Cache-Control:public, max-age=0 Connection:keep-alive Content-Length:1 Content-Range:bytes 16090-16090/1081715 Content-Type:application/javascript Date:Mon, 15 Oct 2012 09:18:50 GMT ETag:"1081715-1350053827000" Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT Server:nginx/1.1.19 X-Powered-By:Express My nginx configurations: File 1: user totty; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt; error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## autoindex on; include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*; } File that is included by the previous file: server { # custom location for entry # using only "/" instead of "/production/assembly" it # would allow you to go to "thatip/". In this way # we are limiting to "thatip/production/assembly/" location /production/assembly/ { # ip and port used in node.js proxy_pass http://127.0.0.1:3000/; } location /production/assembly.mongo/ { proxy_pass http://127.0.0.1:9000/; proxy_redirect off; } location /production/assembly.logs/ { autoindex on; alias /home/totty/web/production01_server/node_modules/production/_logs/; } }
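
    One way to narrow this down (a hedged diagnostic, not a fix): take node.js out of the loop for that one file and let nginx serve it directly. If the full 1.03 MB then arrives reliably, the truncation is happening in the node static-file handler or in the proxy hop, not in the browser or the LAN. The alias path below is an assumption; point it at wherever production.js actually lives on disk:

        # Temporary diagnostic block inside the same server { } that proxies to node:
        location /production/assembly/script/ {
            alias /home/totty/web/production01_server/node_modules/production/script/;  # assumed path
            expires off;
        }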

    Read the article

  • website connection reset on first load

    - by Tar
    i'm using nginx with php-cgi. lately a problem has arose where if you don't view my site for a while, like 3-4 minutes, and then open it again, the first request you send will return connection reset by peer in the browser. if you refresh, operation is normal for all subsequent requests. this happens every time and it isn't just an isolated incident, it happens to everyone using my site. i've tried to restart nginx and php-cgi but to no avail. does anyone know what the problem could be? i can provide whatever information necessary. it's worth noting that there's nothing in error log besides that message about client closing the connection early. nginx.conf user nobody; worker_processes 4; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 2048; } http { include /etc/nginx/mime.types; error_page 404 /404.html; error_page 403 /403.html; error_page 444 /444.html; error_page 502 /502.html; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; large_client_header_buffers 8 8k; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 30; server_tokens off; gzip on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 64 8k; gzip_min_length 1024; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /etc/nginx/conf.d/*.conf; } default.conf server { listen 80; server_name domain.com; error_log /var/log/nginx/error.log debug; access_log /var/log/nginx/access.log; location / { if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; } if ($http_user_agent ~* Havij|hvj|acunetix|wget|HTtrack) { return 403; } root /home/admin06/public_html; autoindex off; index index.php; # Images and static content is treated different location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /home/admin06/public_html; } location /nginx_status { stub_status on; access_log off;] deny all; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.*)$; #try_files $uri =404; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/site/public_html$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 60; fastcgi_read_timeout 60; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } location ~ error_log { deny all; } location ~ access_log { deny all; } location ~ \.cgi { deny all; } location ~ \.db { deny all; } }

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    CodePlex Daily Summary for Wednesday, September 05, 2012Popular ReleasesDesktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: HOW TO RUN THE TEXT EDITOR Download the HPAD ARCHIVED FILES which is in .rar format Extract using Winrar Make sure that extracted files are in the same folder Double-Click on HPAD.exe application fileTelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. This release is all about performance and fixes Support: "Or" and "Does Not contain" filter options Improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover Add EntityFramework examples in ExampleWebApplication Many other improvements and fixes Fix invalid cast on CustomAggregates Support for ...ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build serverJavaScript Grid: Release 09-05-2012: Release 09-05-2012xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta This release provides a test runner plugin for dotCover 2.1 beta, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: You do NOT need this if you also have ReSharper and the existing 0.6.1 release installed. DotCover will use ReSharper's test runners if available. This release includes th...B INI Sharp Library: B INI Sharp Library v1.0.0.0 Realsed: The frist realsedActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. 
New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsA Simple Eng-Hindi CMS: A simple English- Hindi dual language content management system for small business/personal websites.Active Social Migrator: This project for managing the Active Social migration tool.ANSI Console User Control: Custom console control for .NET WinformsAutoSPInstallerGUI: GUI Configuration Tool for SPAutoInstaller Codeplex ProjectCode Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks if c# code is documented the way it's configured in the config of the policy. Code Dojo/Kata - Free Time Coding: Doing some katas of the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCataloguefjycUnifyShow: fjycUnifyShowHidden Capture (HC): HC is simple and easy utility to hidden and auto capture desktop or active windowHRC Integration Services: Fake SQL Server Integration Services. LOLKooboo CMS Sites Switcher: Kooboo CMS Sites SwitcherMod.CookieDetector: Orchard module for detecting whether cookies are enabledMyCodes: Created!MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo random number. 
Net Core Tech - Medical Record System: This is a Medical Record System ProjectOraPowerShell: PowerShell library for backup and maintenance of a Oracle Database environment under Microsoft Windows 2008PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatioPyrogen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.restMs: wil be deleted soonScript.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.SpringExample-Pagination: Simple Spring example with PaginationXNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • RedHat 5.5 server does not show per processor memory utilization

    - by Mike S
    I have been searching all over internet but not finding any leads. I have a system with a memory leak that I am trying to troubleshoot. Unfortunately I am not able to see per processor memory utilization. Here are the outputs of TOP and PS commands. Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00 Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 10348 744 620 S 0.0 0.0 0:05.65 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.05 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 % ps -auxf | sort -nr -k 4 | head -10 Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ xfs 6205 0.0 0.0 23316 3892 ? Ss Aug19 0:00 xfs -droppriv -daemon uuidd 6101 0.0 0.0 60976 224 ? Ss Aug19 0:00 /usr/sbin/uuidd USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND smmsp 6130 0.0 0.0 57900 1784 ? Ss Aug19 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue rpc 5126 0.0 0.0 8052 632 ? Ss Aug19 0:00 portmap root 99 0.0 0.0 0 0 ? S< Aug19 0:00 [events/1] root 98 0.0 0.0 0 0 ? S< Aug19 0:00 [events/0] root 97 0.0 0.0 0 0 ? S< Aug19 0:00 [watchdog/31] root 96 0.0 0.0 0 0 ? SN Aug19 0:00 [ksoftirqd/31] root 95 0.0 0.0 0 0 ? S< Aug19 0:00 [migration/31] Any help with this is appretiate.
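
    Assuming "per processor" here really means per-process memory, the sort pipe above is awkward because the BSD-style header line gets sorted into the output; the procps tools on this box (3.2.7, as the warning shows) can sort for you instead:

        # Resident memory, largest consumers first:
        ps aux --sort=-rss | head -15
        # Virtual size instead of RSS:
        ps aux --sort=-vsz | head -15
        # Inside top, Shift+M re-sorts the live process list by %MEM.

    With 0 KB of swap in use and 32141044k shown as cached, most of the "used" memory is very likely just page cache, so it is worth confirming there really is a leak before hunting for the offending process.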

    Read the article

  • configuring uppercut for automated build

    - by deepasun
    This is my cc.net's config file. http://confluence.public.thoughtworks.org/display/CCNET/Configuration+Preprocessor -- -- -- <!-- PROJECT STRUCTURE --> <cb:define name="WindowsFormsApplication1"> <project name="$(projectName)"> <workingDirectory>$(working_directory)\$(projectName)</workingDirectory> <artifactDirectory>$(drop_directory)\$(projectName)</artifactDirectory> <category>$(projectName)</category> <queuePriority>$(queuePriority)</queuePriority> <triggers> <intervalTrigger name="continuous" seconds="60" buildCondition="IfModificationExists" /> </triggers> <sourcecontrol type="svn"> <executable>c:\program files\subversion\bin\svn.exe</executable> <!--<trunkUrl>http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1</trunkUrl>--> <trunkUrl>$(svnPath)</trunkUrl> <workingDirectory>$(working_directory)\$(projectName)</workingDirectory> </sourcecontrol> <tasks> <exec> <executable>$(working_directory)\$(projectName)\build.bat</executable> </exec> </tasks> <publishers> <merge> <files> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\*.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\mbunit\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\nunit\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ncover\*-results.xml</file> <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ndepend\*.xml</file> </files> </merge> <!--<email from="[email protected]" mailhost="smtp.somewhere.com" includeDetails="TRUE"> <users> <user name="YOUR NAME" group="BuildNotice" address="[email protected]" /> </users> <groups> <group name="BuildNotice" notification="change" /> </groups> </email>--> <xmllogger/> <statistics> <statisticList> <firstMatch name="Svn Revision" xpath="//modifications/modification/changeNumber" /> <firstMatch name="ILInstructions" xpath="//ApplicationMetrics/@NILInstruction" /> <firstMatch name="LinesOfCode" xpath="//ApplicationMetrics/@NbLinesOfCode" /> <firstMatch name="LinesOfComment" xpath="//ApplicationMetrics/@NbLinesOfComment" /> </statisticList> </statistics> <modificationHistory onlyLogWhenChangesFound="true" /> <rss/> </publishers> </project> </cb:define> <cb:WindowsFormsApplication1 projectname="WindowsFormsApplication1" queuepriority="80" svnpath="http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1" /> It is not producing the build directory in code_drop, but updating reports.xml with updated build.. wht is the problem?

    Read the article

  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although depending on the application that fails it can be different. This has happened for weeks now and is getting worse. This is what troubles me: It never seems to impact critical parts of the system (no BSOD, no freeze). Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 or IE9 cannot download anything bigger than 3MB without failing, Windows Update fails, all msi installers fail, Visual Studio 2010 starts failing in weird manners... It only happens after a while using it (typically 3 hours, but it seems that installing a program or compiling several times makes it shorter) Rebooting solves it (temporarily). The system: The OS is Windows 7 Pro Spanish SP1, 32 bits The system is an HP Compaq 6000 Pro with 4 GB memory (only 3.4GB usable since the system is 32bit), one 500GB hard drive. Installed applications include: Visual Studio 2010, SQL Server 2008 R2, VMWare Workstation 7, Microsoft Security Essentials, Office 2010. Shutting down all related services and processes doesn't seem to change anything. The diagnostics I've run so far: Hard drive : 465GB, 165GB free Process Explorer : physical and virtual memory seem ok (pagefile is 5.3GB, physical memory usage 70%, system commit 39%) Windows Memory diagnostic tool: OK CHKDSK returned: 488282111 KB total disk space. 281668248 KB in 265779 files. 150188 KB in 62949 indexes. 0 KB in bad sectors. 571755 KB in use by the system. The log file has occupied 65536 kilobytes. 205891920 KB available on disk. For non-spanish speakers, that means all ok. SMART diagnostic tools (DiskCheckup) report all values normal. temperatures are in the normal range (HWinfo). The event viewer doesn't seem to contain any significant message. ran CCleaner 3, without any noticeable effect. I was thinking about some file number limit (between Visual Studio projects and other applications, there are around 300.000 files on the hard drive), but I couldn't find any. It's possible there is something related with the use of the temporary folders (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. Only thing I cannot find out is if chkdsk reporting 65MB for the log is normal. It seems since Vista it always reports this. Any other cleaning/diagnostic tool you might know of? Edit: I ran several other tools since I first published the question: Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK. Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD's ok. Microsoft Desktop Heap Monitor: Desktop Heap Information Monitor Tool (Version 8.1.2925.0) Copyright (c) Microsoft Corporation. All rights reserved. 
Session ID: 1 Total Desktop: ( 46464 KB - 11 desktops) WinStation\Desktop Heap Size(KB) Used Rate(%) WinSta0\Winlogon (s1) 128 3.6 WinSta0\Disconnect (s1) 64 3.8 WinSta0\Default (s1) 20480 3.0 msswindowstation\mssrestricteddesk (s0) 1024 0.2 __X78B95_89_IW__A8D9S1_42_ID (s0) 1024 0.2 Service-0x0-3e5$\Default (s0) 1024 0.6 Service-0x0-3e4$\Default (s0) 1024 0.3 Service-0x0-3e7$\Default (s0) 1024 2.1 WinSta0\Winlogon (s0) 128 1.9 WinSta0\Disconnect (s0) 64 3.8 WinSta0\Default (s0) 20480 0.0 All ok, desktop heap usage < 5% Edit 2: I tried totally resetting my account by creating a new one, logging under this new one and delete the first one (local rights and files), then logging back with this deleted account (it is a domain account). No luck. Also, I found out often the error is "not enough storage is available to process this command". Searching on the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever it is) which did not change anything.

    Read the article

  • nginx + php-fpm - where are my $_GET params?

    - by egis
    I have a strange problem here. I just moved from apache + mod_php to nginx + php-fpm. Everything went fine except this one problem. I have a site, let's say example.com. When I access it like example.com?test=get_param $_SERVER['REQUEST_URI'] is /?test=get_param and there is a $_GET['test'] also. But when I access example.com/ajax/search/?search=get_param $_SERVER['REQUEST_URI'] is /ajax/search/?search=get_param yet there is no $_GET['search'] (there is no $_GET array at all). I'm using Kohana framework. which routes /ajax/search to controller, but I've put phpinfo() at index.php so I'm checking for $_GET variables before framework does anything (this means that disapearing get params aren't frameworks fault). My nginx.conf is like this worker_processes 4; pid logs/nginx.pid; events { worker_connections 1024; } http { index index.html index.php; autoindex on; autoindex_exact_size off; include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; error_log logs/error.log debug; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 2; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include sites-enabled/*; } and example.conf is like this server { listen 80; server_name www.example.com; rewrite ^ $scheme://example.com$request_uri? permanent; } server { listen 80; server_name example.com; root /var/www/example/; location ~ /\. { return 404; } location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi_params; } location ~* ^/(modules|application|system) { return 403; } # serve static files directly location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ { access_log off; expires 30d; } } fastcgi_params is like this fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_param QUERY_STRING $query_string; fastcgi_param PATH_INFO $fastcgi_path_info; What is the problem here? By the way there are few more sites on the same server, both Kohana based and plain php, that are working perfectly.
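
    A commonly suggested tweak for exactly this symptom (hedged, and not verified against this particular setup) is to pass the query string along explicitly when try_files falls back to the front controller, so the rewritten request still carries something for fastcgi_param QUERY_STRING to forward:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

    If that has no effect, dumping $_SERVER['QUERY_STRING'] next to the phpinfo() call will at least show whether the parameters are lost before or after they reach PHP.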

    Read the article

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website. I've been mostly following one guide, and I've done more or less everything it recommends, except for not setting up PHP-FPM to use a Unix Socket and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name): /etc/nginx/nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-enabled/subdomain.mydomain.net server { listen 80; # listen for IPv4 listen [::]:80; # listen for IPv6 server_name www.subdomain.mydomain.net subdomain.mydomain.net; access_log /srv/www/subdomain.mydomain.net/logs/access.log; error_log /srv/www/subdomain.mydomain.net/logs/error.log; location / { root /srv/www/subdomain.mydomain.net/public; index index.php; } location ~ \.php$ { try_files $uri =400; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name; } } All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. Although, the only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users. What can I do to resolve my issue? Please let me know if I need to supply any additional details.
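
    Looking only at the posted config, one likely culprit stands out: root is declared inside location / but not at the server level, so in the location ~ \.php$ block try_files checks $uri against nginx's compiled-in default root, never finds the script, and returns the =400 fallback, which is precisely the status being reported. A hedged rework (same paths as in the question) that hoists root up and uses the more conventional =404:

        server {
            listen 80;
            listen [::]:80;
            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            root /srv/www/subdomain.mydomain.net/public;
            index index.php;

            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }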

    Read the article

  • Rendering Flickr Cats Via Backbone.js

    - by Geertjan
    Create a JavaScript file and refer to it inside an HTML file. Then put this into the JavaScript file: (function($) {     var CatCollection = Backbone.Collection.extend({         url: 'http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?',         parse: function(response) {             return response.items;         }     });     var CatView = Backbone.View.extend({         el: $('body'),         initialize: function() {             _.bindAll(this, 'render');             carCollectionInstance.fetch({                 success: function(response, xhr) {                     catView.render();                 }             });         },         render: function() {             $(this.el).append("<ul></ul>");             for (var i = 0; i < carCollectionInstance.length; i++) {                 $('ul', this.el).append("<li>" + i + carCollectionInstance.models[i].get("description") + "</li>");             }         }     });     var carCollectionInstance = new CatCollection();     var catView = new CatView(); })(jQuery); Apologies for any errors or misused idioms. It's my second day with Backbone.js, in fact, my second day with JavaScript. I haven't seen anywhere online so far where an example such as the above is found, though plenty that do kind of or pieces of the above, or explain in text, without an actual full example. The next step, and the only reason for the above experiment, is to create some JPA entities and expose them via RESTful webservices created on EJB methods, for consumption into an HTML5 application via a Backbone.js script very similar to the above. 

    Read the article

  • Optimizing apache server load

    - by Jevgeni Smirnov
    We have an issue with a dedicated server load. We have 16 processors with 4 core @ 2.40GHz, if I understood correctly cat /proc/cpuinfo output. Unfortunately, I don't have access to free -m or vmstat. But from top I got that we have 24 GB. And snapshot from top about processes: As far as I see, memory is not used at all. But the cpu is used heavily. Apache consumes most of CPU. Another useful piece of information: Every 1.0s: ps u -C httpd,mysqld,php Tue Mar 27 10:48:19 2012 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 7476 0.0 0.1 446808 37880 ? SNs Mar06 0:43 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf mysql 36061 41.6 2.1 1113672 529876 ? SNl Feb20 21503:48 /opt/zone/sbin/mysqld --basedir=/opt/zone --datadir=/srvdata/mysql --user=mysql --log-error=/srvdata/mysql/dn79.err --pid-file=/srvdata/mysql/mysqld.pid --socket=/tmp/mysql.sock --port=3306 root 37257 0.0 0.0 424056 16840 ? SNs Mar22 1:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 52743 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52744 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52745 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52746 0.0 0.1 447100 30360 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52747 0.0 0.1 446956 30324 ? SN 10:40 0:00 /opt/zone/sbin/httpd -D SSL -D SLOT_ID0 -f /etc/opt/zone/apache/ssl_httpd.conf http 52980 69.1 1.8 852468 458088 ? RN 10:41 5:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 53483 47.0 0.8 615088 221040 ? RN 10:43 2:05 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 53641 1.8 0.2 446580 54632 ? SN 10:45 0:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54384 81.2 0.9 625828 229972 ? RN 10:45 2:14 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54411 47.7 0.5 535992 142416 ? RN 10:45 1:09 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54470 41.7 0.4 512528 120012 ? RN 10:46 0:54 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54475 0.1 0.1 437016 41528 ? SN 10:46 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54486 1.5 0.2 445636 53916 ? SN 10:46 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54531 2.5 0.2 445424 53012 ? SN 10:46 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54549 0.0 0.0 424188 9188 ? SN 10:46 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54642 0.0 0.0 424188 9200 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54651 0.0 0.0 424188 9188 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54661 0.0 0.0 424188 9208 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54663 6.9 0.2 449936 58560 ? SN 10:47 0:03 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54666 6.0 0.2 453356 61124 ? SN 10:47 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54667 2.8 0.1 437608 42088 ? 
SN 10:47 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54670 1.5 0.1 437540 42172 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54672 2.1 0.1 439076 43648 ? SN 10:47 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54709 0.0 0.0 424188 9192 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54711 1.0 0.1 437284 41780 ? SN 10:47 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54712 11.8 0.2 448172 54700 ? SN 10:47 0:02 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54720 0.0 0.0 424188 9192 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54721 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54747 9.1 0.2 443568 51848 ? SN 10:48 0:01 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54782 1.8 0.1 438708 37896 ? RN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54784 0.0 0.0 424188 9180 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54785 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54789 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54790 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54791 0.0 0.0 424188 9188 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 http 54792 0.0 0.0 424056 8352 ? SN 10:48 0:00 /opt/zone/sbin/httpd -f /etc/opt/zone/apache/httpd.conf -D SLOT_ID0 Webalizer shows following: What can be done in the following situation? The application is Magento.

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a Wordpress based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is a Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site which is currently using Apache. Anyone know how I can get this to work on Nginx? URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + Wordpress + W3 Total Cache Nginx Config: server { # Server Parameters listen 127.0.0.1:8082; server_name dev.iloveskydiving.org; root /var/www/dev.iloveskydiving.org/html; access_log /var/www/dev.iloveskydiving.org/logs/access.log main; error_log /var/www/dev.iloveskydiving.org/logs/error.log error; index index.php; # Rewrite minified CSS and JS files location ~* \.(css|js) { if (!-f $request_filename) { rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; expires max; } } # Set a variable to work around the lack of nested conditionals set $cache_uri $request_uri; # Don't cache uris containing the following segments if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $cache_uri "no cache"; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") { set $cache_uri 'no cache'; } # Use cached or actual file if they exists, otherwise pass request to WordPress location / { try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args; } # Cache static files for as long as possible location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { try_files $uri =404; expires max; access_log off; } # Deny access to hidden files location ~* /\.ht { deny all; access_log off; log_not_found off; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # port where FastCGI processes were spawned } } Fast CGI Params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; UPDATE: Upon further digging, it looks like Nginx is generating the 404 and 
PHP-FPM is executing the script properly and returning a 200. UPDATE: Here are the contents of the script: <?php /** * Connect to Wordpres */ require(dirname(__FILE__) . '/../../../../wp-blog-header.php'); /** * Define temporary array */ $aaData = array(); $aaData['aaData'] = array(); /** * Execute Query */ $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) ); foreach ($query->posts as $post) { array_push( $aaData['aaData'], array( $post->post_title ) ); } /** * Echo JSON encoded array */ echo json_encode($aaData);
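
    One hedged explanation worth ruling out: wp-blog-header.php does not just bootstrap WordPress, it also runs the main query and the template loader, and since /wp-content/plugins/ils-workflow/lib/getTable.php matches no WordPress route, WordPress itself sets a 404 status header before the JSON is echoed. Loading wp-load.php instead pulls in the WordPress API without doing any routing. A sketch of the script with that one change (plus an explicit content type):

        <?php
        /**
         * Connect to WordPress without running the main query / template loader,
         * so no 404 status header gets emitted for this non-WordPress URL.
         */
        require(dirname(__FILE__) . '/../../../../wp-load.php');

        header('Content-Type: application/json');

        /**
         * Build the DataTables payload.
         */
        $aaData = array('aaData' => array());

        $query = new WP_Query(array('post_type' => 'post', 'posts_per_page' => -1));
        foreach ($query->posts as $post) {
            $aaData['aaData'][] = array($post->post_title);
        }

        echo json_encode($aaData);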

    Read the article

  • Nginx .zip files return 404

    - by Kenley Tomlin
    I have set up Nginx as a reverse proxy for Node and to serve my static files and user uploaded images. Everything is working beautifully except that I can't understand why Nginx can't find my .zip files. Here is my nginx.conf. user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; proxy_cache_path /var/www/web_cache levels=1:2 keys_zone=ooparoopaweb_cache:8m max_size=1000m inactive=600m; sendfile on; upstream *******_node { server 172.27.198.66:8888 max_fails=3 fail_timeout=20s; #fair weight_mode=idle no_rr } upstream ******_json_node { server 172.27.176.57:3300 max_fails=3 fail_timeout=20s; } server { #REDIRECT ALL HTTP REQUESTS FOR FRONT-END SITE TO HTTPS listen 80; server_name *******.com www.******.com; return 301 https://$host$request_uri; } server { #MOBILE APPLICATION PROXY TO NODE JSON listen 3300 ssl; ssl_certificate /*****/*******/json_ssl/server.crt; ssl_certificate_key /*****/******/json_ssl/server.key; server_name json.*******.com; location / { proxy_pass http://******_json_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #******.COM FRONT-END SITE PROXY TO NODE WEB SERVER listen 443 ssl; ssl_certificate /***/***/web_ssl/********.crt; ssl_certificate_key /****/*****/web_ssl/myserver.key; server_name mydomain.com www.mydomain.com; add_header Strict-Transport-Security max-age=500; location / { gzip on; gzip_types text/html text/css application/json application/x-javascript; proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #ADMIN SITE PROXY TO NODE BACK-END listen 80; server_name admin.mydomain.com; location / { proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { # SERVES STATIC FILES listen 80; listen 443 ssl; ssl_certificate /**/*****/server.crt; ssl_certificate_key /****/******/server.key; server_name static.domain.com; access_log static.domain.access.log; root /var/www/mystatic/; location ~*\.(jpeg|jpg|png|ico)$ { gzip on; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype image/png image/jpeg application/zip; expires 10d; add_header Cache-Control public; } location ~*\.zip { #internal; add_header Content-Type "application/zip"; add_header Content-Disposition "attachment; filename=gamezip.zip"; } } } include tcp.conf; Tcp.conf contains settings that allow Nginx to proxy websockets. I don't believe anything contained within it is relevant to this question. 
I also want to add that I want the zip files to be a forced download.
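
    A hedged debugging block for the static server, assuming the archives really do live under /var/www/mystatic/ at the same path as the request URI; it makes the lookup explicit, anchors the regex, and forces a download without hard-coding every file to the same gamezip.zip name:

        location ~* \.zip$ {
            root /var/www/mystatic/;      # be explicit instead of relying on inheritance
            try_files $uri =404;          # if this still 404s, the file is not where nginx looks
            add_header Content-Disposition "attachment";
        }

    If the 404 persists, the error log will show the exact filesystem path nginx tried to open, which usually settles whether it is a path problem or a location-matching problem.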

    Read the article

  • Unusually high memory usage on a CentOS VPS with 512 guaranteed RAM

    - by Andrei Bârsan
    I'm working on a medium-sized web application written in PHP that's running on a VPS with 512mb ram. The webapp hasn't been officially launched yet, so there isn't too much traffic going on, just me and a few other people working on it. There is another slightly smaller webapp also hosted on this machine, among 4-5 other small static sites. We are running Centos 5 32-bit & cPanel/WHM. This is the result of running ps aux and, as you can see, it's not using 100% of the RAM. However, on the hypanel overview, it's always shown as using aroun 500MB ram, just for running apache, mysql, and the lowest-memory-footprint versions of the mail server, ftp server etc. -bash-3.2# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2156 664 ? Ss 12:08 0:00 init [3] root 1123 0.0 0.0 2260 548 ? S<s 12:08 0:00 /sbin/udevd -d root 1462 0.0 0.0 1812 568 ? Ss 12:08 0:00 syslogd -m 0 named 1496 0.0 0.0 3808 820 ? Ss 12:08 0:00 nsd named 1497 0.0 0.0 10672 756 ? S 12:08 0:00 nsd named 1499 0.0 0.0 3880 584 ? S 12:08 0:00 nsd root 1514 0.0 0.1 7240 1064 ? Ss 12:08 0:00 /usr/sbin/sshd root 1522 0.0 0.0 2832 832 ? Ss 12:08 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 1534 0.0 0.1 3712 1328 ? S 12:08 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql - mysql 1667 0.0 2.9 225680 30884 ? Sl 12:08 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql - mailnull 1766 0.0 0.1 9352 1100 ? Ss 12:08 0:00 /usr/sbin/exim -bd -q60m root 1797 0.0 0.0 2156 708 ? Ss 12:08 0:00 /usr/sbin/dovecot root 1798 0.0 0.0 2632 1012 ? S 12:08 0:00 dovecot-auth root 1816 0.0 3.0 38580 32456 ? Ss 12:08 0:01 /usr/local/bin/spamd -d --allowed-ips=127.0.0.1 --pidfi root 1839 0.0 1.6 63200 17496 ? Ss 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 1846 0.0 0.1 5416 1468 ? Ss 12:08 0:00 pure-ftpd (SERVER) root 1848 0.0 0.1 6212 1244 ? S 12:08 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin root 1856 0.0 0.1 4492 1112 ? Ss 12:08 0:00 crond root 1864 0.0 0.0 2356 428 ? Ss 12:08 0:00 /usr/sbin/atd dovecot 1927 0.0 0.1 5196 1952 ? S 12:08 0:00 pop3-login dovecot 1928 0.0 0.1 5196 1948 ? S 12:08 0:00 pop3-login dovecot 1929 0.0 0.1 5316 2012 ? S 12:08 0:00 imap-login dovecot 1930 0.0 0.2 5416 2228 ? S 12:08 0:00 imap-login root 1939 0.0 0.1 3936 1964 ? S 12:08 0:00 cPhulkd - processor root 1963 0.0 0.8 15876 8564 ? S 12:08 0:00 cpsrvd (SSL) - waiting for connections root 1966 0.0 0.7 15172 7748 ? S 12:08 0:00 cpdavd - accepting connections on 2077 and 2078 root 1990 0.0 0.2 5008 3136 ? S 12:08 0:00 queueprocd - wait to process a task root 2017 0.0 2.9 38580 31020 ? S 12:08 0:00 spamd child root 2018 0.0 0.5 8904 5636 ? S 12:08 0:00 /usr/bin/perl /usr/local/cpanel/bin/leechprotect nobody 2021 0.0 3.2 66512 33724 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 2022 0.0 3.1 67812 33024 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 2024 0.0 1.9 64364 20680 ? S 12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 2027 0.0 0.4 9000 4540 ? S 12:08 0:00 tailwatchd root 2032 0.0 0.1 4176 1836 ? SN 12:08 0:00 cpanellogd - sleeping for logs nobody 3096 0.0 1.9 64572 20264 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3097 0.0 2.8 66008 30136 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3098 0.0 2.8 65704 29752 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3099 0.0 3.1 67260 32816 ? S 12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL andrei 3448 0.0 0.1 3204 1632 ? 
S 12:50 0:00 imap nobody 3537 0.0 1.9 64308 20108 ? S 13:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3614 0.0 1.9 64576 20628 ? S 13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 3615 0.0 1.3 63200 14672 ? S 13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL root 3626 0.0 0.2 10232 2964 ? Rs 13:14 0:00 sshd: root@pts/0 root 3648 0.0 0.1 3844 1600 pts/0 Ss 13:14 0:00 -bash root 3826 0.0 0.0 2532 908 pts/0 R+ 13:21 0:00 ps aux Lately, without any significant changes to the configuration, the memory usage started peaking and going over 512, causing the virtual server to kill apache, basically murdering our site in the process. Do you have any idea if this is normal and more resources should be acquired? I don't think... since there isn't too much data or traffic online yet.
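
    If this VPS is an OpenVZ/Virtuozzo container, which the "512 guaranteed RAM" wording suggests (hedged, since the virtualization type is not stated), the control panel usually reports allocated memory from the beancounters rather than the resident usage ps shows, and it is an allocation limit rather than real RAM pressure that gets Apache killed. The counters are visible from inside the container:

        # Show the relevant limits and how often they have been hit:
        cat /proc/user_beancounters | egrep 'uid|privvmpages|oomguarpages'

    A non-zero, growing failcnt on privvmpages (or oomguarpages) means processes are being denied memory around the 512 MB barrier even though the host has plenty free; lowering Apache's MaxClients and per-child memory footprint, or raising the container limits, is then the practical remedy.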

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84  | Next Page >