Search Results

Search found 41882 results on 1676 pages for 'png files'.

Page 375 of 1676

  • Webmin / Virtualmin running php as www-data, is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the virtualmin forums, but haven't had any help from there. Recently, "something" happened and it seems that the apache service has gone a bit weird. What it does: it runs all apache traffic as www-data and sometimes spawns the php5-cgi process as www-data, this is a problem because all the domain users own their directories and default permissions don't let www-data write to these folders (file uploads are dead) or read .htaccess (permalinks are broken in wordpress). I've googled this for about a week straight now, tried pretty much everything I could find and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter) So I figured if I set it to "default", this might magically start working again, but all it does is "crash" apache (all websites timeout). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580 Kernel and CPU Linux 2.6.35.4-rscloud on x86_64 Virtualmin version 3.90.gpl GPL Ubuntu 10.04 LTS (Lucid) A couple screenshots of top http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: Berkeley to NYC server: 215 ms latency, 46ms transfer time, 261ms total Berkeley to Corvallis server: 114ms latency, 41ms transfer time, 155ms total some URLs if you want to try yourself: http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY) http://serverfault.com/cache/logo.png (Corvallis, OR) It makes sense that Corvallis, OR is geographically closer to Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly http://serverfault.com/questions/61719/how-does-geography-affect-network-latency http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
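
    A quick way to separate connection latency from transfer time is to time the TCP handshake on its own, averaged over a few attempts; the handshake is roughly one network round trip, which is the component the browser reports as "latency". A minimal Python sketch (illustration only; the two hostnames below are just stand-ins for the NYC and Corvallis servers):

        import socket
        import time

        # Stand-ins for the two datacenters being compared.
        HOSTS = ["careers.stackoverflow.com", "serverfault.com"]

        def connect_latency_ms(host, port=80, samples=5):
            """Average TCP connect time in milliseconds over several attempts."""
            total = 0.0
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=5):
                    pass
                total += time.perf_counter() - start
            return total / samples * 1000

        for host in HOSTS:
            print("%-30s %6.1f ms" % (host, connect_latency_ms(host)))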

    Read the article

  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on a CentOS 5.8 (final) server. My only concern at the moment is what appears to be intermittent but recurring high disk I/O activity causing a general slowdown, because of the jbd2/sda2-8 process. jbd2/sda2-8 is making use of /dev/sda2, which is the 2nd partition of the first hard drive (i.e. the root partition). More info: using "iotop" the culprit appears to be "jbd2/sda1-8" making writes every second, which appears to be a kernel process associated with journaling on the ext4 filesystem, if my googling around is correct. I see "jbd2/sda2-8" appearing here every now and then, but certainly not every 3 seconds; when idle, it appears about 1 or 2 times per minute. When I'm using the system, it appears more frequently. ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png HTOP results: http://grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png jbd2/sda2-8 is the process I see with iotop making writes to disk even though the machine is not in use at all. Does anyone have an idea of how I could solve the high disk usage caused by the jbd2/sda2-8 process?
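
    jbd2 only journals writes that other processes issue, so the useful question is which processes are generating them. The per-process I/O counters under /proc (the same data iotop reads) can be sampled directly; a rough Python sketch, assuming per-process I/O accounting is enabled in the kernel and the script runs as root:

        import os
        import time

        def write_snapshot():
            """Cumulative write_bytes per pid, read from /proc/<pid>/io."""
            snap = {}
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/io" % pid) as f:
                        fields = dict(line.split(":", 1) for line in f)
                    snap[pid] = int(fields["write_bytes"])
                except (EnvironmentError, KeyError, ValueError):
                    pass  # process exited, or no permission to read its counters
            return snap

        before = write_snapshot()
        time.sleep(5)
        after = write_snapshot()

        # Show the processes that wrote the most during the sample window.
        deltas = sorted(((after.get(pid, 0) - b, pid) for pid, b in before.items()), reverse=True)
        for delta, pid in deltas[:10]:
            if delta <= 0:
                continue
            try:
                with open("/proc/%s/stat" % pid) as f:
                    name = f.read().split("(", 1)[1].rsplit(")", 1)[0]
            except EnvironmentError:
                name = "?"
            print("%10d bytes written  pid %-6s %s" % (delta, pid, name))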

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short: I am working with Tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of Tar to decode PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple cat $input-file | base64 -d >$output-file will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I would like to be hooked into various moments of Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system. Background information not included in the Long Story Short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (haven't pushed that to Github yet). However, the images present in said Tar archive cannot be manipulated unless they are decoded from base64 encoding.
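
    If hooking tar itself proves awkward, a small wrapper script can do the unpack-and-decode in one step, so users run the wrapper instead of tar -xf. A minimal sketch using Python's tarfile and base64 modules (the archive and output directory names are placeholders, and it assumes every .png member in the archive is base64-encoded):

        import base64
        import os
        import tarfile

        ARCHIVE = "effect.tar"   # placeholder archive name
        DEST = "extracted"       # placeholder output directory

        with tarfile.open(ARCHIVE) as tar:
            for member in tar.getmembers():
                if not member.isfile():
                    continue
                data = tar.extractfile(member).read()
                if member.name.endswith(".png"):
                    data = base64.b64decode(data)  # undo the base64 encoding applied on save
                path = os.path.join(DEST, member.name)
                os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
                with open(path, "wb") as out:
                    out.write(data)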

    Read the article

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing and was hoping someone might be able to help. The problem I'm running into is that only some requests are authenticated, while others fail with 401 errors. I have followed the Special Case of Running IIS 7.0 in a Web Farm instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following: ARR Two servers set up with IIS shared configuration using IIS 7.5 on Windows 2008 R2 Anonymous authentication turned on for the Default Web Site Web Farm Two servers running IIS 7.5 on Windows 2008 R2 Three web sites set up using port binding to differentiate between virtual hosts. Ports being used are 8000, 8001, and 8002 Application pools for Windows Authentication all use a common domain account SPN added to domain account for http/<virthalhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number> The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name: 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15 2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218 Does anyone know what might cause this problem and how I can resolve it?

    Read the article

  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again and I just cannot seem to find the answer. It doesn't make sense, but search simply does not return results from folders that certainly have these files inside. It's weird that this technology exists for more than 5 years now (it could be added to Windows XP as an addon), and they still haven't got it right. My folder contains 10 image files with .png extensions. Two scenarios: Scenario 1: I exclude the folder using Indexing options. Search works. Scenario 2: I turn on indexing for this folder. Search does not work. Of course, Agent Ransack returns results every time. When I check Advanced options for the Indexing options inside control panel, .png files are checked in the File Types tab, using the "File Properties filter". What's the deal with this? [Edit] To clarify, this doesn't happen with all folders, but does with more than one. For the "problematic" folders, even *.* doesn't return a single result. I found some advice to clear the archive and readonly attributes for all files (doesn't make sense, but hey), but it didn't work. Indexing status in Control panel is: Indexing complete. 100,000 items indexed. Folder is included in the list. File types list contains the .png extension (although it doesn't work with any filter, not even *.*).

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40 character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following url: http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS is returning a 404 error. This url format has been working for 6 months now without an issue, and other urls following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks! Update: Adding a slash to the end of the url allowed it to be handled properly... Would still like to get to the bottom of the issue though. Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any url that ended in say "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ) was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this: location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ { with this: location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { And now urls ending in css, js, png, etc, are properly routed through the front controller. Hopefully that helps someone else out.
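
    The failure mode is easy to reproduce outside nginx: with case-insensitive matching (the ~* modifier), the original pattern fires on any URI that merely ends with one of the listed letter sequences, which the invitation key above does (it ends in "jS"), while the corrected pattern requires a literal dot before the extension. A small Python check illustrating the two patterns:

        import re

        old = re.compile(r"^.+(jpg|jpeg|gif|css|png|js|ico)$", re.IGNORECASE)
        new = re.compile(r"\.(jpg|jpeg|gif|css|png|js|ico)$", re.IGNORECASE)

        uris = [
            # The trailing "...jS" makes the old pattern treat this as a .js request.
            "/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS",
            "/response/somerandomstringcss",
            "/cache/logo.png",
        ]
        for uri in uris:
            print("%-75s old: %-5s new: %s" % (uri, bool(old.search(uri)), bool(new.search(uri))))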

    Read the article

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a server using sub-directories, and I have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory, var/www/html/ /test_app (symbolic link to the public folder of test_app), and set apache up as so: LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12 PassengerRuby /usr/local/bin/ruby <VirtualHost *:80> ServerName test.com DocumentRoot /var/www/html Options Indexes FollowSymLinks -MultiViews RailsBaseURI /test_app </Location> </VirtualHost> The links in the app itself work just fine; all the links acknowledge the test_app/ directory and work. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png: the app goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand this is an Apache setting problem rather than something to be done in Rails; if it's possible I would prefer to have it contained in the conf file of apache rather than having to alter the code.

    Read the article

  • Nagios escalation debugging

    - by Oesor
    I'm having some issues with escalations happening properly and I'm not sure if it's because of my config or because the nagios binary is nonstandard and something may be broken. I've got little experience with nagios, and just want to make sure this is being set appropriately. Should the following config file definition allow the escalations to take over and increment the notification interval as expected? Is there somewhere else in the config files I should be looking at to figure out what's going on? I've enabled debug 32 in the config and it's simply spitting out 'Host notification will NOT be escalated.' for each notification. The configuration does pass the pre flight check with no issues, and reports that it's parsing the three host escalations in the config. # test host definition define host { host_name test alias test address 10.0.0.10 hostgroups test check_interval 0 retry_interval 1 max_check_attempts 2 flap_detection_enabled 0 icon_image windows.png icon_image_alt LOGO - Windows vrml_image windows.png statusmap_image windows.png action_url /info/host/275 check_period 24x7 contact_groups hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master check_command check_host_15!-H $HOSTADDRESS$ -t 3 -w 500.0,80% -c 1000.0,100% parents nagios notifications_enabled 1 notification_interval 3 notification_period 24x7 notification_options u,d,r use host-global } define hostescalation{ host_name test first_notification 3 last_notification 4 notification_interval 10 contact_groups hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master } define hostescalation{ host_name test first_notification 4 last_notification 5 notification_interval 30 contact_groups hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master } define hostescalation{ host_name test first_notification 5 last_notification 0 notification_interval 240 contact_groups hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master }

    Read the article

  • C#: Streaming an Audio file from a Server to a Client

    - by Andreas Grech
    I am currently writing an application that will allow a user to install some form of an application (maybe a Windows Service) that will open a port on their PC and, given a particular destination on the hard disk, will then be able to stream mp3 files. I will then have another application that will connect to the server (being the user's PC) and be able to browse the hosted data by connecting to that PC (remotely of course) given the port, and stream mp3 files from the server to the application. I have found some tutorials online but most of them are about file servers in C# and they only let you download a whole file. What I want is to stream an mp3 file so that it starts playing when a certain number of bytes are downloaded (i.e., while it is being buffered). How do I go about accomplishing such a task? What I need to know specifically is how to write this application (that I will turn into a Windows Service later on) that will listen on a specified port and stream files, so that I can then access the files by something of the sort: http://<serverip>:65000/acdc/wholelottarosie.mp3 and hopefully be able to stream that file in a WPF MediaPlayer. [Update] I was following this tutorial about building a file server and sending the file from the server to the client. Is what I have to do something of the sort? [Update] Currently reading this post: Play Audio from a Stream using C# and I think it looks very promising as to how I can play streamed files; but I still don't know how I can actually stream the files from the server.
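
    Conceptually the server side is just "open the file and write it to the socket in chunks with an audio Content-Type", so the client can begin playback while bytes are still arriving; the same shape applies in .NET (accept the request, send the headers, then loop reading the file and writing to the response stream). A rough sketch of the idea in Python rather than the C#/Windows Service asked about (the shared directory is a placeholder, and there is no security hardening here):

        import os
        from http.server import BaseHTTPRequestHandler, HTTPServer

        MUSIC_ROOT = "/srv/music"   # placeholder for the user-chosen directory
        CHUNK = 64 * 1024

        class StreamHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Map the request path onto the shared directory (no path-traversal checks in this sketch).
                path = os.path.join(MUSIC_ROOT, self.path.lstrip("/"))
                if not os.path.isfile(path):
                    self.send_error(404)
                    return
                self.send_response(200)
                self.send_header("Content-Type", "audio/mpeg")
                self.send_header("Content-Length", str(os.path.getsize(path)))
                self.end_headers()
                with open(path, "rb") as f:
                    while True:
                        chunk = f.read(CHUNK)
                        if not chunk:
                            break
                        self.wfile.write(chunk)   # the client can start playback as chunks arrive

        HTTPServer(("", 65000), StreamHandler).serve_forever()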

    Read the article

  • Multithreading recommendation based on program description

    - by user260197
    I would like to describe some specifics of my program and get feedback on what the best multithreading model to use would be. I've spent a lot of time now reading on ThreadPool, Threads, Producer/Consumer, etc. and have yet to come to solid conclusions. I have a list of files (all the same format) but with different contents. I have to perform work on each file. The work consists of reading the file, some processing that takes about 1-2 minutes of straight number crunching, and then writing large output files at the end. I would like the UI to still be responsive after I initiate the work on the specified files. Some questions: What model/mechanisms should I use? Producer/Consumer, WorkPool, etc.? Should I use a BackgroundWorker in the UI for responsiveness, or can I launch the threading from within the Form as long as I leave the UI thread alone to continue responding to user input? How could I take the results or status of each individual piece of work on each file and report it to the UI in a thread-safe way to give user feedback as the work progresses (there can be close to 1000 files to process)? Update: Great feedback so far, very helpful. I'm adding some more details that were asked for below: Output is to multiple independent files. One set of output files per "work item" that then itself gets read and processed by another process before the "work item" is complete. The work items/threads do not share any resources. The work items are processed in part using an unmanaged static library that makes use of boost libraries.
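
    Whichever .NET mechanism is chosen, the usual shape is a fixed-size worker pool fed from the file list, plus a completion callback that marshals progress back to the UI thread. A language-agnostic sketch of that shape in Python rather than C# (the file names and crunch step are stand-ins):

        from concurrent.futures import ProcessPoolExecutor, as_completed

        def crunch(path):
            """Placeholder for the 1-2 minutes of number crunching per input file."""
            # ... read path, compute, write the output files ...
            return path, "ok"

        def run(files, on_progress, workers=4):
            # A fixed-size pool keeps the CPU busy without spawning ~1000 threads.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                futures = {pool.submit(crunch, f): f for f in files}
                for done, future in enumerate(as_completed(futures), 1):
                    path, status = future.result()
                    # In a GUI this callback is where you would post back to the UI thread
                    # (the role played by Control.Invoke / BackgroundWorker.ReportProgress in WinForms).
                    on_progress(done, len(futures), path, status)

        if __name__ == "__main__":
            run(["a.dat", "b.dat"], lambda done, total, path, status:
                print("%d/%d %s: %s" % (done, total, path, status)))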

    Read the article

  • Git: Remove specific commit

    - by Joshua Cheek
    I was working with a friend on a project, and he edited a bunch of files that shouldn't have been edited. Somehow I merged his work into mine, either when I pulled it, or when I tried to just pick the specific files out that I wanted. I've been looking and playing for a long time, trying to figure out how to remove the commits that contain the edits to those files, it seems to be a toss up between revert and rebase, and there are no straightforward examples, and the docs assume I know more than I do. So here is a simplified version of the question: Given the following scenario, how do I remove commit 2? $ mkdir git_revert_test && cd git_revert_test $ git init Initialized empty Git repository in /Users/josh/deleteme/git_revert_test/.git/ $ echo "line 1" > myfile $ git add -A $ git commit -m "commit 1" [master (root-commit) 8230fa3] commit 1 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 myfile $ echo "line 2" >> myfile $ git commit -am "commit 2" [master 342f9bb] commit 2 1 files changed, 1 insertions(+), 0 deletions(-) $ echo "line 3" >> myfile $ git commit -am "commit 3" [master 1bcb872] commit 3 1 files changed, 1 insertions(+), 0 deletions(-) The expected result is $ cat myfile line 1 line 3 Here is an example of how I have been trying to revert $ git revert 342f9bb Automatic revert failed. After resolving the conflicts, mark the corrected paths with 'git add <paths>' or 'git rm <paths>' and commit the result.

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once the issue is complete it is built and the code only committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository from where they are checked out to the UAT server for testing. (this often requires the Tomcat service to be restarted and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved the edited files are checked out to the Live server and the same process as with UAT undertaken. If rejected the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out of sync code in other cases changes in UAT are moved to live in error as they are mixed up in files associated with a signed off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live managed by a CI system (We have TeamCity in house for our .NET applications). My question is on how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may be on overlapping files and simply merging each release in to a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI or will I simply have to live with the current system.

    Read the article

  • input type file alternative and file upload best practice

    - by Ioxp
    Background: I am working on a file upload page that will extend an existing web portal. This page will allow an end user to upload files from their local computer to our network (the files will not be stored on the web server, but rather on a remote workstation). The end user will have the ability to view the data that they have submitted by hyper-linking the files that have been uploaded on this page. Question 1: Is there an ASP.net alternative to the <input type="file" runat="server" /> HTML tag? The reason for asking is I would rather use an image button and display the file as an asp label on the portal to keep with a consistent style. Question 2: I understand that giving the end user the ability to upload files to the server and then turning around to show them the data that they posted poses a security threat. So far I am using the id.PostedFile.ContentType and the file extension to reject the data if it's not an accepted format (i.e. "text/plain", "application/pdf", "application/vnd.ms-excel", or "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"). Also, the location where the files are uploaded to has a sufficient amount of virus and malware protection, so that is not a concern. What additional steps, from the C# point of view, should I take to ensure that the end user can't take advantage of and compromise the system in regards to allowing them to upload files?
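
    One additional server-side check is to sniff the file's leading bytes ("magic numbers") rather than trusting the extension or the client-supplied ContentType, since both are attacker-controlled. A sketch of the idea in Python rather than C# (the signature list is trimmed to the formats mentioned above, and the input file name is a placeholder for the posted stream):

        # Leading byte signatures for the formats the page accepts.
        SIGNATURES = {
            b"%PDF": "application/pdf",
            b"PK\x03\x04": "zip container (xlsx and other Office Open XML files)",
            b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "OLE compound file (legacy .xls/.doc)",
        }

        def sniff(first_bytes):
            """Return a best-guess label for an uploaded file, or None if unrecognised."""
            for magic, label in SIGNATURES.items():
                if first_bytes.startswith(magic):
                    return label
            if all(32 <= b < 127 or b in (9, 10, 13) for b in first_bytes):
                return "text/plain"
            return None

        with open("upload.bin", "rb") as f:   # placeholder for the posted file stream
            label = sniff(f.read(8))
        print(label or "rejected: unknown file type")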

    Read the article

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    Building a solution containing a web application project using MSBuild from PowerShell like this: msbuild "/p:OutDir=$build_dir\" $solution_file works fine for me on 32-bit, but on a 64-bit machine I run into this error: error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets needed by MSBuild is in the Program Files (x86) dir, not the Program Files dir, but MSBuild doesn't recognise this and so looks in the wrong place. The two solutions are not ideal: Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution - every dev will have to do this manually. Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have to get everyone on 64-bit to manually edit csproj files on every new project. e.g. <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" /> Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that. Any solutions?

    Read the article

  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use gmail to send a e-mail after each push and/or commit. .hgrc [paths] default = ssh://www.domain.com/repo/hg [ui] username = intern <[email protected]> ssh="C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [extensions] hgext.notify = [hooks] changegroup.notify = python:hgext.notify.hook incoming.notify = python:hgext.notify.hook [email] from = [email protected] [smtp] host = smtp.gmail.com username = [email protected] password = sure port = 587 tls = true [web] baseurl = http://dev/... [notify] sources = serve push pull bundle test = False config = /path/to/subscription/file template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n maxdiff = 300 Error Incoming comand failed for P/project. running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio"" sending hello command sending between command remote: FATAL ERROR: Server unexpectedly closed network connection abort: no suitable response from remote hg! , error code: -1 running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio"" sending hello command sending between command remote: FATAL ERROR: Server unexpectedly closed network connection abort: no suitable response from remote hg!

    Read the article

  • Displaying ppt, doc, and xls in UIWebView doesn't work but pdf does

    - by slugolicious
    It looks like a few people on stackoverflow get this to work but their code isn't posted. I'm using [web loadData:data MIMEType:MIMEType textEncodingName:@"UTF-8" baseURL:nil]; where MIMEType is: @"application/vnd.ms-powerpoint" @"application/vnd.ms-word" @"application/vnd.ms-excel" (BTW, I've seen DOC files use mimetype @"application/msword" but the "vnd" version seems more appropriate. I tried both just in case.) I verified that my 'data' is correct. PDF and TXT files work. When the UIWebView displays PPT, DOC, or XLS files, it's blank. I put NSLOG statements in my UIWebViewDelegate calls. shouldStartLoadWithRequest:<NSMutableURLRequest about:blank> navType:5 webViewDidStartLoad: didFailLoadWithError:Error Domain=NSURLErrorDomain Code=100 UserInfo=0x122503a0 "Operation could not be completed. (NSURLErrorDomain error 100.)" didFailLoadWithError:Error Domain=WebKitErrorDomain Code=102 UserInfo=0x12253840 "Frame load interrupted" so obviously the load is failing, but why? If I change my mimetype to @"text/plain" for a PPT file, the UIWebView loads fine and displays unprintable characters, as expected. That's telling me the 'data' passed to loadData: is ok. Meaning my mimetypes are bad? And just to make sure my PPT, DOC, and XLS files are indeed ok to display, I created a simple html file with anchor tags to the files. When the html file is displayed in Safari on the iPhone, clicking on the files displays correctly in Safari. I tried to research the error code displayed in didFailLoadWithError (100) but all the documented error codes are negative and greater than 1000 (as seen in NSURLError.h). -(void)webView:(UIWebView *)webView didFailLoadWithError:(NSError *)error { NSLog(@"didFailLoadWithError:%@", error); }

    Read the article

  • Is a VCS appropriate for usage by a designer?

    - by iconiK
    I know that a VCS is absolutely critical for a developer to increase productivity and protect the code, no doubts about it. But what about a designer, using, say, Photoshop (though it's not specific to any tools, just to make my point clearer)? VCSs use delta compression to store different versions of files. This works very well for code, but for images, that's a problem. Raster image files are binary formats, though vector image files are text (SVG comes to my mind) and pose no problem. The problem comes with .psd files (and any other image "source" file) - those can get pretty big and since I'm not familiar with the format, I'll consider them as binary files. How would a VCS work in this condition? The repository could get pretty darned big if the VCS server isn't able to diff the files efficiently (or worse, not at all), and over time this can become a really big pain when someone needs to check out the repository (or clone it if using a DVCS). Have any of you used a VCS for this purpose? How well does it work? I'm mostly interested in Mercurial, though this is a general situation that applies to any VCS.

    Read the article

  • uncompressing .zip file in linux [closed]

    - by Suren
    Hi, I have a .zip file (it contains multiple files, e.g. file1.txt, file2.txt, file3.txt and so on) in a directory. My query is: how do I extract the files from the .zip archive into the very same directory, and how do I create a list of all the files extracted from the .zip archive? The extracted file names should be printed like this in a file named file_list: file1.txt file2.txt file3.txt filen.txt I have tried the following command, assuming that my .zip file name is "data.zip": unzip -qoj data.zip | unzip -ql data.zip > file_list I have used unzip -qoj data.zip to extract all the files into the same directory (quietly, overwrite, junk paths). When I try to add -l to the first unzip command, it doesn't extract the files into the current directory and only lists them; that's why I used unzip again after the first pipe (if I am making a mistake here, please let me know). I get the following output: Length Date Time Name -------- ---- ---- ---- 0 12-21-09 14:25 data/ 6148 12-21-09 14:25 data/.DS_Store 0 12-21-09 14:25 __MACOSX/ 0 12-21-09 14:25 __MACOSX/data/ 82 12-21-09 14:25 __MACOSX/data/._.DS_Store 82 12-11-09 13:59 data/file1.txt 120 12-11-09 13:59 data/file2.txt 166 12-11-09 13:59 data/file3.txt -------- ------- 6598 8 files How do I extract only file1.txt file2.txt file3.txt from this stdout? Is it possible to do this with a Linux command or do I have to write a Perl script for this? Thank you.
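
    For the listing alone, zipinfo -1 data.zip (or unzip -Z1 data.zip) prints bare file names with no table decoration. Doing the extraction and the listing in one pass is also easy to script; a short sketch using Python's standard zipfile module, with the archive name taken from the question and the directory/hidden-file filtering based on the listing shown above:

        import os
        import zipfile

        with zipfile.ZipFile("data.zip") as zf:
            names = []
            for info in zf.infolist():
                if info.filename.endswith("/") or "__MACOSX/" in info.filename:
                    continue                      # skip directories and Mac resource forks
                # "Junk" the path, like unzip -j, so everything lands in the current directory.
                flat = os.path.basename(info.filename)
                if flat.startswith("."):
                    continue                      # skip hidden files such as .DS_Store
                with zf.open(info) as src, open(flat, "wb") as dst:
                    dst.write(src.read())
                names.append(flat)

        with open("file_list", "w") as listing:
            listing.write("\n".join(sorted(names)) + "\n")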

    Read the article

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell svn, via the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but executing the svn commit command restores the files which I deleted from my working copy. The way I am doing it is: 1. delete some files in my working copy (the no. of files in my WC is now less than the no. of files in the repository); 2. execute the svn command using CruiseControl: <exec executable="svn.exe"> <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs> <buildTimeoutSeconds>1000</buildTimeoutSeconds> </exec> Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes/configuration? Thank you all. Regards, uday
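
    Deleting a file from the working copy is not enough on its own: it also has to be scheduled for removal with svn delete before the commit, otherwise the repository keeps it and the next update puts it back in the working copy. A rough sketch of a pre-commit step (Python here, though the same two svn commands could just as well be extra exec steps in the CruiseControl config) that schedules every locally-missing file for deletion and then commits; the status-line parsing is simplified and would need care for paths containing spaces:

        import subprocess

        def svn(*args):
            return subprocess.run(["svn"] + list(args) + ["--non-interactive"],
                                  capture_output=True, text=True, check=True)

        # Files removed from the working copy without "svn delete" show up with "!" status.
        status = svn("status").stdout
        missing = [line.split(None, 1)[1] for line in status.splitlines() if line.startswith("!")]

        if missing:
            svn("delete", *missing)   # schedule them so the commit removes them from the repository

        svn("commit", "-m", "test msg", "--no-auth-cache")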

    Read the article

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, this may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application; we have about 20-30 Xml files that implement datasets based off those XSDs. Some Xml files are small (<100Kb), others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out what namespace these Xml files use in order to provide (something like) IntelliSense based off the XSD. The implementation of this is not an issue - another developer has written the code for it. But I'm not sure what the best (and fastest!) way of detecting the namespace is without the use of XmlDocument (which does a full parse). I'm using C# 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect it if it were extension-based) but unfortunately the Xml namespace is the only way. Right now I've tried XmlDocument but I've found it to be inefficient and slow, as the larger documents take a long time to parse (even the 100Kb docs). public string GetNamespaceForDocument(Stream document); Something like the above is my method signature - overloads include string for "content". Would a compiled RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast Xml parser in C/C++, parse the content and have a stub that gives back the namespace, as it's slower in .NET - is this a good idea?
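
    The key is to read only as far as the root element's start tag and then stop, instead of letting XmlDocument build the whole tree; in .NET the same idea maps to a forward-only XmlReader. A minimal sketch of the approach in Python (the file name is a placeholder), which reads just the first start event regardless of how large the document is:

        import xml.etree.ElementTree as ET

        def root_namespace(stream):
            """Return the namespace URI of the root element, reading only its start tag."""
            for _, elem in ET.iterparse(stream, events=("start",)):
                tag = elem.tag                    # looks like "{http://example.com/ns}dataset"
                if tag.startswith("{"):
                    return tag[1:].split("}", 1)[0]
                return ""                         # root element carries no namespace
            return ""

        with open("dataset.xml", "rb") as f:      # placeholder file name
            print(root_namespace(f))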

    Read the article

  • Visual C++ Testing problem

    - by JamesMCCullum
    Hi there. I have installed VisualAssert and cFix. I have been using Visual Studio C++ and programming in CLI/C++. I have a working chess game program that works perfectly by itself, and I have been studying testing and have many examples (with tutorials) I have found on the net that compile and run in Visual Studio. But as soon as I try to implement those tests on my chess game, I get this problem. This is what it's telling me: 1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------ 1>Compiling... 1>stdafx.cpp 1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe 1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C4394: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol should not be marked with __declspec(allocate) 1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C2393: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol cannot be allocated in segment '.CRT$XCX' 1>C:\Program Files\VisualAssert\include\cfixpe.h(244) : error C2440: 'initializing' : cannot convert from 'void (__cdecl *)(void)' to 'const CFIX_CRT_INIT_ROUTINE' 1> Address of a function yields __clrcall calling convention in /clr:pure and /clr:safe; consider using __clrcall in target type 1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe 1>Build log was saved at "file://c:\Users\james\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm" 1>ChessRound1 - 4 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Any ideas what I'm doing wrong? I'm working with Windows Forms and have a heap of cpp source files. Any help would be appreciated. Thanks

    Read the article

  • semi dynamic cdn

    - by dwi kristianto
    I'm developing a couple of websites using PHP (directory script, etc.) and WordPress as a CMS. I need to improve their performance by using a CDN for static files (css, js, images). The problem is, the css and javascript files are generated on the fly. I did that due to Yahoo and some expert advice to combine the files into one file, and also to change the basic color of the css files. For the time being I use a couple of small VPSes, but it's still not fast enough. I already contacted MaxCDN and the support guy said that they don't have such a service. What I need is: a CDN that will serve the request from the user/visitor and, if there's no file on the local disk, will redirect/fetch it from another domain/server. On a VPS this can be done easily using a combination of .htaccess and PHP, but NOT on a CDN. Most CDNs only support purely static files. Is there any such CDN that will serve semi-dynamic files?

    Read the article

  • Svn import with auto-props & pre-commit hook

    - by James Tisato
    My company's svn repo has a lot of MS Word docs in it. We've implemented a policy that all .doc files must have the svn:needs-lock property set to prevent parallel access on files that are hard to merge (we've also done this for xls, ppt, pdf etc.). We've implemented the policy by distributing a svn config with auto-props set appropriately for all relevant document types. We've also set up a pre-commit hook that checks that all added files of these types have the needs-lock property set (i.e. if they forget/are too lazy to update their svn config file, they won't be able to add any docs to the repo). The problem I'm having, however, is that the pre-commit hook fails when users try to import files into the repo, e.g. some users like to add files directly thru TortoiseSVN's Repo Browser, which effectively is an svn import. Through testing on other file types, I have seen that doing an import does in fact apply the auto-props listed in my config, but they don't seem to be applied at the point that the pre-commit hook runs. When importing .doc files, the hook fails, saying that the needs-lock property is missing. Is there really much difference between adding a single file to a working copy and committing it vs importing a file directly? Do we need to tailor our precommit hook in some way to cater for this scenario?
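
    For comparison, a pre-commit hook of the usual shape, sketched in Python with svnlook, that inspects the transaction itself and so treats working-copy commits and direct imports the same way; the extensions list and the parsing of svnlook's output columns are simplifications, not the company's actual hook:

        import subprocess
        import sys

        REPO, TXN = sys.argv[1], sys.argv[2]                  # arguments Subversion passes to pre-commit
        LOCK_EXTENSIONS = (".doc", ".xls", ".ppt", ".pdf")    # assumption: the policy's extensions

        def svnlook(subcmd, *args):
            out = subprocess.run(["svnlook", subcmd, "-t", TXN, REPO] + list(args),
                                 capture_output=True, text=True, check=True)
            return out.stdout

        errors = []
        for line in svnlook("changed").splitlines():
            action, path = line[0], line[4:]                  # simplified parsing of "A   path/to/file"
            if action == "A" and path.lower().endswith(LOCK_EXTENSIONS):
                if "svn:needs-lock" not in svnlook("proplist", path):
                    errors.append(path)

        if errors:
            sys.stderr.write("svn:needs-lock missing on: %s\n" % ", ".join(errors))
            sys.exit(1)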

    Read the article
