Search Results

Search found 2938 results on 118 pages for 'async workflow'.


  • Managing Team Development on Shared Website

    - by stjowa
    I need to know the best way to manage team web development on a shared server (HostGator). I have done individual web development on a shared server in the past, and I have always set up SVN through SSH to get a fairly nice development workflow (version control, quick commits, working through Eclipse/Subclipse, etc.). However, that setup required some fairly sophisticated post-commit hooks to export the repository to /public_html and so make the repository code testable. This seems like a tedious and error-prone setup for an entire team. I would like to be able to:

    - easily test the latest code in the repository;
    - somewhat easily move the code in the repository to production;
    - use an IDE like Eclipse/Subclipse to work with the repository comfortably.

    With this in mind, does anyone know of a good version-control/repository setup for developing a website with a team of about 4-5 people? Thanks a lot.
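
    For reference, a minimal sketch of the kind of post-commit hook described above — the paths are hypothetical, and the hook assumes a pre-existing test checkout that it refreshes on every commit:

        #!/bin/sh
        # hooks/post-commit -- Subversion passes the repository path and revision.
        REPOS="$1"
        REV="$2"
        # Refresh a test working copy so the latest commit is immediately browsable.
        /usr/bin/svn update --quiet /home/user/public_html/dev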

    Read the article

  • Git: Help an SVN novice translate trunk/branch concepts to Git

    - by Jasconius
    So I am not much of a source-control expert; I've used SVN for projects in the past. I have to use Git for a particular project (the client supplied the Git repo). My workflow is such that I work on the files from two different computers, and I often need to check in unstable changes when I move from place to place so I can continue my work. What then happens is that when, say, the client goes to get the latest version, they also download the unstable code. In SVN, you can address this by creating a trunk and using working branches, or by using the trunk as the working version and creating stable branches. What is the equivalent concept in Git, and is there a simple way to do this via GitHub?
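
    The direct Git equivalent is a long-lived working branch; a minimal sketch, with hypothetical branch names:

        # Keep master stable for the client; do day-to-day work on a branch.
        git checkout -b wip              # create and switch to an unstable branch
        git commit -am "WIP checkpoint"
        git push origin wip              # sync the branch between your two machines
        # Once the work is stable, fold it back into master:
        git checkout master
        git merge wip
        git push origin master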

    Read the article

  • Using emacs across many hosts

    - by mbac32768
    On a daily basis I:

    - use multiple workstations running Linux, Windows, or Mac OS X;
    - edit files on additional Linux hosts that are not any of the workstations mentioned above.

    The only common element here is that the internet connects all of these hosts, workstations and servers alike. I can keep all of the config files in sync on my workstations, and I can run an X server on all of them. What's the right way of running emacs? I don't want to sacrifice any features. In my ideal world I can type 'emacs foo.txt' on a remote host and some magic happens via X forwarding to display the file in my workstation's existing emacs session. Non-solutions:

    - tramp: when I'm manipulating a remote host, an editor is just part of my workflow. I need a terminal open so I can run other commands quickly; tramp is all wrong for this.
    - ncurses emacs: sucks, I want the graphical kind.

    If you don't have a positive answer to my question, please don't just guess. Thanks.
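
    For comparison, the closest stock approximation of the "magic" described is plain X forwarding — note that this renders a remote Emacs on the local display rather than reusing the workstation's existing session:

        ssh -X remote-host      # connect with X11 forwarding enabled
        emacs foo.txt &         # the window appears on the workstation's display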

    Read the article

  • Controlling JIRA via Liferay

    - by Shayan
    I am trying to integrate JIRA into Liferay as a portlet. The best way to do that seems to be the IFrame portlet. However, there are a few additional things I need to control. I am trying to model a workflow in JIRA, so depending on what happens within the Liferay portal, Liferay will need to be able to advance the status of tasks in JIRA. In addition, when a task is completed in JIRA, it needs to be able to call back to Liferay and create new users or groups if necessary. Is there some way to integrate JIRA and Liferay so that they can call each other's APIs? Thanks.

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). Everything works great, but I can't open one page; I get the error "The request or reply is too large." My cache.log contains:

        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0

    In my squid.conf I disabled the limits on the request and reply bodies, without success:

        reply_body_max_size 0 allow all
        request_body_max_size 0

    Does anyone know why that doesn't work? Thank you very much. Squid version:

        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
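
    Worth noting: the log complains about the reply *header*, while the directives above only lift the *body* limits. A hedged squid.conf sketch under that reading — directive names as documented for Squid 2.7, so verify against your build:

        request_header_max_size 64 KB
        reply_header_max_size 64 KB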

    Read the article

  • How to skip "Loose Object" popup when running 'git gui'

    - by Michael Donohue
    When I run 'git gui' I get a popup that says "This repository currently has approximately 1500 loose objects" and suggests compressing the database. I've done this before, and it reduces the loose objects to about 250, but that doesn't suppress the popup; compressing again doesn't change the number of loose objects. Our current workflow requires significant use of 'rebase' as we are transitioning from Perforce, which is still the canonical SCM. Once Git is the canonical SCM, we will do regular merges, and the loose-objects problem should be greatly mitigated. In the meantime, I'd really like to make this 'helpful' popup go away.
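
    A hedged sketch of the two usual remedies — git-gui exposes a config knob for exactly this prompt (gui.gcwarning, per the git-config documentation), and a more aggressive repack shrinks the loose-object count itself:

        git config --global gui.gcwarning false    # silence the popup
        git gc --aggressive --prune=now            # or actually compact the repository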

    Read the article

  • How do you use pip, virtualenv and Fabric to handle deployment?

    - by e-satis
    What are your settings, your tricks, and above all, your workflow? These tools are great, but there are still no best practices attached to their usage, so I don't know the most efficient way to use them.

    - Do you use pip bundles or always download?
    - Do you set up Apache/Cherokee/MySQL by hand, or do you have a script for that?
    - Do you put everything in a virtualenv and use --no-site-packages?
    - Do you use one virtualenv for several projects?
    - What do you use Fabric for (which parts of your deployment do you script)?
    - Do you put your Fabric scripts on the client or the server?
    - How do you handle database and media-file migration?
    - Do you even need a build tool such as SCons?
    - What are the steps of your deployment? How often do you perform each of them?

    etc.
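
    For concreteness, a minimal sketch of the virtualenv-per-project pattern asked about above (paths hypothetical):

        virtualenv --no-site-packages /srv/envs/myproject   # isolated environment per project
        . /srv/envs/myproject/bin/activate
        pip install -r requirements.txt   # pinned versions make deploys repeatable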

    Read the article

  • How would you start automating my job? - Part 2

    - by Jurily
    (Followup to this question.) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow:

    1. Monkey collects email attachments (4 Excel spreadsheets, 1 PDF).
    2. Monkey creates a central database and does complex calculations (right now this is also an Excel spreadsheet).
    3. Monkey sends data to two bosses, who set the retail prices independently; the first one to reply wins.
    4. Monkey sends the order form to our other warehouses, also Excel.
    5. Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories).
    6. Jurily enters the data into the accounting system. I've given up on automating this part: there's too much business logic involved, and the database is a pile of sh^W legacy.

    My question: what technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: how would you redesign the whole system from the ground up? P.S. In case you were wondering, my job title is System Administrator.

    Read the article

  • git hooks - regenerate a file and add it to each commit?

    - by egarcia
    I'd like to automatically generate a file and add it to a commit if it has changed. Is this possible, and if so, what hooks should I use? Context: I'm programming a CSS library. It has several CSS files, and at the end I want to produce a compacted, minimized version. Right now my workflow is:

    1. Modify the CSS files x.css and y.css.
    2. git add x.css y.css
    3. Execute minimize.sh, which parses all the CSS files in my lib, minimizes them, and produces a min.css file.
    4. git add min.css
    5. git commit -m 'modified x and y doing foo and bar'

    I would like to have steps 3 and 4 done automatically via a git hook. Is that possible? I've never used git hooks before. After reading the man page, I think I need to use the pre-commit hook. But can I invoke git add min.css, or will I break the internet?
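
    You won't break the internet — files staged with git add while pre-commit runs become part of the commit being created, since the tree is written from the index after the hook finishes. A minimal sketch of the hook described above:

        #!/bin/sh
        # .git/hooks/pre-commit (must be executable)
        ./minimize.sh            # regenerate min.css from the source CSS files
        git add min.css          # stage it so it rides along in this commit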

    Read the article

  • Adding WCF service reference adds DataContract types too

    - by Avi Shilon
    Hi everybody, I've used Visual Studio's Add Service Reference feature to add a service (actually a workflow service created in WF4 RC1, but I don't think that makes any difference), and it also added the DataContract types that the service uses. At first this seemed fine, because all I had in the DataContracts were simple properties with no implementation. But now I've added code to the constructor of one DataContract that creates an instance of one of the properties exposing a list of other DCs, and when I updated the service reference via VS (2010 RC1), the implementation was not updated. What should I do? Should I use my own DCs instead of the ones created by VS, or stick with the generated ones? I've noticed that the VS-generated DCs contain some additional logic for checking equality in the setters, and they also implement some interfaces (like IExtensibleDataObject and INotifyPropertyChanged) which might come in handy in the future (I'm not very knowledgeable about WCF). Thank you for your time, folks. Avi

    Read the article

  • iPhone addition to already localized nibs

    - by user168610
    I've submitted an app to the store that has been localised into a number of languages. Now it's time to update a few things, so I've added a few new components to a xib or two and wired them up as IBOutlets in my code. Everything works great in English, but when I change to a different language it appears that my nib additions haven't been propagated to the localised nibs that already existed (I get an unrecognised selector on a UILabel I've added). What is the correct workflow for adding new items to an already-localised nib? Should a modification to the top-level .xib carry through to all localised versions, or should I unlocalize, add the new components, and then localise all over again? Many thanks, Bryn
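
    One era-appropriate route is ibtool's incremental localization, which merges additions from the updated base nib into each existing localized nib. Flag names follow Apple's ibtool documentation and the file paths are hypothetical — verify against your toolchain:

        ibtool --previous-file old/en.lproj/MyView.xib \
               --incremental-file old/fr.lproj/MyView.xib \
               --localize-incremental \
               --write new/fr.lproj/MyView.xib \
               new/en.lproj/MyView.xib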

    Read the article

  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using OpenLDAP's olc to modify a schema without shutting down the server. To test some things out, I made the following schema:

        objectIdentifier tests orgUlyssisOID:4
        objectIdentifier testAttribute tests:1
        objectIdentifier testObjectClass tests:2
        attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' )
        attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE )
        objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    I added it as a new schema called test (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and have tried lots of variations, to no avail):

        dn : cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. With -d 1 it gives this:

        ldap_create
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:389
        ldap_new_socket: 4
        ldap_prepare_socket: 4
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 4 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 38 bytes to sd 4
        ldap_result ld 0x7f2a8ccf3430 msgid 1
        wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1
        ** ld 0x7f2a8ccf3430 Connections:
        * host: localhost port: 389 (default) refcnt: 2 status: Connected
        last used: Mon Sep 10 11:29:57 2012
        ** ld 0x7f2a8ccf3430 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f2a8ccf3430 request count 1 (abandoned 0)
        ** ld 0x7f2a8ccf3430 Response Queue:
        Empty
        ld 0x7f2a8ccf3430 response count 0
        ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL
        ldap_int_select
        read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 12 contents:
        read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f2a8ccf3430 0 new referrals
        read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1
        request done: ld 0x7f2a8ccf3430 msgid 1
        res_errno: 0, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 4
        ldap_free_connection: actually freed

    So there is no real indication of an error. What am I doing wrong? Bonus question: if I have some entries of a certain objectClass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
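
    One detail worth flagging, offered as a hedged guess rather than a confirmed diagnosis: the LDIF above writes "dn :" with a space before the colon, which is not a valid LDIF attribute name, so the record may be skipped entirely — consistent with the trace showing only a bind and an unbind and no modify request. olc values can also be addressed by their positional {index} prefix instead of spelling out the full value:

        dn: cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: {0}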

    Read the article

  • How well do (D)VCS cooperate with workflows involving several people editing files in the same directory?

    - by frankster
    Imagine that, because of tradition, your team's preferred development method involved several people with a shared login, editing files on a build server using vim. [Note that there are well-known issues here: only one person can edit a file at once, people go away from their desks and leave a file locked in vim, and system builds/restarts require everybody to stop debugging. That is not what the question is about.] If source control were introduced without changing the workflow, would there be much benefit? I am guessing that the commit history won't be much use, as it will contain everybody's changes in big lumps, so it wouldn't really be possible to rewind individual changes except at a very coarse level.

    Read the article

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi all, can anyone advise on problems with the following? I am planning to install the following setup on my estate. I have two sites that both have a large number of users. The goals are to keep things simple for the users and to have automatic failover above the database level. Our database will live at the primary site and be asynchronously mirrored to the secondary site, with manual failover procedures. The database generates sequential IDs, so distributing it is not an option. I plan to site IIS boxes at both sites with all of the business logic and heavy operations on them. The connections to SQL will be lightweight, and DB reads will be cached on IIS. On this layer I plan to use Windows Network Load Balancing and have the same IP or IPs across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure, and users can have one web address and automatically be load-balanced to their local IIS regardless of which site they are in. This is great, but our two sites are on different subnets, and as this will be one IP address carrying most of our traffic, we can't go broadcasting everything across the link between the sites. To solve this we plan to use anycast routing at the network layer to route traffic to the nearest box that is listening, as defined by the network load balancing. Has anyone used this setup before? Can anyone think of any issues with it? Also, some specifics I can't find anywhere at the moment: if my Windows box is assigned an IP and listening on that IP, but network load balancing is not accepting specific traffic, will anycast route away from that box? And can I anycast at a socket level?

    Read the article

  • Error handling in Informatica PowerCenter

    - by user223541
    I want to develop a mapping for the following scenario. I have one source, one target, and one error table. The target and error tables have all the fields present in the source table, but the data type of every field in the error table is varchar. The error table has no integrity, foreign-key, or other constraints, and it also has two extra fields: error number and error message. Now, when the workflow is executed, if there is an error while inserting a record, that record should be moved to the error table, and the database error code and error message should be logged in the error-number and error-message fields mentioned above. How can I develop such a mapping? Where can I find examples of such mappings?

    Read the article

  • Implementing 1-to-n mapping for an ORM in C++

    - by karan
    I am writing a project where I need to implement a stripped-down version of an ORM solution in C++. I am stuck implementing 1-n relationships. For instance, take the following classes:

        class A {
        };

        class B {
            std::list<A> _a_list;
        };

    I have provided load/save methods for loading from and saving to the db. Now, take the case of B. Say, for the following workflow:

    1. one entry from _a_list is removed;
    2. one entry in _a_list is modified;
    3. one entry is added to _a_list.

    Now I need to update the db using something like b.save(). So what would be the best way to save the changes, i.e., to identify the additions, deletions, and updates to _a_list?

    Read the article

  • How do you add folder level documentation to C# assemblies?

    - by Elijah
    Me: I'm a relative newcomer to the .NET platform.

    Problem: In Java, you can add package-level documentation to your project by creating a package-info.java or package.html file and storing it in the package folder. How do I add equivalent documentation to my project in C# using Visual Studio 2010?

    Background: I like to write documentation describing my motivations in the package/folder-level context of the source-code projects I work on. I have become very accustomed to this workflow in a variety of languages (specifically Java), and I believe it is a good way to document my projects.

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed, but it has been several months, so it may be time to revisit it: earlier discussion re: Splunk alternatives. For the record, Splunk rocks, but the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data in an async way
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    Read the article

  • Customise a control in Dynamics CRM

    - by webturner
    I've written code that can make a phone dial a number from a function call; that's done and dusted. What I would like to achieve is adding a Dial button to each phone-number field on the forms in Dynamics CRM. Eventually this could also create a new phone-call record, fill in the basic details, and show it to the user to enter notes and an outcome for the call, plus perhaps some other workflow bits to schedule the next call. Can I put a custom control on a standard form in place of the standard control? I'm assuming it would have to be an IFrame to an ASP.NET page that pulls in the record id and the field name, looks up the number to show in a text box, and passes the number to the DialNumber function. Hey presto... I assume it's not going to be that easy... Has anyone tried this before? What's the process, and what are the gotchas?

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return ["Hello World!"]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 768;
        }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }
            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;
                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
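
    For anyone reproducing the unix-socket variant mentioned at the end, a sketch (socket path hypothetical):

        gunicorn -w8 -k gevent --keep-alive 60 \
            --bind unix:/tmp/gunicorn.sock application:application
        # and in the nginx upstream block:
        #     upstream app_server { server unix:/tmp/gunicorn.sock fail_timeout=0; }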

    Read the article

  • FFmpeg convert video w/ dropped frames, out of sync

    - by preahkumpii
    I recorded a video using Bandicam with the MJPEG encoder to get the least amount of lag. Now I am trying to convert that massive file to an h264 AVI using FFmpeg. I know there are dropped frames in the video stream — more than 100 in the first two minutes — which I assume is simply because Bandicam dropped some when it couldn't keep up. So when I convert the file to h264, the video and audio are out of sync, and they drift further apart as the output video progresses. Here is my basic command in FFmpeg:

        ffmpeg -i "C:\...\input.avi" -vcodec libx264 -q 5 -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k "C:\...\output.avi"

    I have tried EVERYTHING I can think of, including:

    - -itsoffset [-]00:00:01 — tried this before and after the input file. It doesn't work because the streams get more and more out of sync as the video progresses.
    - -async 1 — doesn't work.
    - -vsync 1 — doesn't work, but it does show dropped frames being duplicated.
    - two inputs of the same file with mapping, using -map 0:0 -map 1:1 — doesn't work.

    The source plays just fine. Any ideas how to convert it with FFmpeg and keep the audio and video synced? Thanks.
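
    A hedged variant worth trying under the assumption that the drift comes from dropped frames: combine the options above so FFmpeg duplicates missing frames at a forced constant rate, and give -async a full sample rate so the audio is stretched to match rather than only aligned at the start. The -r 30 value assumes a 30 fps recording; match your source:

        ffmpeg -i "input.avi" -vsync 1 -r 30 -async 44100 \
               -vcodec libx264 -q 5 -acodec libmp3lame -ar 44100 -ac 2 -b:a 128k "output.avi"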

    Read the article

  • Develop Rails app on Mac with TextMate Tutorial

    - by Meltemi
    I've found this somewhat dated tutorial for developing a Rails application on Leopard with Xcode. I'm wondering if anyone knows of a more up-to-date (ideally Mac-based) tutorial that uses TextMate (or Xcode if it's indeed preferred, or even just the command line). TextMate is appealing to me, but I'm wondering how to work scripts like ruby script/generate controller into the workflow, or whether switching between the command line and TextMate is standard operating procedure... If it matters, we have Snow Leopard clients and Leopard servers at our disposal. Thanks.
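
    For context, the generator commands in question are just shell commands run from the project root (Rails 2.x-era syntax, names hypothetical), so mixing a terminal with TextMate is a common workflow:

        ruby script/generate controller Posts index show
        ruby script/server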

    Read the article

  • How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

    - by Ken S.
    I have a v4 NFS server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx webservers mount and serve to site visitors. These client servers connect to it via a read-only mount. Each of these RO servers displays this when I check the mounts:

        10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0)

    The NFS master's /etc/exports file contains entries like this for each server:

        /mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async)

    The problem I'm seeing is that these clients eventually utilize all the RAM on the NFS server and cause it to crash. If I do a watch free -m, I can see the used memory creep up until it's exhausted, and the free buffers/cache entry creep down to near zero, before the server eventually locks up, requiring a reboot. There is some sort of memory leak somewhere that is causing this, and the optimal solution would be to find it and fix it, but in the meantime I need a way for the NFS server to protect itself from connected clients using all its RAM. There must be some sort of setting that limits the resources the clients can use, but I can't seem to find it. I've tried adjusting the values for rsize and wsize, but they don't seem to help or be related. Thanks for any tips.
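
    A hedged mitigation sketch while the real leak is hunted down — these are stock Linux knobs rather than an NFS-specific per-client limit, and the values are illustrative only:

        sysctl -w vm.dirty_ratio=10              # cap dirty pages before writeback blocks
        sysctl -w vm.dirty_background_ratio=5    # start background writeback earlier
        # RPCNFSDCOUNT in /etc/default/nfs-kernel-server bounds nfsd threads on Ubuntu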

    Read the article

  • Change value before sending an email in Microsoft Dynamics CRM

    - by jk
    Hi, I have created an email template in Dynamics CRM 4. The template looks like the following:

        Dear {!User:Full Name} you received one assignment.

    The output of the email is:

        Dear John Smith you received one assignment.

    Now I need the value to be just "John". We do not have separate first-name and last-name attributes, so when we send an email it should take only the first word of the string. Can anybody please tell me how I can achieve this functionality without writing any workflow or plug-ins? Thanks.

    Read the article

  • Simple WCF question.

    - by bearrito
    So I am learning WCF, and I have run across an issue that I believe has to do with instance control/state, but I am not sure. The workflow is as follows, a basic client/server paradigm: the client calls a method RetrieveBusinessObjects(criteria), and the server calls the data layer and then puts the results in an IList on the server side. It does not return this list to the calling client. The client would then call a method, say DisplayBusinessObjects(), which would retrieve the IList from the server, serialize it, bring it across the wire, and display it. If I try this using the WCFTestClient, it works. If I run it from an actual client, I get back a BusinessObject[] of size 0, which to me indicates there were no objects to return. Is that a state-management issue, or am I missing something?

    Read the article
